> JavaScript just happened to be a nice fit to evented, non-blocking I/O, and a single-threaded event-loop based environment.
I keep seeing variations on this statement, and I have never figured it out. What, exactly, is it about JavaScript that makes it particularly appropriate for this paradigm?
The way I see it, the only abstraction JS provides that helps with this is first class functions, which are also present (and often richer and more robust) in many other languages.
Not just first class functions -- anonymous functions, and closures.
Python has first class functions, but not anonymous functions, since lambdas aren't full functions.
And Python doesn't have full closures, because you have to use "global" or "nonlocal" (in Python 3) to assign to a variable in an enclosing scope.
These features let you write concurrent state machines in JS without explicit state -- you basically use the stack as state.
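A minimal sketch of that idea (the async step is simulated with setImmediate; `fetchBoth` and `asyncStep` are made-up names): the "state" of a multi-step operation lives entirely in the closed-over locals, rather than in an explicit state object or state-machine class.

```javascript
// Hypothetical two-step async operation: the accumulated results live in
// closed-over locals, so there is no explicit state object to manage.
function fetchBoth(done) {
  var results = [];               // implicit "state" held by the closure
  asyncStep("first", function (a) {
    results.push(a);              // state advances without a state-machine class
    asyncStep("second", function (b) {
      results.push(b);
      done(results);
    });
  });
}

// Simulated non-blocking operation.
function asyncStep(value, cb) {
  setImmediate(function () { cb(value); });
}

fetchBoth(function (results) {
  console.log(results.join(","));
});
```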
Python was pretty early in terms of event loop programming for interpreted languages -- asyncore in the stdlib, and Twisted. But I agree that the node.js style is nicer than asyncore, Twisted, or Tornado. I prefer coroutines over state machines, but if you're going to write state machines, then doing it with anonymous closures is nicer than doing it "manually" with classes and so forth.
Tcl was even earlier, but Tcl is pretty foreign to most people, and was more of an embedded language than a scripting language. Pretty sure Tcl has full anonymous closures, but it is a bizarre language in some other ways.
I haven't seen the event loop style in Perl but I think you can do it. If someone has an example I'd like to see it. To me, Perl only barely qualifies as a real programming language, because you can't even name your function params -- at least as of when I used it, which was over 10 years ago.
And they're completely equivalent. Why would someone actually like the latter, except you're forced to use it because of the prevalence of JavaScript?
`asyncio` is not only easier to use than the traditional "manual" state machines with classes, but also much faster than something like `Twisted`. In my crappy local benchmark[1], it easily beat `Twisted` by a considerable margin. That may be down to flaws in my test code, but even so their performance is at least on par. I see no point in relying on callback hell, at least in Python, except for the lack of libraries that support asyncio, which makes me a little bit sad.
Yeah but node.js predates asyncio by about 5 years. Python was early to the game with asyncore and Twisted (circa early 2000's), but the async I/O support stagnated and left a big hole for node.js.
Part of the reason for stagnation was because you really need new language features -- i.e. yield/send, yield from, etc. JavaScript already had the necessary language features for its concurrency model.
That's a good point. I also think `asyncio` was way too late in the game. I just wanted to mention the feature that was missed from your comment, which I think is not fair to Python.
Coroutines are much better suited to event driven, non blocking IO code. Sorry for the shameless plug but I recently released my hobby project that uses Lua coroutines for nonblocking HTTP server: https://github.com/raksoras/luaw
Lua was quite pleasant to work with in this context!
One thing yield/await does is that it plays nicely with existing control-flow mechanisms in the language. You can put them inside a for loop, an if statement, a try-catch block, etc. With promises, all of those have to be reimplemented in a library and all your code needs to be converted to the "promisified" version of things.
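For example (a sketch using async/await with a made-up promise-returning `fetchItem` helper): the async calls sit inside an ordinary for loop and an ordinary try-catch, with no library combinators.

```javascript
// Hypothetical async helper -- stands in for any promise-returning API.
function fetchItem(id) {
  return Promise.resolve(id * 2);
}

// await composes with plain loops and try/catch directly.
async function fetchAll(ids) {
  var results = [];
  try {
    for (var i = 0; i < ids.length; i++) {
      results.push(await fetchItem(ids[i]));  // plain for loop, plain await
    }
  } catch (e) {
    return [];  // ordinary exception handling covers the async calls too
  }
  return results;
}
```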
The different syntax also prevents you from creating functions that can be reused by both sync and async code.
The big thing promises and other callback-based control flow libraries do is that they fix exception handling in async code (it's a PITA to add error handlers in every callback when writing async code by hand). They also let you avoid some of the awkward "pyramid of doom" nesting, but that's not really a killer feature because you can also achieve that by using tons of named functions.
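A sketch of the error-handling difference, with made-up `stepA`/`stepB` functions: with hand-rolled callbacks every step needs its own error branch, while a promise chain propagates a failure anywhere in the chain to a single `.catch`.

```javascript
// Hypothetical promise-returning steps.
function stepA() { return Promise.resolve(1); }
function stepB(x) { return Promise.resolve(x + 1); }

// Callback style: every level has to forward errors itself.
function withCallbacks(cb) {
  stepA().then(function (a) {
    stepB(a).then(function (b) { cb(null, b); },
                  function (err) { cb(err); });  // repeated per step
  }, function (err) { cb(err); });
}

// Promise style: one .catch at the end handles a failure from any step,
// and the chain stays flat.
function withPromises() {
  return stepA()
    .then(stepB)
    .catch(function (err) { return -1; });
}
```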
async/await are coming with ES7 and available today with BabelJS and other transpilers. Though not part of core, they rely on thenables/promises, which handle the reject/resolve which get composed with generators.
The required bits are in place... alternatively you can use co/koa, which uses generators along with yielding promises to act in a similar fashion. It works pretty well, and will only get better.
I do think promise and deferred are huge improvements to ergonomics, but still... I cannot completely love the code bloat (which is much smaller and easier to reason about than callback hell, of course) that is caused by those libraries. I much prefer Haskell's monadic interface or Python's `yield from`. But that may be my personal preference.
Grownups in charge of the lang are listening and they're doing their best to incorporate these features (async/yield) in the language as soon as possible.
On the other end of the spectrum, Ruby blocks (and all the attached machinery) feel to me like a much nicer abstraction to work with. Go's goroutines and channels show giving a language purpose-built concurrency primitives is a really nice model (I expect Erlang would be another good example of the same, but never tried it), and the monadic approach in Haskell makes it so that you could almost directly translate node.js code and end up with something far more readable.
TBH Python is a bit of an exception there. Most languages with first-class functions (especially more modern ones) will also come with anonymous functions that aren't intentionally crippled (like Python's lambda is) and will have more traditional lexical scoping instead of local-by-default.
Python isn't an exception among languages that people in industry actually use.
For "production grade" server software, which is driving the newfound interest in concurrency, the following languages cover 99% of code written in the last 20 years: C/C++/Java/PHP/Perl/Python/Ruby and the Microsoft Stack (C#, VB).
None of them have anonymous closures. (C# might, but it's also newer, and it's not the prevailing style of concurrency in any case.)
This is basically JavaScript's Scheme heritage showing through. Anonymous closures are old hat for people who went through a CS program teaching Lisp, but they're not by any means standard in industry.
One of the main reasons that none of the languages needed this feature is because the predominant paradigm for concurrency for the last 2 decades was threading, not state machines and callbacks.
C#'s had closures and anonymous functions for almost a decade, Java's got them properly now with Java 8, but you could always hack them with anonymous inner classes. C++ got them in C++11, etc. Hell, even in C, you could hack something resembling a closure with a struct and a function pointer.
There is no such thing as an "anonymous closure". There are only anonymous functions which may or may not create a closure. Early LISPs either had no way of creating closures, or did so through some sort of function or other mechanism. It wasn't until Scheme came around that lexical closures really became a thing in Lisp world.
Perl and PHP both have closures. In fact, Perl is closer to Scheme/Lisp than most languages (at least Perl doesn't screw up scoping like JavaScript, replace first-class functions with second-class-bastardizations like Ruby, or do whatever Python thinks it's doing with lambda... ugh)
#!/usr/bin/node
function main() {
    var a = 1;
    console.log(a);

    // named closure
    function foo() {
        a = a + 1;
    }
    foo();
    console.log(a);

    // anonymous closure
    (function() {
        a = a + 1;
    })();
    console.log(a);
}
main();
This. A thousand times, this. It's a really good model for something that is I/O bound because it is easy to reason about. This makes the model perfect for events in the browser (both DOM events and XHR).
It's not that Javascript is especially suited to this model, but thanks to the browser, it was already the most used implementation of this model.
I think this combined with the fact that JS engines were already relatively isolated from the browser itself and able to be embedded with other software (in the case of node.js libuv/libev) made it a really good option. require+npm added a lot more to the mix.
But it was the broad availability of mindshare of those developers who at least knew some JS that really kicked it over the top, given JS as a DSL for I/O bound applications.
Perhaps they're used to it, but are they really fluent in it? Because I keep seeing otherwise bright developers having to use abstraction upon abstraction to keep from screwing up event driven callbacks.
We're in something of a JS framework boom (or hell, if you wish) right now. These people are developing as if their shit libraries won't exist in 3 years. Hence, the total lack of documentation and ongoing maintenance from so many of them. I've already had to maintain code that used abandoned JS frameworks. I'm a little scared of what's around the corner here...
JS isn't even particularly good at evented, non-blocking I/O: chains of asynchronous calls become series of ever-more-indented function(){}s. Languages with coroutines (Lua) or first-class continuations (Scheme) can chain asynchronous calls in a more straightforward style.
There are dozens of very decent ways to handle flow control with JS - any time you see indented function chains it tells you this might be poor quality code. For better or worse you can choose a library or write your own - Async.JS, Promises, etc all deal with this. Some abstractions like Async.Auto handle branching paths (series -> parallel -> series) very elegantly.
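As a sketch of the pattern these libraries implement (a toy `waterfall` helper, not the real Async.JS API): each step passes its result forward to the next, and the series stays flat instead of nesting one level per step.

```javascript
// Toy "waterfall": run async steps in series, feeding each result to the
// next step, with one final callback for the result or the first error.
function waterfall(steps, done) {
  var i = 0;
  function next(err, value) {
    if (err || i === steps.length) return done(err, value);
    steps[i++](value, next);
  }
  next(null, undefined);
}

// Flat series of async steps -- no pyramid of indentation.
waterfall([
  function (_, cb) { setImmediate(cb, null, 2); },
  function (x, cb) { setImmediate(cb, null, x * 10); },
], function (err, result) {
  console.log(result);  // 20
});
```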
function handleRequest(req)
    local data = db.fetch("user")
    return render("template.html", {user = data})
end
And if db.fetch and render (both made up functions) are written using the coroutine paradigm, your code would automatically be evented and non-blocking, with no need for libraries to handle your callbacks, and it reads the same as it would if it were blocking.
In db.fetch, the "yield" keyword tells Lua it is about to do an async operation and should wait until it finishes before continuing to run Lua code; meanwhile something else runs instead. That makes it non-blocking, and you can run many of these Lua programs at once on a single thread, all of them concurrently.
One nice thing about Lua coroutines is that they are "stackful". You don't need to chain "yields" all the way up the call stack (like Python's "yield from"), so it's possible to create functions that work with both sync and async callbacks.
function mapList(xs, f)
    local ys = {}
    for i = 1, #xs do
        ys[i] = f(xs[i])
    end
    return ys
end

mapList({10, 20, 30}, function(x)
    return db.fetch(x)
end)
In JS you can't use async functions inside of Array.prototype.map. You need to use a separate async-aware method from your favorite async library instead.
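Concretely, the usual JS workaround is to map to promises and collect them with a separate async-aware step (sketch with a made-up `fetchUser` function):

```javascript
// Hypothetical async lookup.
function fetchUser(id) {
  return Promise.resolve({ id: id });
}

// map() itself can't wait, so it produces an array of pending promises...
var pending = [10, 20, 30].map(fetchUser);

// ...and Promise.all is the separate async-aware step that collects them.
Promise.all(pending).then(function (users) {
  console.log(users.length);  // 3
});
```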
I have used this idiom to great results in my project (https://github.com/raksoras/luaw). Basically, request:read() hooks up into libuv (node.js' excellent async IO library) event loop and then yields. Server is now free to run next coroutine. When socket underlying the first request is ready with data to read libuv event loop's callback gets fired and resumes original coroutine.
That's awesome. You've got me hooked. For Postgres database access, what library do you suggest? Also, I remember lua-nginx can also do non-blocking IO in blocking-style code. (http://wiki.nginx.org/HttpLuaModule, http://openresty.org) How does Luaw compare to that?
Right now it's just a HTTP server and REST framework. It's a very first release and I don't have any DB drivers for it yet- they would need to be non-blocking as you mentioned.
I have plans to write dbslayer like "access DB over REST" service as a companion to Luaw so that it can use any and all databases that have JDBC drivers available without having to write non-blocking driver specially for each new database. This kind of arrangement where DB connection pooling is abstracted out of application server itself has other advantages related to auto-scaling in cloud and red/black or "flip" code pushes at the cost of slightly more complex deployment.
All depends on how much spare time I actually get :(
With BabelJS, or the co modules (used with generators) you can get a very similar model...
app.use(function *(){
    var db = yield getDbConnection();
    var data = yield db.query("procname", this.request.query.param);
    this.body = JSON.stringify(data);
});
This of course assumes you have a db interface that uses Promises and are using koa for your web platform in a version of node that supports Generators, or are using something like BabelJS or traceur.
I prefer Common Lisp, but I can only dream about how wonderful Scheme in the browser would have been. JavaScript is hideous, simply hideous: it's a hack, piled atop a thousand compromises, wrapped up in a million curly brackets.
Every time I use JavaScript I imagine how good life could have been were it Scheme. Every time I have to use JSON I imagine how great life would have been were we using canonical S-expressions[1] instead. There's one good thing—and only one good thing—about JavaScript: it's incredibly well-deployed. As another commenter mentioned, JavaScript is an object lesson in path dependency.
In many ways, it's appropriate that it has 'Java' in the name: it's popular, but it's ugly. There are better languages; indeed, nearly every other non-Turing-tarpit language is better than either Java or JavaScript: Lua, Lisp, TCL, Python, Rebol, Erlang.
Lots of indented function calls is just poor design. You get the same thing in Scala when you start dealing with lots of Futures inappropriately.
Just choose a decent abstraction for callbacks. I prefer promises, but async.js does a good job as well.
The `flatMap` function on a Future in Scala might be built into the language, but it is implemented in Scala, and not significantly different (in my mind) from Bluebird being implemented in JavaScript.
> What, exactly, is it about JavaScript that makes it particularly appropriate for this paradigm?
V8.
And it's a pretty powerful language. C-like syntax (familiar), Lisp-like (powerful/flexible), it's everywhere and it's fast (thanks to V8). Really, if it weren't for Chrome, Node.js wouldn't be a thing, and JavaScript wouldn't be ruling the world.
Right, but V8 doesn't make the language itself particularly appropriate. (And calling it "lisp-like" is way over-reaching. It has first class, anonymous functions and closures, and that's it.)
To me, the node story seems to boil down to "V8 was a readily-available fast runtime and people who already knew JS flocked to it" — which is fair enough, but hardly an argument for how appropriate JS is, as a language, for this sort of programming model.
"V8 was a readily-available fast runtime and people who already knew JS flocked to it"
That's basically all there is to a language's success. We like to debate syntax and closures and continuations and type systems and immutability and concurrency models on programming language message boards, but realistically, nobody cares. The two questions they have when they encounter a new language are "Can I learn this in a weekend?" and "Can I build cool things that other people would actually want to use?"
If you look at the history of programming technologies that have "won", the list includes C++, Java, Javascript, PHP, C, Objective-C, and to some extent Perl, Python and Ruby. The first 4 of those are terrible from a language-design standpoint, but the two things they all had in common were a readily-available reasonably-fast runtime (except PHP, and that was "fast enough" for the things people use it for), and a familiar syntax. C, PHP, Javascript, and Objective-C also had the benefit of being the "native" language for a major application platform, which seems to be the other major critical success factor for a new language.
There's still a defense of languages here, and I'm not sure it's due. You chalk up the efficacy of V8 as a delivery mechanism to it carrying portable knowledge in (already knowing JS) and its usefulness as a general tool.
There is a possibility that this viewpoint doesn't leave any room for: the possibility that languages make nearly no difference. Perhaps people are savvy, and interested in more complex things, but the various languages and platforms don't bring any concrete value. Instead of being a function of how widely-ranging people's interests are, it's instead a question of whether anyone is actually making real tracks away from a center -- a center whose nebulous nature only permits the forging of false distinctions.
Rather than marking time by languages, perhaps it's more interesting to mark what we focused our language use on?
Don't forget Basic!
Lot of people learned to program with it, and did cool things with it, in the times of 8-bit computers.
And it is still used, from DarkBasic and similars, to MS' variations on VisualBasic (VBA and such).
You can make smart, high level languages like Haskell or Ceylon, but people will generally prefer dumb, easy to learn languages like Basic, PHP or JavaScript, despite their limitations.
Result: instead of making a nice car, well designed and looking good, they put big wheels and powerful engine on soap box cars! :-)
When it comes to performance LuaJIT beats V8 hands down. Its really down to the "its everywhere", IMO.
And I'm not a big fan of that theory that JS is Lisp-like. The only thing that's particularly lispy about JS is the first class function, and even then they are a bit fucked up because of the wonky function-local scoping rules and the "this" keyword.
Not really. He's just using that to note that JS is not really a Lispy language because of this. My choice for "why JS != Lisp" is how the Array functions return whatever you passed to them and not the array itself, so you can't write single-expression multi-modifications to an array. Unless you use the hellish Array.splice method. Which I'd argue is even worse.
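Concretely (assuming the parent means the mutating Array methods): `push` returns the new length rather than the array, so mutations don't chain in a single expression the way Lisp-style list operations do, while non-mutating methods like `concat` do chain.

```javascript
var xs = [1, 2];

// push mutates in place and returns the new length, not the array...
var result = xs.push(3);
console.log(result);        // 3 (a length, not [1, 2, 3])

// ...so mutations can't be chained in one expression; non-mutating
// methods like concat return a fresh array and therefore do chain.
var ys = [1, 2].concat(3).concat(4);
console.log(ys.join(","));  // 1,2,3,4
```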
Yes, JITs in WebKit and Gecko showed up around the same time as v8, or a little earlier even. And the WebKit JIT (Nitro) beat v8 on various benchmarks.
It's still possible v8 somehow sparked those other JITs, if before it was public the developers of WebKit and Gecko learned of v8 and stepped up their game. But, I don't know if that's true or not.
Anyhow, it's an impressive achievement that people still mention v8 as the reason JavaScript is fast, when it isn't the fastest today, and wasn't the first to be fast historically. Although, it is certainly a worthy VM, just one among several.
Almost every popular programing environment provides an incredibly unsound shared-memory multithreading library in the distribution, which winds up being heavily used by every other library--even sometimes built-in ones. Javascript in both the browser and not refrains from this, making it generally safe to use not-invented-here code.
In other languages, you could use libuv or tornado or aio or whatever, but it's like throwing away the entire ecosystem and using an obscure language anyway.
"What, exactly, is it about JavaScript that makes it particularly appropriate for this paradigm?"
Devs with experience writing interactions with the DOM had already convinced themselves that callbacks nested ten deep was a totes OK way to write code, "releasing Zalgo" and all.
In JS, you know that library code that you call almost certainly won't do blocking I/O.
If you're trying to do non-blocking async stuff in other languages, you have to hope/check that none of your code and none of your dependencies try to do blocking I/O. (Unless you're using something that does async under the covers like haskell or go)
JS is in every web browser. That's why. Number one. Period. Because there are plenty of other language syntaxes compatible with non-blocking mono-threaded EDA.