
Ahead-of-time compilation of a dynamic language with LLVM will scarcely be faster than an interpreter. This is a very common misconception. Good performance for dynamic languages requires dynamic techniques -- polymorphic inline caches, tracing, dynamic type inference, and so on.
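For illustration, a monomorphic inline cache (the single-entry cousin of a polymorphic one) can be sketched in a few lines of Python. This is a toy, purely illustrative: real VMs implement these per call site, in machine code, with multiple cache entries.

```python
def make_cached_call(method_name):
    # One cache per "call site": remembers the last receiver class
    # and the method resolved for it.
    cache = {"cls": None, "method": None}

    def call(obj, *args):
        cls = type(obj)
        if cache["cls"] is not cls:                # cache miss: full lookup
            cache["cls"] = cls
            cache["method"] = getattr(cls, method_name)
        return cache["method"](obj, *args)         # cache hit: direct call

    return call

upper = make_cached_call("upper")
print(upper("hello"))   # resolves str.upper once...
print(upper("world"))   # ...then reuses it without a lookup
```

The payoff in a real VM is that the cached fast path compiles down to a type check plus a direct call, skipping the dynamic method lookup entirely.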


So use a static language with type inference instead.

Most people don't know about these things because of the poor or non-existent education in computer science and the lack of any recognized qualifications, but that's no excuse. One day this branch of engineering will need to grow up.


Err, no thanks? I like my dynamically typed, interpreted + JIT-compiled language, thank you very much. I'm not ignorant of type-inferred languages (in fact I use one almost every day), I just don't like them as much.


Wait, you're both wrong.

It's perfectly possible to write a statically-typed language that looks exactly like Python or Perl or Ruby. It's just that nobody's done it yet.

So in the meantime, you have to pick between expressiveness and ease of development versus safety and speed. There's nothing in computer science that prevents us from having all four other than "it's hard and people are lazy".

Haskell and Go are good examples of good progress in this direction, though.


> It's perfectly possible to write a statically-typed language that looks exactly like Python or Perl or Ruby. It's just that nobody's done it yet

Really? You should definitely show us how.

I've been thinking about the issues of dynamic/static languages for a long time, and haven't yet thought of a unifying design (maybe I'm just stupid).

The big difference isn't type inference vs. a dynamic type system, it's the difference between compile time and run time. In Go and Haskell, classes are compiled. In Python, classes are created at runtime, so if you wanted type inference in such a language, it would have to be performed at runtime... Also, proper typing of OO languages is very complicated, see Scala (e.g. sometimes you want a method to return the type of the receiver object, even when the method is inherited...).
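The runtime-classes point can be made concrete with the built-in type() constructor, which builds a class from data no ahead-of-time compiler could see. A minimal sketch:

```python
import random

# In Python a class is an ordinary object created at runtime; here even
# its name is only decided while the program is running.
name = random.choice(["Foo", "Bar"])
Cls = type(name, (object,), {
    "greet": lambda self: f"hi from {type(self).__name__}",
})

obj = Cls()
print(obj.greet())
```

Nothing about Cls (its name, bases, or methods) exists before this code executes, which is exactly what makes ahead-of-time type inference so hard for full Python.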


Maybe it could look like Python, but for damned sure it wouldn't have the same semantics, unless by statically typed, you really mean "trivially typed".


Why? Python already requires type annotations. For example, if you have a string "42", you can't just add 1 to it to get the number 43, you have to say int("42") + 1 or it will throw a type error.
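A minimal demonstration of that behavior in CPython: the explicit conversion succeeds, while the implicit mix is rejected, though only at run time.

```python
# The conversion has to be spelled out; Python never implicitly
# coerces str to int.
assert int("42") + 1 == 43

# Mixing str and int directly raises at run time.
try:
    "42" + 1
except TypeError as e:
    print("TypeError:", e)
```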


It throws a type error at run time, not at compile time.

To be a statically typed language, you would need to throw the error at compile time.

How do you raise (or not raise) an error for this code at compile time while still making it feel like a dynamically typed language?

    x = 42 if random() > 0.5 else "42"
    y = x + 1


The type of x is Int|String

The type of (+ 1) is Int -> Int

The type of y is Int

Because x is of type Int|String and it's passed to a function that can only operate on Int, the program fails to compile.


I highly recommend you read "Localized Type Inference of Atomic Types in Python" by Brett Cannon (2005); it shines a light on just how much this doesn't work.


So when you said, "looks just like Python," you literally meant, "lexically looks like Python," because you aren't building a language that works just like Python.


That is not a type annotation. Python is simply strongly typed, which has nothing to do with whether the language is dynamically or statically typed.


PyPy's RPython language is a subset of Python that can be compiled. Unfortunately, RPython was designed to bootstrap PyPy, not for writing general-purpose applications.

Python 3 added optional type annotations for function parameters and return values. Unfortunately, Python does not use these annotations for compile-time or run-time type checking; they are just for documentation or for tools that want to analyze them.
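A small sketch of that behavior (the add() function is mine, for illustration): CPython stores the annotations but never enforces them.

```python
def add(a: int, b: int) -> int:
    return a + b

# The int annotations are ignored at run time: this "succeeds"
# with two strings, no error raised.
print(add("foo", "bar"))

# The annotations are merely stored for tools to inspect.
print(add.__annotations__)
```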



