Worth noting that this implementation doesn't incorporate a tracing garbage collector. Any reference cycles in the interpreted Lisp code will result in memory leaks.
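For the curious, this is what such a leak looks like on the Rust side. This is a generic `Rc` cycle sketch, not the interpreter's actual value representation:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A cons-like cell whose tail can point back into the list, forming a cycle.
// (Illustrative stand-in for however the interpreter represents values.)
struct Cell {
    value: i32,
    next: RefCell<Option<Rc<Cell>>>,
}

fn main() {
    let a = Rc::new(Cell { value: 1, next: RefCell::new(None) });
    let b = Rc::new(Cell { value: 2, next: RefCell::new(Some(a.clone())) });
    // Close the cycle: a -> b -> a.
    *a.next.borrow_mut() = Some(b.clone());

    // Each cell is now kept alive by the other: when the locals go out of
    // scope, both counts drop to 1, never 0, so neither cell is freed.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
    let _ = a.value + b.value; // silence unused-field lint
}
```

Without a tracing collector (or `Weak` back-references), nothing ever breaks that cycle.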
Your language is the first that I know of that compiles to Rust. Can you talk about that? Does the generated code have to pass the borrow checker or did you design around it?
After translating the source code to an AST, I walk through the AST to validate the code as much as possible.
The generated Rust code should be valid Rust that compiles without errors. There are only two exceptions to that rule:
---
Function calls:
I haven't found a way to verify that the identifier used resolves to a function with the correct signature (it would require some sort of partial evaluation and static type checking that I don't have).
Rust FFI:
Letlang will have the ability to call native Rust functions (allowing me to rely on the Rust ecosystem to implement the stdlib), but it expects functions to have a specific signature.
---
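To illustrate what I mean by a specific signature, here is a sketch. The actual ABI is not settled, and every name and type here is made up for the example:

```rust
// Hypothetical illustration only: Letlang's real value type and native
// function signature are assumptions here, not the actual design.

/// A dynamically-typed runtime value (stand-in for Letlang's value type).
#[derive(Debug, PartialEq)]
enum Value {
    Int(i64),
    Str(String),
}

/// The shape the compiler could expect every native Rust function to have:
/// a slice of arguments in, a fallible value out.
type NativeFn = fn(&[Value]) -> Result<Value, String>;

/// A stdlib function implemented in native Rust, conforming to that shape.
fn ll_string_length(args: &[Value]) -> Result<Value, String> {
    match args {
        [Value::Str(s)] => Ok(Value::Int(s.chars().count() as i64)),
        _ => Err("string_length: expected one string argument".into()),
    }
}

fn main() {
    // This coercion only compiles if the signature matches, which is
    // exactly the property the generated code relies on.
    let f: NativeFn = ll_string_length;
    assert_eq!(f(&[Value::Str("héllo".into())]), Ok(Value::Int(5)));
    println!("ok");
}
```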
Those are the two cases where `cargo` can report an error. So I (plan to) include source mapping comments in the generated Rust code. After getting the output of `cargo` as JSON, I can identify the error type and location, then, using the source mapping info, pinpoint which AST node generated the wrong code.
On a semi-unrelated note: Every Letlang module is a single Rust crate. The compiler generates a cargo workspace, and a cargo project in that workspace for every module. It makes dependency management (and my life) easier.
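Concretely, the generated workspace boils down to something like this (module and path names here are just for illustration):

```toml
# <workspace root>/Cargo.toml, generated by the compiler
[workspace]
members = [
    "modules/main",    # one crate per Letlang module
    "modules/std_io",  # (names illustrative)
]
```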
It's funny how people are intrigued by my choice of targeting Rust, while other compilers target C, JavaScript, or other higher-level languages :P
But I guess it's because I use the term "compiler" and not "transpiler". Hint: I hate the word "transpiler", which is just a synonym for "compiler" IMHO.
I am not questioning it as in casting doubt or making you defend it; I want to learn from it. Targeting a rigorous, safe language as the output of a compiler is wonderful. You get no such guarantees when you target C or JS.
By targeting Rust, it also makes it natural to embed inside of a Rust macro and weave it into Rust like `inline_python` does.
There was a position blog post a while ago arguing that language designers should use Rust as their compilation target, but a quick search doesn't turn it up.
> New programming languages with a system-level compile target should choose Rust over LLVM. Targeting Rust can give new languages free package management, a type system, and memory safety while not imposing too many opinions on the language's runtime. With more work on languages, tooling, and Rust compiler development, we can create an ecosystem of beautifully interoperable programming languages.
IMHO, having written a Lisp-like DSL that is heavily used in our product, the beauty of Lisp does not reside in spending a week writing a bug-free parser.
It actually resides in the regularity and leanness of the resulting AST, which makes it very easy to add syntax, forms, builtins, etc., and thus to iterate reliably and quickly on emerging use cases and requirements, virtually without ever having to worry about breaking the grammar.
Also, it's pretty easy to learn for end-users, even those not overly familiar with programming.
The benefit of parser combinators is that they allow you to write "tree-shaped code" to parse a syntax that doesn't map cleanly to a tree, e.g. a C-style syntax.
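For readers unfamiliar with the idea, here is a minimal hand-rolled sketch using only the standard library (in practice you would reach for a crate like `nom`); the grammar and helper names are mine, not from the article:

```rust
// A parser here is any function from input to Option<(remaining_input, value)>.
// The point: the parsing code is "tree shaped" and mirrors the grammar.

/// Parser for a single expected character.
fn char_p(expected: char) -> impl Fn(&str) -> Option<(&str, char)> {
    move |input| {
        let mut chars = input.chars();
        match chars.next() {
            Some(c) if c == expected => Some((chars.as_str(), c)),
            _ => None,
        }
    }
}

/// Parser for a run of ASCII digits, yielding the parsed integer.
fn digits(input: &str) -> Option<(&str, i64)> {
    let end = input
        .find(|c: char| !c.is_ascii_digit())
        .unwrap_or(input.len());
    if end == 0 {
        return None;
    }
    Some((&input[end..], input[..end].parse().ok()?))
}

// Grammar: expr := digits | '(' expr ')'
// The function body reads exactly like the production it implements.
fn expr(input: &str) -> Option<(&str, i64)> {
    digits(input).or_else(|| {
        let (rest, _) = char_p('(')(input)?;
        let (rest, value) = expr(rest)?;
        let (rest, _) = char_p(')')(rest)?;
        Some((rest, value))
    })
}

fn main() {
    assert_eq!(expr("((42))"), Some(("", 42)));
    println!("ok");
}
```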
My argument was that if the article wanted to show off the rust parser combinators, another syntax would have done the job better.
And to be fair, reader macros are a pretty contentious feature in themselves, as they can be used for great evil. I was mainly making the comment to be educational. Your work here is impressive regardless.