Hacker News | g00z's comments

These are a blast. I went through a phase in high school where I exclusively played '90s CRPGs. There are some real gems that find a unique playstyle, with tons of freedom thanks to how low-fidelity the games are, while still being visually engaging and beautiful. Definitely check out Fallout 2 if you haven't tried it yet; it's one of my favorites!


The GT people don't like boxing it into any specific category. That isn't great for new users, but it makes sense once you use it.

Because it's really not any one thing other than an environment built from the ground up for building highly explainable systems, including itself. Think of it as a "meta-tool", a tool for building tools, similar to how an operating system is a piece of software that makes writing other software easier.

So naturally this type of workflow lends itself to data analysis. But it's no less applicable to building p2p networks or working with codebases in other languages.

Regarding sharing code, it's actually really straightforward. Your classes aren't stuck in images; they're normally stored as plain-text files and committed to git. The library story is arguably better than in most other languages because of how flexible Smalltalk is.


I would add one thing that makes GT very different from other tools and very hard to recreate: these tools are ACTUAL objects, and the things you see are also ACTUAL objects, not just dummy representations of them as in other dataviz tools like plotting libraries or shell scripts.

This means your tools and visualizations are just context-specific views of your objects. You aren't limited in how these tools can interact with said objects, because you are never working with static data; it's always the actual objects.
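A rough Python analogy of that idea (the class and view names here are my own for illustration, not GT's): a view is just a method over the live object, so it is recomputed from current state every time and can never drift out of sync with the data.

```python
class Order:
    """A plain domain object; the 'view' below reads it live."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def summary_view(self):
        # Context-specific view: derived from the live object on
        # every call, never a cached snapshot of old data.
        total = sum(price for _, price in self.items)
        return f"{len(self.items)} items, total {total}"

order = Order()
order.add("coffee", 3)
print(order.summary_view())  # 1 items, total 3
order.add("bagel", 2)
print(order.summary_view())  # 2 items, total 5
```

The second call reflects the mutation automatically, which is the point: the view interacts with the object, not with a static export of it.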

It's hard to put into words, but it's similar to the difference between println debugging and a Lisp REPL or Smalltalk debugger. They technically do the same thing, but the actual implementation makes a world of difference.


Actually, it wouldn't be difficult to add similar view protocols to Python objects (I used GT extensively a couple of years ago). Pretty much everything is possible, but the live debugger and driller would be really difficult to replicate, and that's where GT really shines for me. Alas, it was just too much work to properly bridge it with Python, where the majority of my work lies, and GT becomes overwhelmed when passed a fraction of the data Python handles with ease.
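A sketch of what such a view protocol could look like on the Python side (the registry, decorator, and function names here are all hypothetical, not GT's actual bridge API):

```python
# Hypothetical view registry: objects declare named views,
# and an inspector pane would pick one to render.
VIEWS = {}

def view(name):
    """Decorator registering a method as an inspector view."""
    def wrap(fn):
        cls_name = fn.__qualname__.split(".")[0]
        VIEWS.setdefault(cls_name, {})[name] = fn
        return fn
    return wrap

class Peer:
    def __init__(self, addr, latency_ms):
        self.addr = addr
        self.latency_ms = latency_ms

    @view("summary")
    def summary(self):
        return f"{self.addr} ({self.latency_ms} ms)"

    @view("raw")
    def raw(self):
        return vars(self)

def inspect_object(obj, view_name):
    """What an inspector might call to render an object's view."""
    return VIEWS[type(obj).__name__][view_name](obj)

p = Peer("10.0.0.7:4001", 42)
print(inspect_object(p, "summary"))  # 10.0.0.7:4001 (42 ms)
```

Simple contextual views like this are easy to bolt on; the hard part the comment describes (a live debugger working across the bridge) is a different order of effort.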


Simple views, sure, but tools like the driller or debugger are great examples of what I'm trying to highlight when I say that having the views work over actual objects is really important.

Because if the graphical stack weren't implemented as Smalltalk objects, you couldn't build tools like the driller or debugger; they would have to be implemented as a secondary piece of software that loses the original context.

For example, I built a custom tool for myself when I was working on a p2p network and had a section of the codebase with some non-obvious control flow, since it handled multiple different p2p networks at the same time. Normally this is where you include a diagram in the docs, but in about an hour I built a custom code editor for the class that visualized all the control flow and explained the cases in a flow diagram, simply by introspecting on the methods defined in the class. And this tool never fell out of sync like a static diagram would, since it wasn't hardcoded by me. From that point on, I worked within this tool whenever handling anything related to this code.
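The key trick is that the tool derives its diagram from the class itself via introspection, so it can't go stale. A toy Python sketch of the same idea (the `handle_*` naming convention and example class are invented for illustration):

```python
import inspect

class SwarmHandler:
    """Toy class with one handler method per p2p message type."""

    def handle_ping(self, msg):
        """Reply with pong."""
        return "pong"

    def handle_peer_list(self, msg):
        """Merge peers into the routing table."""
        return "merged"

def flow_diagram(cls):
    """Build a text 'diagram' of the message flow by introspecting
    the handle_* methods, so it always reflects the current code."""
    lines = []
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("handle_"):
            msg_type = name[len("handle_"):]
            doc = (fn.__doc__ or "").strip()
            lines.append(f"{msg_type} -> {name}: {doc}")
    return "\n".join(sorted(lines))

print(flow_diagram(SwarmHandler))
```

Add or rename a handler and the "diagram" updates on the next call, which is why such a tool never falls out of sync the way a hand-drawn one does.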

And fwiw, the Python story was pretty seamless in my usage a few months ago. I was able to integrate and use Python libraries in this project without much hassle.


Over the last couple of years we added a reasonably extensive infrastructure for working with Python from within GT. You can define contextual inspector views both in GT and in Python for live Python objects, for example. There is also a debugger for Python.

GT is now also a distributed Smalltalk system. We use it in production settings to compute large jobs on large data sets :)


The example of Electric is really good. It's something that would require a bonkers amount of effort to implement in TS/JS, but 15 contributors could build it in ClojureScript.

And I think the core thing that makes this possible is that the design decisions in Lisp optimize for giving expressive power to the user, because as a language designer I cannot perfectly plan what you will build.

Which in turn enables the users to build things that would require crazy effort in more traditional languages.

And yet, most programmers have an irrational fear of built-in metaprogramming because it "makes code hard to understand". But seemingly nobody is afraid of bad metaprogramming, like code generation, which I would argue is more difficult to understand.
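To make the contrast concrete, here is a small Python sketch (my own example, not from the thread): the same record type built once with in-language metaprogramming and once by generating and `exec`ing source text. Both work, but the generated version has no inspectable definition site.

```python
# Built-in metaprogramming: the logic lives in ordinary,
# debuggable code (here, type() called at runtime).
def make_record(name, fields):
    def __init__(self, **kw):
        for f in fields:
            setattr(self, f, kw[f])
    return type(name, (), {"__init__": __init__, "fields": fields})

Point = make_record("Point", ["x", "y"])

# Code generation: build source text and exec it. The resulting
# class exists, but you can't step into how it was produced.
src = (
    "class Point2:\n"
    "    def __init__(self, x, y):\n"
    "        self.x = x\n"
    "        self.y = y\n"
)
ns = {}
exec(src, ns)
Point2 = ns["Point2"]

p = Point(x=1, y=2)
q = Point2(3, 4)
print(p.x, q.y)  # 1 4
```

The first form stays within the language's normal tooling; the second is the "bad metaprogramming" the comment describes, where the program that writes the program is opaque to everything downstream.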

Seemingly, any sufficiently hard problem requires some new language constructs to discuss it in a way that makes sense. A good example of this is React. It's a framework in JS, but when using React you aren't really writing normal JS; you are writing React. This is magnified when using JSX, which few people complain about, as it makes writing React code much more enjoyable.


> And yet, most programmers have an irrational fear of built-in metaprogramming because it "makes code hard to understand". But seemingly nobody is afraid of bad metaprogramming, like code generation, which I would argue is more difficult to understand.

The difference here is that "bad metaprogramming" happens when the people writing thorny libraries and frameworks clamor for metaprogramming until the language implementers shoehorn in a subpar implementation. Those people know how important it is to programmatically generate and modify code, because they had problems that just couldn't be solved without it; the other language users never see it (very few programmers in most non-Lisp ecosystems are curious about the code of the libraries they use), and so don't care whether or not it exists.

On the other hand, a language with pervasive metaprogramming will have cases where you could do a task without that feature but can do it much more easily with it. That means that among the programmers using the language, everyone in the subset of "programmers who like to minimize what they need to learn as much as possible" will be exposed to code using metaprogramming.

Most of these engineers will not have the prior experience to understand that "ok, we need to learn metaprogramming either way because it's essential to many important coding tasks and so has to be in the language; it's not an increase in learning/comprehension difficulty to also have it in code that doesn't strictly require it". As such, their reaction is simply "why do I have to learn how to work around this extra thing?! Get rid of it, now!!!".


I'm curious if these problems are more about flexibility.

Because I find that with any well-designed and flexible program in a statically typed language, you end up with the same "tube of something" as in your dynamic-language examples.

Take any large generic framework in Rust: the types are extremely generic, dispatching on even more generic traits. It's hardly a tag that says "blood, to heart" unless the code is highly coupled and working in one area.
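The same effect shows up in Python's type hints, for what it's worth: in a sufficiently generic piece of framework code, the annotations say only that something goes in and something comes out. A minimal sketch (my own example):

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def pipeline(stage: Callable[[T], U], items: Iterable[T]) -> list[U]:
    """A fully generic 'tube': the signature promises nothing
    concrete about what flows through it."""
    return [stage(x) for x in items]

# Only at the call site does the tube get concrete contents.
print(pipeline(str.upper, ["ab", "cd"]))  # ['AB', 'CD']
```

The types are sound, but reading `pipeline` alone tells you no more about the payload than an untyped version would; the "blood, to heart" label only appears where the generic code is instantiated.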


> generic framework

Yet at some point, the template is instantiated--in the text!--and you can fill in the blank by reading instead of having to do a runtime probe. For large systems that have a lot of preconditions, the "runtime probe" approach can be a big fat time-waster (i.e., coaxing the program into a state where something is even in the tube).


But do you really think the runtime probe is less effective in an environment where you have a good way of interacting with the program at runtime? Or is it just bad tools tainting the experience?

Because I find that in a good dynamic environment, it's easier to figure out how to use something complex than to try to uncover how and why the types line up.

The absolute extreme is something like Smalltalk, where you just guess how it should be used and then fix it in the debugger until it works.


> But do you really think the runtime probe is less effective in an environment where you have a good way of interacting with the program at runtime?

No matter the tool, it will be very difficult to try out all code paths to figure out what can be in that tube. The difficulty scales with the size / age of the application.

Take your typical 20-year-old enterprise application. Nobody from the original team is around. Whole generations of programmers have worked on the project, each with their own favorite patterns, code style, and level of diligence. There's a massive number of features, half of which you don't even know about. Test coverage is spotty at best. When you see some common function triggered from hundreds of places, it makes a huge difference in your ability to reason about it whether or not you have types on the data being processed.
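A small Python illustration of that difference (my own example, using float values chosen to be exact in binary): with a bare dict you have to trace call sites to learn what the "tube" carries, while a dataclass states the shape once, in the text.

```python
from dataclasses import dataclass

# Untyped: to know what keys 'record' must carry, you have to
# trace the hundreds of call sites feeding this function.
def process(record):
    return record["amount"] * record["rate"]

# Typed: the shape of the data is declared once, readably.
@dataclass
class Invoice:
    amount: float
    rate: float

def process_typed(inv: Invoice) -> float:
    return inv.amount * inv.rate

print(process({"amount": 100.0, "rate": 1.5}))  # 150.0
print(process_typed(Invoice(100.0, 1.5)))       # 150.0
```

Both functions compute the same thing, but only the second can be understood (and checked by tooling) without probing the running program.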



