> What SVN didn't fix was the fundamental centralized model. You could not commit without a network connection. The repository was still a single server. "Working offline" meant "reading-only", you could browse history but not record any new work, and the day the server was down was the day the whole team waited.
FWIW this is how most projects work anyway. And IMO Subversion is still the best VCS when you have a lot of large binaries (the various extensions to Git like git-lfs are just hacks that graft a separate half-baked version control system onto it and add further complexity to an already annoyingly complex system). I remember working at a gamedev company in the early 2010s and out of curiosity i tried to put everything in the 250GB Perforce workspace into a git repository, only for git to choke and die before it managed to do anything. In comparison, ~5-6 years earlier i worked briefly at a game porting studio where every single game they had ported (which i'm almost certain went all the way back to the 90s), including all data and source (and these were AAA games, not tiny indie games), fit into a single Subversion server.
Unfortunately Subversion lost the VCS fashion wars and nowadays it barely seems to have any development. I still use it for a few projects where i do have a lot of binary stuff, but most new things are in Git. I also have a bunch of stuff in Fossil (which also did handle binary files better than Git when i tested it years ago, though not as good as Subversion or P4) but nowadays i convert them to Git when i need to share because, well, pretty much everyone expects Git (and projects such as Codeberg and Forgejo make sharing and self-hosting easier).
Ironically the "fundamental issue" mentioned above was solved not too long ago in Subversion: nowadays you can have multiple "changelists", and each changelist is backed by a full (hidden) SVN repository of its own, allowing you to make commits locally (as "shelving") and then push to the remote when you're done. AFAIK multiple changelists can also coexist (unlike Git, where you can only work on one branch at a time). Unfortunately, since Subversion is basically barely held together these days, only the command-line UI provides that functionality (at least among FLOSS clients) and even TortoiseSVN didn't seem to support it last time i checked.
You could always keep a local svn repository and commit to that, if you really wanted to commit without connectivity. But in practice most people don't, as evidenced by the success of GitHub, which grinds many development processes to a halt every time it is down.
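For anyone who hasn't tried it, the local-repository workflow really is just a couple of commands - a sketch assuming the `svnadmin` and `svn` command-line tools are installed (all paths here are made up):

```shell
set -e
work=$(mktemp -d)

# Create a private repository on local disk; file:// access needs
# no server process and no network connection at all.
svnadmin create "$work/repo"
svn checkout -q "file://$work/repo" "$work/wc"

cd "$work/wc"
echo 'written on a plane' > notes.txt
svn add -q notes.txt
svn commit -q -m "recorded entirely offline"

# The commit is now in the local history as r1.
svn log -q "file://$work/repo"
```

Replaying that history into a central server later is the part svn never automated, which is where tools like svk came in.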
It also ignores svk, which is (was?) a popular add-on to svn that provided a convenient way to do this and replay all the commits to the central svn repository when connectivity allowed.
> which is really integrated into a programming language or part of the standard library.
Not a programming language, but the programming language: C. The toolkit needs to be available as a C API because that lets it a) provide a stable API and ABI and b) provide bindings for multiple other languages without having to jump through hoops, especially for other compiled languages (binding Qt to Python might be easy, but binding it to something like, e.g., Free Pascal requires an intermediate C++ library that exposes a C API which can itself be used from Free Pascal - and applications need to distribute that library too).
Unfortunately the vast majority of GUI toolkits are not written in C but in C++ or some other language that makes using them from anything other than the developers' favorite language a pain. And really the only mainstream toolkit that is written in C is GTK, which has a complete disregard for proper backwards compatibility.
(You may think that a library only needs to expose a C API but can be written in any language. However, for something that doesn't have widespread availability you may want to link to it statically, and that can be an issue with anything outside C/C++. As an example, i recently tried to make an FLTK backend[0] for Lazarus, since FLTK is a C++ library that the devs encourage linking statically, and that would allow creating GUI programs that are self-contained binaries... but statically linking a C++ library -for which i first had to make a C wrapper- from a non-C/C++ language turns out to be a PITA under Linux if you are not using g++, as g++ passes a bunch of magic flags to the linker, and impossible under Windows - or at least under msys2 - so i gave up.)
I like that you also mentioned backwards compatibility and ABI stability, two very important and valid points. To this day there is the joke that the best way to ship a binary GUI app for Linux is to target the Win32 API and run it via Wine, if you care about a stable platform. ;-)
> You can't even display an image in a terminal without a non-standard terminal like Kitty or iTerm.
Sixels are supported by many terminals[0] (several of the terminals mentioned that do not support them are based on GNOME VTE for which support is in the works and, based on the bug tracker comments, it seems to be almost done).
This includes xterm which is probably the most "standard terminal" on X11 you can get.
TBH sometimes i feel like i'm "emotionally attached" to Mistral's models because i always end up using them :-P. However that is because, as you wrote, their small models (i only use local stuff) are very strong. In fact i was trying Qwen3.6 27B recently and while it is nice that it can do tool calls during the reasoning process (i had it confirm its thoughts by writing Python code), it often confused itself during reasoning (regardless of tool calls), ending up in loops where it questions itself over and over endlessly.
Devstral Small 2 however just works, for the most part. Qwen3.6 27B can probably handle more complex tasks (when i asked it as a test to write a function that checks for collision between two AABBs in C and gave it a tool to call Python code for confirmation, it actually wrote a Python script that writes C code with the tests, then calls GCC to compile the C code and runs the binary to run the tests, which is something Mistral's small models couldn't do), but i always felt i can just leave DS2 doing stuff in the background (or while i'm doing something else) and it'll produce something relatively useful, whereas in the little time i spent with Qwen3.6 27B it felt more "unstable" (and much slower, both because of literally slower inference and because of endless reams of text).
Recently i also started using Ministral 3B and 14B - these can do some reasoning too, and for very simple stuff Ministral 3B is very fast (i actually didn't expect a 3B model to be anything more than a novelty). They also have some vision abilities (though they're quite mediocre at vision so i haven't found much use for this - passing something through GLM-OCR to extract all the text and feeding that to another model feels more practical).
Also, as i wrote in another comment, every Mistral model i've tried never questioned me, which i certainly prefer.
FWIW personally i prefer this. When i tried Qwen3.6 and asked it a few questions, while it did respond, it was ADAMANT that i should do something else when i really wanted an answer to the question i asked. It felt like when you search for something and a stackoverflow answer about what you searched for comes up, and the most upvoted answer is about using/doing something else - when you want a specific answer to that specific question, not something else.
Meanwhile Devstral Small 2 just answers the damn question.
I don't want to have to convince my computer to do what i want it to do, i want it to do what i ask of it.
> It felt like when you search something and a stackoverflow answer about what you search for comes up and the most upvoted answer is about using/doing something else - when you want a specific answer to that specific question, not something else.
Don't you think there's usually a good reason for this? Whenever this happened to me, the problem was my ignorance.
I think there is a reason why people do that: trying to steer -those they consider- newbies away from patterns they consider bad, but at the same time this second-guessing can be annoying when you know what you want to do (especially when the original question isn't actually answered yet still comes up in search engine results...).
I can't say if it is a good reason in general, perhaps it is, but it certainly is something i personally find annoying. I think answers should provide an answer to the question asked and then, after that answer was given, they could also give pointers for whatever they consider a better approach and why - this is important, IMO, for a public forum where people of all backgrounds and goals can read the same stuff.
But either way, LLMs IMO should do/provide what they are asked without trying to second guess the user (or at least, there should be LLMs that act like that).
FWIW i haven't used Claude or any other cloud-based LLM, only what i can run on my PC, so it could be that Claude is smart enough to follow the user's instructions, keep the equivalent of a mental state of what the user seems to want to do and only push back when it really makes sense whereas a small local LLM is too stupid to judge all that and Qwen3.6 errs on the side of being annoyingly cautious while Devstral Small 2 errs on the side of trusting the user being really okay with blowing their toes off :-P. As i wrote in my original reply, this is my personal preference and i prefer the LLM to just do what i ask.
GP most likely meant it was a solved problem from the user's perspective. Autotools might be "horrifying" for the programmer (IMO that is an exaggeration, AT isn't that bad, with the potential exception of libtool) but it provides by far the best UX for the user (one that knows how to use a command line and what building from source is, not your average grandma :-P) compared to everything else out there.
However one important aspect here is that there is no reason for Autotools to be the thing that provides the `./configure && make && make install` UX - the GNU standards (not sure of the exact name) describe the UX itself without mentioning anything about Autotools, so any other approach to implementing it would be just as valid. However, in practice, whenever you find a configure script in the wild it is either Autotools or a hand-made one (that more often than not misses some of the GNU standard stuff).
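To illustrate that the UX is independent of Autotools, here is a toy hand-written `configure` (a sketch; a real script following the GNU conventions would also handle `--bindir`, `CC=`, feature checks, and so on):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# A hand-written ./configure: no Autotools involved. It just parses
# --prefix and records the result in a fragment the Makefile includes.
cat > configure <<'EOF'
#!/bin/sh
prefix=/usr/local
for arg in "$@"; do
    case $arg in
        --prefix=*) prefix=${arg#--prefix=} ;;
        *) echo "warning: ignoring unknown option $arg" >&2 ;;
    esac
done
printf 'PREFIX = %s\n' "$prefix" > config.mk
echo "configured with prefix=$prefix"
EOF
chmod +x configure

./configure --prefix=/opt/demo
cat config.mk
```

From the user's point of view this behaves exactly like an Autotools-generated script, which is the point: the contract is the command-line interface, not the tool that generated it.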
Autotools can be daunting if you plan to write code that’s portable to Ultrix, IRIX, Apollo’s UNIX whose name I forgot, NonStop, UNICOS, OpenVMS, z/OS, macOS, and modern Linux.
Nowadays we don’t bother supporting dozens of platforms. Even Windows is something we can push aside and suggest WSL if you really need to run it under Windows.
And I even try to make sure my code runs correctly on z/OS (which IS a UNIX).
Amusingly, Free Vision (the Free Pascal version of Turbo Vision) is based on a manual translation of the C++ version, because that was released into the public domain at some point and someone ported it back from C++ to Object/Free Pascal.
Interesting. If I remember correctly the source code was available (need to check my old disks), however most likely the licence would forbid that anyway.
IIRC Borland released the C++ version specifically as PD later on their FTP server, it isn't based on the version from Turbo C++ physical releases. The history is (very briefly) mentioned in the Free Vision wiki page at the FPC wiki[0] (note that the wiki needs cleanup, e.g. it mentions 64bit clean support as a todo item but FV has been 64bit clean for a very long time now). It also mentions that somewhere between the C++ version and the Pascal conversion, TV/FV was converted to use graphics instead of text mode and it was ported back to text mode -- considering all the conversions, i'm surprised the API remained largely the same so that even now the best way to learn Free Vision is to read Turbo Vision docs/tutorials/books :-P.
Free Vision, included with Free Pascal, is basically that. The text mode IDE[0] uses Free Vision.
The main issue is that Free Vision (and Turbo Vision) uses the original "object" types introduced in Turbo Pascal 5.5 instead of the "class" types introduced in Delphi, which make a lot of things easier (e.g. the "class" RTTI allows for enough reflection to implement automatic serialization of objects, but "object" types do not have that, and Free/Turbo Vision require manual serialization with registration of the VMT pointer -accessed via a fixed offset in object pointers- as a means to distinguish between different types at runtime). Free Pascal adds a few of the niceties of "class" types to "object" types (like private/protected/public sections -TP objects are all public- and properties) but Free Vision doesn't use those, as it implements the original Turbo Vision API.
> They should be separated kernel modules in their own tree.
The main issue with this is that by being in a separate tree they would not benefit from the API breakage updates in the kernel. After all, the main benefit that kernel devs have mentioned over the years for keeping drivers in the kernel instead of in separate trees is that the code gets updated whenever internal APIs change.