Unlikely. If you are talking about adding headers or encryption, then yes, the compression might give worse results: that's due to more input data and/or encryption increasing the entropy of the signal. Otherwise, a transparent encoding should not affect any decent compression algorithm, since they all leverage frequency and entropy.
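To make the entropy point concrete, here's a minimal sketch: compressing repetitive plaintext versus the same number of statistically random bytes (a stand-in for ciphertext, since good encryption output is indistinguishable from random). The specific sizes are illustrative, not from the original comment.

```python
import os
import zlib

# Repetitive plaintext has low entropy and compresses very well.
plaintext = b"hello world " * 1000

# os.urandom is used here as a proxy for encrypted data: ciphertext
# from a sound cipher looks statistically random, so it has maximal
# entropy and a general-purpose compressor can't shrink it.
random_like = os.urandom(len(plaintext))

print(len(zlib.compress(plaintext)))    # far smaller than the input
print(len(zlib.compress(random_like)))  # about the input size, or slightly larger
```

This is why the order of operations matters in practice: compress first, then encrypt.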
Yeah this place has been depressing lately. The hope is that AI could be used to automate the parts of our lives that bring us no joy or growth and help us become fully actualised human beings, but instead it seems like it's just used as a tool to boost profits while making the world a worse place.
It's the denigration of any and all intellectual pursuits that gets me. It's the myopic leading the blind, in a race to empty their brains fastest before the singularity can rapture them into mainframe heaven.
Their irl counterparts at the university make me think it must be envy, the same as with AI art: they were never good programmers but have always envied their prestige; and using this new wonderful machine, they can now live out their fantasy at the expense of others. For others it's just nihilism: why not cheat through your entire higher ed if it's now entirely possible?
But many AI-boosters here on HN were once respected programmers, so what else can it be? Fatigue setting in with age, exacerbated by too many levels of indirection in modern software, AI becoming a crutch to avoid noticing you're slowing down?
It can be a useful tool to do all that, and that's what I use it for. Unfortunately a lot of AI boosters have the SF Bay Area "move fast and break things" mentality, and that leads to all sorts of slop being pushed.
I think broadly speaking, we need less software that's higher quality. Commercially at least, LLMs seem to be creating more, lower-quality software. Less software but much higher quality, and then let the gaps be filled in by houseplant programming. Instead we get half-baked vibe-coded Cloudflare-esque slop being promoted, or CEOs of SaaS-slop providers salivating at the chance to fire half of their workforce.
I want to see more houseplants being posted here, LLM generated or not. At least they would tend to have more care and love put into them.
As someone said, "Machines were supposed to rid us of tedious work. Instead they write poetry and create art, and we fill captchas to prove to them that we are human"
I prefer a quote from Dune - "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
Yeah, something like "I want AI to do my laundry and clean my house so I have more time to write and create art. Instead the AI writes and generates art so I have more time to clean my house and do laundry."
Is it just me, or has this article been written, or at least heavily processed, with an LLM? My AI-slop radar triggered immediately (overly verbose, fluffy, bland). Don’t get me wrong, it has valuable information, but that style smells of LLM from a distance.
Same here. More than 5 years with fish, and there have been maybe 5 times when not-POSIX was an “issue”, which I’ve been solving by temporarily entering bash and rerunning the command there.
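As an illustration of the workaround described above (my own example, not from the original comment): POSIX `for`/`do`/`done` syntax isn't valid in fish, but you can hand the command to bash verbatim instead of switching shells interactively.

```shell
# This POSIX loop is a syntax error if typed directly into fish:
#   for i in $(seq 3); do echo "$i"; done
# (fish uses its own "for i in (seq 3); echo $i; end" form instead.)
#
# The quick fix: run the snippet through bash without leaving fish.
bash -c 'for i in $(seq 3); do echo "$i"; done'
```

For one-off commands this is usually faster than translating the snippet into fish syntax.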
How do you know “it has no memory leaks, crashes, ANRs, no performance problems, no network latency bugs or anything” if you built it just yesterday? Isn’t it a bit too early for claims like this? I get that it’s easy to bring ideas to life, but aren’t we being overly optimistic?
Part of the "one day" development time was exhaustively testing it. Since the tool's scope is so small, getting good test coverage was pretty easy. Of course, I'm not guaranteeing through formal verification methods that the code is bug free. I did find bugs, but they were all areas that were poorly specified by me in the requirements.
Here's a copy of my Mastodon post [1] from Oct 2025:
---
I had a job interview yesterday, which happened via Google Meet.
Even though I use my desktop Linux workstation and Firefox 99% of the time for everything, my first instinct was to do this interview on a MacBook and Chrome, to avoid surprises and not look unprofessional if something doesn't work, which has happened in the past. Last year, when I was asked to share the screen during a daily, I had to say "um, I'm sorry, Zoom and desktop sharing don't work on my system."
But I thought I'd first do a test on my workstation, just to see if maybe I shouldn't be concerned anymore. I was sceptical.
The ideal scenario was that on my standard GNOME 48 / Wayland / PipeWire desktop I'd be able to use Firefox for this call, and AirPods, a Logitech webcam, and desktop sharing (5K ultrawide scaled at 125%) would just work with no tweaks whatsoever.
And it did!
I've been using Linux on the desktop for over 20 years (on and off, but mostly on) and I know how to hold my Linux systems, but the situation with Bluetooth audio and desktop sharing in previous years has been... spotty. I was less worried about AirPods — I switched to PipeWire ~3 years ago and so I know Linux audio has been rock-solid and pretty much solved already. But desktop sharing used to be hit-or-miss, highly dependent on whether you used X11 or Wayland, further complicated by the use of Flatpaks.
Since my test went well, I did the interview on the desktop machine. It went smoothly, with no surprises.
Therefore, I announce 2025 as the Year of the Linux desktop :)