The internet is really reaching a point where it's becoming unbearable.
People who don't write their own comments often want to farm engagement or just wanna sound smart. Either way, the thirst is disgusting to me.
Me too. I have a bad feeling autocomplete will be sunsetted sooner or later; it clearly isn't the path they're heading down. It's also started to get worse lately: it tries too hard to predict. It wasn't like that a while ago, hopefully you know what I'm saying.
So funny, I remember their talk about re-imagining their editor for the future of agents. They ended up copying the Codex GUI lol.
These AI companies are running out of ideas, and are desperate.
I can't imagine investing in companies that are 3 months behind open source alternatives, with a target audience that's the most experimental kind there is.
I feel that too, every technology has its limits.
I use AI daily. But I can’t see the “intelligence“.
All I see is fine tuning and bigger datasets.
Yesterday I asked Claude to fix the color issues of a graph. It failed miserably.
Opus 4.6 wasn't able to figure out why the text was grey. It made something up instead of realizing the problem was simple: an oklch value wrapped inside an hsl color, hsl(oklch(…)).
I easily figured this out just by looking at the CSS and adding some logs to the JS.
This is not intelligence. This is a tool that’s smart. Not sentient. AGI won’t be achieved by scaling alone.
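For illustration, here's roughly what that kind of bug looks like (the selector and color values are hypothetical, not taken from the actual project):

```css
/* Broken: nesting one color function inside another like this is
   not valid CSS, so the whole declaration is dropped and the text
   falls back to whatever (grey) color it inherits. */
.chart-label {
  color: hsl(oklch(0.45 0.2 260)); /* invalid: hsl() expects hue, saturation, lightness */
}

/* Fixed: use the inner oklch() value directly. */
.chart-label {
  color: oklch(0.45 0.2 260);
}
```

Because the invalid declaration is silently ignored rather than throwing an error, it's easy to miss unless you actually read the computed styles.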
and Zuck isn't sued for downloading either; he is sued because the AI's reproductions aren't derivative enough, but so far all branches of government support that
FB and the like are CIA fronts, and they can do anything they please. That lasts until they go up against Disney and the lobbying giants: if some CIA idiot tries to sue/bribe/blackmail them, they can order Hollywood to tear their image to pieces over all the wars they promoted in the Middle East and Latin America just to fill CEOs' wallets. Add some social-critique movie about FB grabbing illegal user data all over the world to deny insurance and whatnot. And of course a clear mention of the Epstein case and the people involved, just in case the Americans forgot about it.
Then the US industrial and military complex would collapse in months, with brainwashed kids running away from the army. Not to mention the Call of Duty franchise and the like. It would be the end of Boeing and several more, of course. To hell with profit-driven wars for nothing.
Ah, yes, AIPAC lobbies and the like. Good luck taming right-wing wackos who hate the MAGA cult even more than the 'woke' people do. They will be the first ones against you after sinking the US image for decades, even more than the illegal Iraq war with no WMDs and the Bush/Cheney mafia did.
The outcome of this? Proper, serious engineering a la Airbus, with the profit-driven MBA and war sickos instantly kicked out of the spot. The AI snake-oil sellers too, except for classical AI/NN applied to concrete cases (image detection and the like); those will survive fine, even thrive, because those jobs are highly specific and the models are not statistical text parrots. They can deliver guaranteed results, unlike LLMs, which are prone to degrade because the feed of human-made content has to be continuous, whereas for tumour detection a big enough sample can cover 99% of the cases.
R&D on electric vehicles/energy and nuclear power like nowhere else. And, for sure, the EV equivalent of the Ford Model T for Americans: a cheap, reliable one, good enough for the common Joe/Mary without being a luxury item. A new Golden Age would rise, for sure.
But the oil mafia will try to fight them like crazy.
Do you mean something specific? Because that sounds like a criticism with some blanks that need to be filled in.
If you just mean they come across as annoyed by AI, that's true, but that's also far too broad a category to infer basically anything else about them.
You still need to give it precise context and instructions when dealing with things that are not web apps or some other software cliche.
The reasoning is great in opus, unbeatable at the moment.
I understand what you mean, it becomes disappointing on more niche or specific work.
It’s honestly a good thing to see these models are not really intelligent yet.
I still don't trust any AI enough to generate or edit code, except for some throwaway experiments, because every time I tried it's been inefficient or too verbose or just plain wrong.
I use it for reviewing existing code, specifically for a components-based framework for Godot/GDScript at [0]. You can look at the AGENTS.md and see that it's a relatively simple project: just for 2D games, and fairly modular, so the AI can look at each file/class individually and only has to cross-reference maybe 1-3 dependencies/dependents at most in a single pass.
I've been using Codex, and it's helped me catch a lot of bugs I would have taken a long time to even notice on my own. Most of my productivity and the commits from the past couple months are thanks to that.
Claude on the other hand, oh man… It just wastes my time. It's had way more gaffes than Codex, on the exact same code and prompts.
I had a similar experience, and the answer appears to be learning how to use a specific model for a specific task with a specific harness (model × task × harness). Another, somewhat related, lesson learned is understanding how to work with a given model and not against it.
I still get really mad at AI sometimes and I am not sure whether I could use AI for coding full time.
But all these new protocols want to do stuff at the expense of trustlessness.