Maybe the only solution to GPTisms is infinite context. If I'm talking to my coworker every day, I'd consciously recognize when I'd used a metaphor recently and switch it up. But if my memory got reset every hour, I certainly might tell the same story or use the same metaphor over and over.
> However if my memory got reset every hour, I certainly might tell the same story or use the same metaphor over and over.
All people repeat the same stories and phraseology to some extent, and some people are as bad as or worse than LLM chatbots in their predictability. I wonder if the latter have weak long-term memory on the scale of months to years, even if they remember things well from decades ago.
Honestly, I think there's more to it: even with infinite context, the LLM needs some kind of intelligence to know what is noise and what isn't. Otherwise you resort to "thinking", making it create garbage that it then feeds back to itself.
Learning a language is a big complex task, but it is far from real intelligence.
Lock-in is pretty easy these days. As a simple example, Claude models are trained on their `str_replace_based_edit_tool` edit tool[1], which is very different from OpenAI's `apply_patch` tool[2].
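To make the contrast concrete, here's a rough sketch of the two editing conventions. The field names and patch envelope below are illustrative, based on the public tool docs — check the linked references for the exact schemas:

```python
# Anthropic-style str_replace edit: structured arguments, one
# exact-match replacement per call.
claude_edit = {
    "command": "str_replace",
    "path": "src/app.py",        # hypothetical file
    "old_str": "retries = 3",
    "new_str": "retries = 5",
}

# OpenAI-style apply_patch: a single free-form patch string in a
# custom diff-like envelope.
openai_edit = """*** Begin Patch
*** Update File: src/app.py
-retries = 3
+retries = 5
*** End Patch"""

# A model heavily trained to emit one of these shapes won't reliably
# emit the other, which is where the lock-in comes from: swapping
# providers means the model fights your tool schema.
```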
Also Discord: tons of people use Discord as a social network and keep up with friends there. I must have 5 friend groups that have their own Discords, with some overlap.
So did you disclose this responsibly? Posting about it publicly first is asking for that sensitive data to be leaked. Might as well hack and repost that PII yourself.
This is not a data leak.
They deliberately included 999 of their customers' email addresses in publicly accessible JavaScript code in order to test certain features on them.
Surely they didn't intend to broadcast those addresses to the public? Sounds like a textbook data leak.
> A data leak is the unauthorized, often unintentional exposure of sensitive, confidential, or personal information to an external party, usually resulting from weak infrastructure, human error, or system errors.
Consider medical device software. It's often embedded C code, needs to be rigorously documented and tested, has longer development cycles, and there's certainly no attitude of "bugs are fine, ship it and we'll patch later."
Yes. High-value work where cost (mostly) doesn't matter. For example, if I need to look over a legal doc for possible mistakes (part of a workflow I have), it doesn't matter (in my case) whether it costs $0.01 or $10.00, since it's a somewhat infrequent event. So I'll pay $9.99 more, even if the model is only slightly better.
I'm surprised I never hear people talk about using the -Pro variants, even though their rates ($125-175/M?) aren't drastically higher than the old Opus ($75/M), which people seemed happy to use.
It does... and I've never heard anyone say it that way (and I appreciate that you chose the only dictionary that gave anything close to your argument)... but that's still nothing like "ballot".
Honest answer is that it isn't done running yet. It takes some human bandwidth and time to run, so results weren't ready by this morning. We don't know what the score will be, but it will probably go up on the leaderboard sometime soon. I personally don't put a lot of stock in the ARC-AGI evals, as they're not relevant to most work that people do, but it should still be interesting as a measure of reasoning ability.