Anthropomorphizing is likely a mistake, but Daniel Dennett’s idea that the most straightforward (possibly only practical) way to create the external appearance of consciousness is a real internal consciousness does float around in my thoughts.
I haven’t yet seen any convincing appearance of one in an LLM, but I think if skeptical people don’t keep an eye out for the signs, we may be the last to see it.
He also wrote about the idea of the intentional stance: even if you’re quite sure these systems don’t have real conscious intent, viewing them as if they did may give you access to the best part of your own reasoning to understand them.
I totally agree with your point, and want to mention that the reverse is *also* important. I'll use just "intention" here, but the same applies to emotions, etc.
A lot of our interaction with AI is driven by an intention. The intention directs the interaction, and the output is interpreted according to how well it aligns with that intention.
Then it's important to remember that our current (publicly known) implementations of AI do not have an explicit intention mechanism. An appearance of intention can emerge out of the statistical choices, and the usual alignment creates an association between the behavior and an intention. That's not much different from how we learn to imagine the existence of a "force" that pulls things down well before we learn physics and formalize that intuition in one of several ways.
This appearance helps reduce the cognitive load when interpreting interactions, but it can also be misleading: I've seen people attribute intention to AI output in situations where the mere presence of some information steered the LLM down a path. I can't share the exact examples (they're from work), but imagine that the presence of Italian food in a story leads the LLM to assume the story happens in Italy, even though there are important signs pointing to a different place. The LLM does not automatically explore both possibilities unless asked; it chooses one (Italy in this case) and moves on. A user not familiar with how attention works interprets this in terms of intentions the LLM does not have.
I found it useful to just tell them: the LLM does not have an intention. It just throws dice, but the system is built in a way that makes these dice throws likely to generate useful output.
> but Daniel Dennett’s idea that the most straightforward (possibly only practical) way to create the external appearance of consciousness is a real internal consciousness does float around in my thoughts.
I would say LLMs are very strong evidence against this hypothesis.
I don't really understand the argument for these things being conscious. There's no loop or feedback cycle to it. If it's not handling a request it's inert.
> but Daniel Dennett’s idea that the most straightforward (possibly only practical) way to create the external appearance of consciousness is a real internal consciousness does float around in my thoughts.
Pretty sure Daniel Dennett has been adamantly opposed to any sort of theater in the mind when it comes to consciousness. He views it as biologically functional: for him, to make a conscious robot, you need to reproduce the functionality of humans and animals that are conscious, not just an appearance, such as outputting text. Although he's also suggested that consciousness might be a trick of language. In which case ... though that might be an older view. He used to argue that dreams were "seeming to come to remember" upon awakening, because, again, his view is to reject any sort of homunculus inside the head.
You might be mixing up some of Dennett's and David Chalmers's views. David Chalmers is a proponent of the hard problem, but he's fine with a kind of psycho-physical-functional connection for consciousness: any informationally rich process might be conscious in some manner.
The working programmer might be interested in the series on ropes on the Xi Editor website[1] as a practical application, as it motivates the concept as it goes. (Alternatively, if you’ve taken an algorithms class you have probably encountered the idea of computing things over an interval of an array by storing them for each node of a tree that flattens to that array, such as a search tree or interval tree.)
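For a concrete sense of the idea, here is a minimal rope sketch in Haskell. The names and structure are my own invention for illustration, not the Xi implementation: each internal node caches the length of its left subtree, so concatenation is O(1) and indexing descends the tree instead of scanning the whole string.

```haskell
-- Minimal rope sketch (illustrative; not the Xi editor's implementation).
data Rope = Leaf String
          | Node Int Rope Rope   -- Int caches the length of the left child

ropeLen :: Rope -> Int
ropeLen (Leaf s)     = length s
ropeLen (Node l _ r) = l + ropeLen r

-- Concatenation is O(1): just record the left length, no copying.
cat :: Rope -> Rope -> Rope
cat a b = Node (ropeLen a) a b

-- Indexing without flattening: the cached length tells us which subtree
-- holds position i.
index :: Rope -> Int -> Char
index (Leaf s) i = s !! i
index (Node l a b) i
  | i < l     = index a i
  | otherwise = index b (i - l)
```

For example, `index (cat (Leaf "hello ") (Leaf "world")) 7` descends into the right leaf and returns 'o' without ever building the combined string.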
I was also dissatisfied with existing task tracking apps, and built my own:
t-do.com
There are still many rough edges, but it’s extremely useful. One of the best features that a text file has that very few apps support is unlimited sub-task nesting, and that’s a core feature of T-Do.
I disagree. There are many operators that you’ll never use but if you memorize
(^.), (.~), and (%~), you’re pretty much set for a lot of real-world software development.
Per Kmett’s original talk/video on the subject, I can confirm my brain shifted pretty quickly to look at them like OOP field accessors. And for the three above, the mnemonics are effective:
“^.” is like an upside down “v” for view.
“.~” looks like a backwards “s” for setters.
“%~” has a tilde so it’s a type of setter, and “%” is a circle over a circle, so it’s over.
I’ll also add that in my experience with recent versions of PureScript things get even nicer: visible type application lets you define record accessors on the fly, like:
foo ^. ln@"bar" <<< ln@"baz"
“.” is unfortunately a restricted character and is not the composition operator as in Haskell, but I alias “<<<” as “..”.
The obvious question about the above is: why don’t you just write “foo.bar.baz”? In my case I use a framework that takes passed-in lenses for IoC, but I think “%~” is always nicer and less repetitive than the built-in alternative.
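For anyone who hasn’t seen the three operators side by side, here’s a minimal Haskell sketch using the lens package (the Point type and its fields are made up for illustration):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

-- A made-up record type; makeLenses generates lenses x and y
-- from the underscored fields.
data Point = Point { _x :: Double, _y :: Double } deriving Show

makeLenses ''Point

p :: Point
p = Point 1 2

demo :: (Double, Point, Point)
demo = ( p ^. x          -- view:  read the field      -> 1.0
       , p & y .~ 10     -- set:   replace the field   -> Point 1.0 10.0
       , p & x %~ (+ 1)  -- over:  modify the field    -> Point 2.0 2.0
       )
```

The `&` is just reverse application, which is what makes these read left-to-right like OOP field access: subject first, then the accessor.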
There's actually a bit of a controversy over what the original film actually looked like. There are 35mm scans out there with no colour grading at all (e.g. https://youtube.com/watch?v=Ow1KDYc9XsE). Some people claim it never had the green and it was added later for the DVDs. The sequels definitely had it but turned up to 11. Trouble is nobody can really remember what they saw in the cinema in 1999 and there's a bit of Mandela effect going on thanks to retroactive grading and sequels.
I saw an original 35mm print at Cinespia somewhat recently and it's definitely not the green tint. That was added later.
The original look was a bleach bypass film process which was very colourful with blown highlights. There should be quite a bit of info on this online I would think.
Coolest part of this whole experience is that the Cinespia venue is at the Hollywood Forever in LA - it's a giant old cemetery with a huge lawn and massive screen. During the final scenes when Neo is about to fight Agent Smith in the rain it actually starts raining in real life (obviously rare for Los Angeles). People started leaving but I thought it was pretty damn amazing.
Yep, KV is broken too. Any worker that depends on KV is throwing exceptions. I was able to get into the dash, but it's very slow. Error rates started to go up significantly around 18:00 UTC.
It's not much of a reach to go from "discussion about impact on human-verification dialogs" to "discussion about human-verification dialog policy". This isn't an incident-management channel, it's a discussion forum - tangents are fine!
I complained in the apnews.com thread, because the apnews.com verification, which is annoying by itself, did not work at all this time. That is hardly unrelated.
I really appreciate the integrated fingerprint reader in these cases. I usually run with my laptop screen closed (with external monitor) but open it specifically to authenticate in system dialogs.
Apple also sells a wireless keyboard with touchid integrated. Works great. Especially if you also set pam to use touchid for sudo. They’re not cheap though.
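For reference, the sudo-via-Touch-ID setup is a one-line PAM change. On macOS Sonoma and later the supported place is a separate override file, /etc/pam.d/sudo_local (on earlier versions you had to edit /etc/pam.d/sudo itself, which OS updates overwrite):

```
# /etc/pam.d/sudo_local
auth       sufficient     pam_tid.so
```

With that line first, sudo offers a Touch ID prompt and falls back to the password if it fails.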