You think it's OK to waste police time with slop diagrams?
As for suppliers: I don't think it's OK to waste their time with a no-human-in-the-loop AI order which the AI later tells them to "emergency" change or cancel.
Small business owners do not have professionals dealing with bureaucracy for them. I'm pretty sure a lot of plans submitted to the police need a few back-and-forth turns because they didn't follow the submission rules.
And soon most suppliers will have no human in the loop either. Suppliers deal with unreasonable people all the time; I bet they don't even flinch when they see EMERGENCY in the subject line.
This is the "Waymo ran over a cat" thing - humans do these things 100 times more.
> This is the "Waymo ran over a cat" thing - humans do these things 100 times more.
Humans do not do this 100 times more, or even at the same rate.
If you behave this sloppily in the real world you will find yourself getting billed heavily for change order requests, or being dismissed as a client, or having your applications dropped or back-burnered for wasting people's time.
Indeed, the fundamental mismatch in that equivalence is that in the first scenario a considerable time-and-effort investment on both sides was required to facilitate the interaction. In the latter, only one side has to put in that time and effort.
What a weird take. The government permitting staff and police are not private industry suppliers who can drop bad clients or bill them for wasted time.
You can spam a private vendor all you want with EMERGENCY change orders, but expect your bill to grow for the privilege. I know some contractors who would love this client because EMERGENCY change orders are expensive under their rate schedule.
There is more compute available than memory bandwidth when running LLM inference.
It's like branch prediction - the CPU predicts which branch you'll take and starts executing it. Later you find out exactly which branch you took. If the prediction was correct, the speculatively executed code is kept. If the prediction was wrong, it's thrown away, the pipeline is flushed, and execution resumes from the branch point.
The same with this thing: 3 tokens, A-B-C, were "predicted", and you start computing all 3 of them at the same time, hoping the prediction checks out. Because of the mathematical structure of the transformer, it costs almost the same to compute 3 tokens at a time as just one - you are limited by memory bandwidth, not compute. But CRITICALLY, each token depends on all the previous ones, so if one of the tokens was predicted wrongly, you need to discard every token predicted after it (flush the pipeline). This is also why a prediction is required and why you can't simply compute 3 tokens simultaneously - the serial dependency between consecutive tokens. To compute token C you need to assume exact values for tokens A and B, but those have not been computed yet! If they were speculatively predicted, though, you can start and hope the prediction was correct.
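Roughly, the accept/reject loop looks like this (a hand-wavy Python sketch; `draft_model` and `target_model` are placeholder callables I made up, and real speculative decoding samples and corrects probabilities rather than doing exact greedy matching):

```python
def speculative_step(target_model, draft_model, prompt_tokens, k=3):
    # 1. A small draft model "predicts" k tokens serially (cheap).
    draft = []
    for _ in range(k):
        draft.append(draft_model(prompt_tokens + draft))

    # 2. The big target model scores all k positions in ONE batched pass.
    #    This costs roughly the same wall-clock time as scoring a single
    #    token, because decoding is bandwidth-bound, not compute-bound.
    target_preds = target_model(prompt_tokens, draft)  # one token per position

    # 3. Accept draft tokens only while they match the target model; the
    #    first mismatch invalidates everything after it (the "pipeline
    #    flush"), because later tokens were computed assuming a wrong prefix.
    accepted = []
    for d, t in zip(draft, target_preds):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)   # keep the target's correction...
            break                # ...and discard the rest of the draft
    return prompt_tokens + accepted
```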
This is probably why MCP "code mode" (generating code once to call the MCP going forward) hasn't caught on yet... no need until the financial costs reflect reality.
Yeah? It's also a belief that apples fall when you drop them. Knowledge is simply a justified, true belief. This is epistemology 101. You're not saying anything interesting.
Why is nobody building a paid-for browser with a built-in search engine and LLM assistant? It should probably be open source for transparency. And before anyone says you'd build/compile it yourself if it were open source: those people are already running their self-compiled tools and are not the target market.
(I miss Arc, such a shame it only gets security/chromium updates now ...)
And I think Codex's desktop client has a built-in browser now? At least I've seen someone using something like that. Never mind that Atlas is a thing now too. https://openai.com/index/introducing-chatgpt-atlas/
I don't think anybody implied or thought of Android in this subthread, just because it nominally runs the Linux kernel. I for sure understood it as just "Linux = whole OS-level distros of the laptop/PC bound type" not "anything with a Linux kernel even if it's a proprietary mobile phone OS".
I definitely feel less like a product on macOS than on anything Google-oriented. I don't know where Windows fits into that exactly, given that "paying for Windows" is not really how it's even seen, since major updates were 'free' (with extra ads).
Windows already has a secure kernel credential store; they could move the Edge password store there with a bit of effort and minimize the splash damage when a single password is retrieved to send over HTTP from regular user space.
> Credential Guard prevents credential theft attacks by protecting NTLM password hashes, Kerberos Ticket Granting Tickets (TGTs), and credentials stored by applications as domain credentials.
> Credential Guard uses Virtualization-based security (VBS) to isolate secrets so that only privileged system software can access them.
This only works if credential guard has implemented a way to build a subsequent token/value from that secret. For things like basic auth the secret would need to eventually hit the userland process that needs it in some shape or form to then embed it in the HTTP payload which is plaintext.
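For example, with Basic auth the Authorization header is literally base64 of the plaintext credential, so whatever process builds the request has to hold the secret in its own memory, regardless of how well the OS-level store isolates it at rest (rough Python sketch, standard library only):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # RFC 7617: "Basic" + base64("user:password") - the plaintext credential
    # must exist in this process to construct the header at all.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# e.g. requests.get(url, headers=basic_auth_header("user", "hunter2"))
```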