
Not really? I've worked with people who were super productive with high quality work, and my reaction was to... gravitate toward working more with them. Some people are status driven. Some are not. Some are apparently pathologically status driven such that they'll compete with a literal child.

In any case refusing to nurture such a child (even in effectively passive ways like letting them quietly do something more advanced with no specific instruction) and not being reprimanded for it would reveal that the actual purpose of their position is daycare worker, which should be a bigger strike to the ego.


That's what all people say. No one who is status driven will admit, or even realize, that they are status driven. But the fact of the matter is that it is human nature to be status driven. Everyone recognizes status symbols and possesses such a drive within them. It is also clinically tied to serotonin levels and observed in cross-species behavior. To say you have no drive for status is either a lie or a delusion. The evidence is thoroughly established in science.

Now, that being said, the drive can be suppressed. But suppressing the drive doesn't mean it doesn't exist or that you don't feel it. It is also true that people feel the drive at different levels of intensity.

Anecdotally, your response indicates to me that you have not suppressed status-seeking drives completely. The key hint is that you refer to how you're drawn to people who do high quality work. That is orthogonal to status seeking. Your status and identity are tied to a certain type of work that you do and take pride in. Have you worked with anyone who was so powerful that their work invalidated, crushed, and basically humiliated anything you did? And let's say this person is not malicious; he's just so much better than you that your work and identity are inconsequential and eclipsed by his.

If you said that you wouldn’t feel anything in the face of that then I would say that you truly do not seek status. I would also say you’re not human.

That being said, a teacher holds his identity as someone who is better than children. He needs to be better than children in order to transfer his betterment (aka knowledge) to them. His role in society and his identity rest on that foundation. If children are better than him and know more than him, then that is inadvertently an attack on his identity. His reaction is natural and expected. It's not that he has anything against the child; it's a self-protection mechanism, defending his identity by deluding himself. Very typical.

You see much of the same stuff with LLMs and programmers. A huge portion of HN was in denial for the longest time about the capabilities of LLMs, calling these things stochastic parrots and thinking it's impossible for the AI to take over. HN was just completely wrong about that, and they were also wrong about driverless cars. The reason they were so wrong is not that they were making a logical and rational prediction... no, they were choosing the prediction that most aligns with protecting their identity and skill set as programmers, which is in the process of being replaced by agentic AI.


Again, I think you're entirely off base here. Maybe you are status driven enough that you can't wrap your head around someone who isn't, but I'm really just not interested in it. I want to give my family a comfortable life and spend time with them. That's it.

To color that a little, I've literally told the last 4 managers I've had very explicitly that I'm not at all interested in career advancement. When I was asked to lead my current team, I said "I've done it in the past and can if you want, but check with A and B first to see if they want to". I literally do not care about it. Work is a means to provide, and it does well enough that I don't need to chase it anymore. Actually the marginal pay for the increased responsibility kind of doesn't make it worth it, but like I said I'll do it if they need that. And so my focus is generally thinking about "how do I get one of my team members in a place where they can replace me?"

If we're talking about who's more human, I'd put forward that caring about who's best seems less humanizing than seeking to spend time with people you care about, remembering how lucky you are to have that time, and ignoring outside noise.

Especially when it comes to teaching, if your identity is "better than child" instead of "person who helps children reach their potential" I'm not sure what to say. Sounds like a narcissist.

On LLMs, I found them to be useless but interesting right up until December, at which point I started a hard push for my team to adopt it (and get excited about it). I'm very explicit that my mental framing with them is "how do I get it to do my job". I'm well aware that "programmer" per se is not going to be a job in the future. That much seemed obvious as far back as the original chatgpt release. That's fine, and just means we have to ask ourselves what else needs doing. If we ever get to the point where the answer is "nothing" then I guess we're all doing pretty well.


Ironically builtin smartphone calculators are really bad, and one of the best ones you can download might be Graph 89 (a TI-89 emulator).

Rant/Aside: Smartphones (or at least Android) are just generally really bad at being... smart, especially out of the box. No dictionary? No thesaurus? To say nothing of built-in encyclopedia (e.g. Wikipedia). Calculator worse than the $1 scientific ones? It's astounding how obvious it is that they're meant to dumb people down and just sell you crap when you look at the complete absence of basic functionality anyone from 50+ years ago might expect them to have.


My linear algebra class used F_2 as our field probably half the time that it was specified. Realistically almost any course probably doesn't need calculators at all (or they could at least be kept for homework). If you're not teaching arithmetic, you keep the arithmetic simple. If you're not teaching algebra, you keep the algebra simple. etc.
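For the curious, the entire arithmetic of F_2 fits on one line, which is exactly why it needs no calculator:

    % F_2 = {0, 1}; addition is XOR, multiplication is AND
    0 + 0 = 0,\quad 1 + 0 = 1,\quad 1 + 1 = 0,\qquad 0 \cdot 0 = 0,\quad 1 \cdot 0 = 0,\quad 1 \cdot 1 = 1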

Your comment's framing makes no sense to me. My wife pushed for me to go into engineering instead of academia so she could stay home and we could be comfortable. We're married. We have kids. The entire point is we're not independent. That's what married literally means. Unioned. Joined. There is no her and me. There is us.

Why would you need or even want to be independent? Why would you plan to form a family while keeping your options open/having one foot out the door?


Plenty of women (and men) end up in relationships they hate, and if they have no independence they are pretty much fucked. They have no way to escape. Women having options makes a huge difference.

What you are describing is pretty much ideal for a lot of people, but it's not what everybody gets.


How does this happen though? It's not like you wake up one day, look around and see you've started life in the middle, you're married and have kids, and you hate your spouse. Did your spouse have a stroke and undergo some massive personality change or something?

Assuming you want a family, your very top priority when evaluating someone for dating from the very beginning should be whether that person would make a good spouse and help you to form that family. Otherwise what are you even doing? Someone who can't commit is its own red flag for that purpose. If you have kids, that's it. You're in it. You need to be committed.

And having a job doesn't mean you're independent of your spouse anyway. If one of us died or we split, it'd be absolutely devastating to our family regardless of the money (e.g. if life insurance/social security covered everything). I would be hugely screwed trying to raise the kids without her, job notwithstanding.


I think the simple fact of the matter is that most people have absolutely no clue what they’re doing when it comes to relationships, and think their social media hot takes are indicative of what they ought to want.

This is on top of societal pressures. In more liberal parts of the US (and the world) it's accepted that you will take your time finding a partner, or even stay single if you want. In more conservative societies the expectation that you will marry young and start popping out kids is intense.

I think it goes both ways. I moved from a liberal to a conservative area. Maybe there are people shaming those who don't pop out kids, but what I've noticed more is that people aren't shamed if they want to just let loose, get pregnant as an 18-year-old, and yield to their natural desires and interests. In a liberal city, an 18-year-old popping out a kid is often viewed as a pariah.

I mean, people do not naturally grow up wanting to stare at a desk/PC all day, deciding to become a scientist or a doctor and study a bunch of shit that has almost no relation to what humans were adapted for over millions of years. Our evolutionary programming was to bang, have kids, roam the jungle, grab resources, and get through our short, brutish lives.

Now, the fact that something is evolutionarily natural or historically normal doesn't mean it is good or right. But just letting loose on that particular natural instinct tends to be more accepted in conservative societies, while in the city or in liberal areas, teenage (past age of consent) pregnancy is seen more as something you will be shamed for. You're supposed to do a pretty unnatural thing: stare at books until you're 22 or 26 and then stare at a computer screen so you can get a good job to pay a gazillion dollars for childcare delivered by minimum wage workers. You're supposed to take your time, and by about the time your biological clock has run out, you pay $20,000 for IVF and do a speedrun.

So which is a greater imposition of societal pressure? I won't claim conservative societies don't exhibit more social pressure than liberal ones. But on this point, it's not clear to me the conservative one is doing the greater of the pressuring.


> Why would you need or even want to be independent?

Because I would want my kids to be able to get out of an abusive partnership if they needed to. See the history of domestic abuse.

> Why would you plan to form a family while keeping your options open/having one foot out the door?

Everyone should have options open for basic sustenance. Death, abuse, job loss, etc. As they say in engineering, two is one and one is none.


One of my life goals is to prepare my kids to troll their math teachers with the dual numbers and the claim that .999... is obviously 1-ε. The goal is to convince the teacher that .999...≠1. Bonus points if they instead convince the teacher to doubt that complex numbers exist.

That would be both fun and correct.

It really comes down to what semantics we attach to "=" when one of the sides is an infinite series. The "equals" sign that we had used prior to that mental exercise was for finitely many terms only; we had not had to deal with infinitely many terms before that leap in thought. So now we have to extend the notion in a way that is backward compatible.

A convenient one: the series is equal to its limit, if that limit exists.
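Concretely, under that convention the classroom claim resolves as:

    % 0.999... read as the limit of its partial sums
    0.999\ldots \;=\; \lim_{N \to \infty} \sum_{n=1}^{N} \frac{9}{10^n}
                \;=\; \lim_{N \to \infty} \left(1 - 10^{-N}\right) \;=\; 1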


> semantics we attach to "=" when one of the sides is an infinite series

I would say that the semantics are about what an infinite series itself is, not about the equals sign. Once we have the common analytic notion of convergence of an infinite series, the equality makes sense. The issue is that an infinite series is not an actual sum; formally, it is a sequence (of the partial sums). As you say, we represent the limit of the sequence of partial sums with the same notation whenever the series converges, but that's basically because we use the same notation for two different things (the sequence of the partial sums, and the limit of that sequence). If we know we refer to the limit, I don't think there is any semantic complication with the equals sign.


Maybe you're just used to your flavors of jank so you don't see it? Your goal is to get off windows, but I've had only Linux on my home computers for ~10 years and it's been working great the whole time. Literally nothing I can think of to complain about.

It's been a few years since I had to use OSX for work, but last I used it, you couldn't maximize windows without a 1+ second animation playing when you cmd+tabbed, which made maximizing completely useless. Docker was also super slow. There's no package manager and the usual recommendation (brew) for a third party one is trash that will update programs you didn't ask it to when you're installing something else. IIRC external monitors are completely unusable from blurry text.

I used a windows laptop recently for a year or so for work. Absolute jank. Sleep was just broken. Like wouldn't sleep/spin down the fan with the lid closed unless I unplugged it. Often completely frozen requiring hard reboots when opening the lid. Leaving it "sleeping" for an extended period would still heavily drain the battery. WSL barely works. For some reason I have to care whether things are in my Windows or Linux home directory. Wrong one and git commands take seconds. I'd get environment mismatches where the terminal in VSCode would fail to run commands that run in a normal CLI, etc. DNS would break inside WSL because it wouldn't propagate config from DHCP. UI is just slow to respond to anything. If you start typing in the start menu search (e.g. "shut down" or "power off"), the menu replaces itself with a different one, and you can't find the power options until you close and reopen the menu.


>Maybe you're just used to your flavors of jank so you don't see it?

That's a throwaway line, everyone is used to their own flavors of jank, even on Linux.

>Your goal is to get off windows, but I've had only Linux on my home computers for ~10 years and it's been working great the whole time. Literally nothing I can think of to complain about.

I think you're trying to read too much into a comment and trying to poke holes...I don't have any Windows, if that wasn't clear. Since we're flexing about experience...I've been doing this since RH 7.2 came in the back of a book 20+ years ago and deploying production Linux services for about the same at a large scale but whatever.

Everything has its flavor of jank, and for most people, Linux is a flavor of jank just barely too far over the horizon still. But, once again, it's far better than it ever was 20 years ago, and it has the potential to pass Windows here soon. One of the biggest hurdles for adoption, though, is, well, the community, 90% of which think they're one step away from being Linus simply because they installed Arch following a tutorial; they treat new users accordingly for no good reason while telling those same new users "it's Easy!"

You must be one of the luckier Linux users, I guess. I have heard of them, but I've had plenty of convos where, once you actually dig into things, it's usually not as truthful as presented, and more about playing to the crowd on an internet forum _about Linux_ for confirmation.

>It's been a few years since I had to use OSX for work, but last I used it, you couldn't maximize windows without a 1+ second animation playing when you cmd+tabbed

I use it every day for work, for heavy eng work. Let's be honest, yes there's an animation delay to some degree, but this is trafficking a bit in hyperbole here. GNOME has basically the same behavior for many aspects including switching workspaces by default...which can be turned off in both. The Cinnamon or KDE default experience is better in this regard.

>Docker was also super slow

Only issue I've had with Docker on a Mac with speed is when I'm trying to use some hefty x64 images on ARM macOS (I still have a last model i7 MBA for fun too), which is expected, same with VMs. I've run some pretty gnarly full stack apps, some that included Java backends that needed up to 8gb because reasons, without issue as long as I built an ARM image.

>There's no package manager and the usual recommendation (brew) for a third party one is trash that will update programs you didn't ask it to when you're installing something else.

It behaves roughly the same on macOS as it does on Linux, IME. If I'm not explicit on dnf/apt, I get more updates than just what I wanted too. But maybe I'm missing something. It's how I manage all my tooling on the work env and gives me very few issues save usually for only the occasional connection issue which is always attributed to work VPN nonsense.

>IIRC external monitors are completely unusable from blurry text.

Even on a Mac? The ecosystem is designed for professional graphics use, never had an issue there even back to CRT days heavily using all the Adobe suite versions, and even with non-Apple displays. Every Linux setup I've ever used, including this one is janky with external monitors, let alone dual. Even the "Easiest distro in the world" (Mint) according to most Linux nerds is problematic to say the least in trying to use the screen res/layout settings.

>I used a windows laptop recently for a year or so for work. Absolute jank. Sleep was just broken. Like wouldn't sleep/spin down the fan with the lid closed unless I unplugged it. Often completely frozen requiring hard reboots when opening the lid.

A - agreed, I don't work anywhere which requires Windows, because for all my devtooling, it's all tied into a macOS ecosystem, yes, with homebrew for now. Been that way for almost a decade now. Ideally, one should also do a lot in a build container for 1:1 matching so your CI jobs run the same env/toolset/versions. It's better for real dev work and way more stable in a way that won't require you to become a support headache for the company either.

B - what you are describing is a hardware issue that you're attributing to Windows. I had the same issue on a B550 series desktop mobo, went to Linux, same exact behavior. This is not an OS issue.

>Leaving it "sleeping" for an extended period would still heavily drain the battery.

To my mind, non-mac laptops are garbage for battery life, everyone knows this, and yeah if it wasn't sleeping for real it's gonna eat up resources. This is more a hardware issue than anything, not the OS layer. Put Linux on it and I could almost guarantee you would have had similar issues, I've dealt with this like I said w/ the mobo above.

>WSL barely works. For some reason I have to care whether things are in my Windows or Linux home directory. Wrong one and git commands take seconds. I'd get environment mismatches where the terminal in VSCode would fail to run commands that run in a normal CLI, etc. DNS would break inside WSL because it wouldn't propagate config from DHCP. UI is just slow to respond to anything. If you start typing in the start menu search (e.g. "shut down" or "power off"), the menu replaces itself with a different one, and you can't find the power options until you close and reopen the menu.

Man, I have to wonder....was this not using latest/WSL2 and instead using WSL1? Because there _is_ a massive leap between the two. It's not ideal compared to native on Linux or even mac but still works quite well for many use cases. When the WSL2 upgrade came back when I was forced in a past env to use a Windows laptop, myself and 4 other Devs could run our full stack including Kafka locally without much issue on WSL2 other than producing heat on the laptop b/c of how many services we were running. (About 35 .NET Core microservices at the time, along with redis, Kafka, etc.). Yes, the home pathing was a tad annoying.

>If you start typing in the start menu search (e.g. "shut down" or "power off"), the menu replaces itself with a different one, and you can't find the power options until you close and reopen the menu.

Yeah, every OS seems to have issues with its search/launcher tooling, but the Start Menu has been shit for a while now. I've had more issues on Windows than anything else re: menu defaults (once tweaked, on like W10 it's fine), then Linux, and then even macOS...before paring down Spotlight to only search certain things, which made it way better.

shrugs

I think this is one of the challenges of building good software, it's why Apple does what they do. Some experiences on one hardware set are somehow perfect, but they're rare, some are the exact opposite. But a lot comes down to what a user is willing to tolerate, too, and while someone might say it was "Easier on Linux" it's usually just that they're willing to tolerate more terminal madness and odd behaviors than others in their daily driver.


Suppose we still need humans to be writing code and caring about this stuff for the foreseeable future, so we need people to continue learning about the ways things can go wrong. For something like injection, you still ideally have a lint rule that says "don't concatenate things that look like SQL/HTML/etc. Use the correct macros for string interpolation". What does it actually teach for a reviewer to tell you that? You can ask the reviewer for more information, but you can ask your teammate anyway if you don't understand why the linter is mad. You can also ask the robot, who will patiently explain it to you even long after all of the knowledgeable humans have retired or died. The robot could even link to a prompt asking to explain it:

https://chatgpt.com/share/69f10515-8808-83ea-abe3-a758d3144c...

If people aren't learning more with AI, that's a meta skill they need to develop.

As for training the review muscles, why would you do that if you have a linter that rejects when you make the mistake? I don't expect reviewers to check whether you eschew nulls or uninitialized variables; I expect the compiler to do that, and I expect over time that more and more things will become tooling concerns (especially given that rigid tools with appropriate feedback are clearly a massive force multiplier for LLMs).
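To make the lint rule concrete, here's a minimal sketch in Scala (plain JDBC; the table and column names are made up for illustration) of what the rule rejects versus roughly what the safe interpolation macros amount to under the hood:

    import java.sql.Connection

    // Flagged by the lint rule: user input spliced directly into the SQL text.
    def findUserUnsafe(conn: Connection, name: String) = {
      val stmt = conn.createStatement()
      stmt.executeQuery("SELECT id, name FROM users WHERE name = '" + name + "'")
    }

    // Roughly what the safe macros expand to: a fixed query shape plus bound parameters.
    def findUserSafe(conn: Connection, name: String) = {
      val stmt = conn.prepareStatement("SELECT id, name FROM users WHERE name = ?")
      stmt.setString(1, name) // the driver treats this purely as data, never as SQL syntax
      stmt.executeQuery()
    }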


Two issues here. First, teams that decide to delegate security responsibilities to AI are more likely to do things fast and loose, in general, and thus be less likely to "ask the robot to patiently explain" problems until they understand the problems' root causes and update their mental models to prevent those problems.

Second, to use your example, the ChatGPT response you provided does a crappy job of explaining the root cause of the problem: namely, that every string is drawn from some underlying language that gives the string its meaning, and therefore when strings of different languages are combined, the result can cause a string drawn from one language to be interpreted as if it were drawn from another and, consequently, be given an unintended meaning.

So, if the idea is that smart teams can not only delegate the catching of problems but also the explanation of those problems to ChatGPT -- presumably because it is a better teacher than the senior engineers who actually understand the salient concepts -- I'd say AI ain't there yet.


> teams that decide to delegate security responsibilities to AI are more likely to do things fast and loose

Is that true? Is that also true of e.g. teams using type checkers to avoid nulls or exceptions? Or teams that use memory safe languages to avoid memory corruption? Or using a library that has an `unsafeStringToSql` API surface, and a linter to flag its use (where you're expected to use safe macros instead)? My experience is that better tools (or languages and library designs) scanning for issues lead to fewer defects and less playing fast and loose since the entire point of the tools is to ban these mistakes.

On education, it literally tells you that the top concern is SQL injection made possible by concatenating strings, and gives an example of an auth bypass: `name = "foo' OR 1=1 --"`. It also notes that this is not just a minor nitpick, but that actually the solution is fundamentally doing something completely different (query objects with bound parameters). If you don't understand what it means you can just ask:

> Elaborate on 1

> Walk through examples of what goes wrong and why, and how the solution avoids it

etc. The knowledge is all there; you just need to ask for it. It's an infinitely patient teacher with infinite available attention to give to you. You can keep asking follow-ups, ask it to check your understanding, etc. Or there are tons of materials about it on the web or in textbooks, and if you still don't understand, you can still ask a more senior engineer to explain what's wrong.
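To spell out what that example value does to a naively concatenated query (illustrative values only):

    val name  = "foo' OR 1=1 --"
    val query = "SELECT * FROM users WHERE name = '" + name + "'"
    // query is now: SELECT * FROM users WHERE name = 'foo' OR 1=1 --'
    // OR 1=1 makes the WHERE clause always true, and -- comments out the dangling
    // quote, so the query matches every row instead of just "foo".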


> Is that true [that teams that decide to delegate security responsibilities to AI are more likely to do things fast and loose in general]?

Yes. See: vibe coding. See also: The shockingly widespread hype for and acceptance of vibe coding across industries that ought to know better.

Do you deny that there is a correlation between AI use and not knowing what you are doing? Isn’t one of the big selling points of AI is that it lets “regular people” create “real world” projects that they could only dream about previously?

I am not saying that serious engineers don’t use AI or that when they use it, they do so foolishly. I’m only pointing out that AI has let a lot of people who don’t know what they’re doing crank out code without understanding how it works (or doesn’t).

> Is that also true of e.g. teams using type checkers to avoid nulls or exceptions? Or teams that use memory safe languages to avoid memory corruption?

No, it is not true of those teams. When people choose to use languages with statically checked types or with memory safety or the other examples you offered, they are rarely doing it because they have no idea how to write sound code. But when people turn to AI to crank out code they couldn’t write themselves (see: vibe coding), that’s what they are doing.

> On education, [ChatGPT] literally tells you that the top concern is SQL injection from essentially concatenating strings, and gives an example of an auth bypass: `name = "foo' OR 1=1 --"`. If you don't understand what that means you can just ask...

Again, that’s a crappy explanation of the real problem. It promotes no understanding of the underlying issue—that strings are drawn from languages that give them their meanings. And, unless you understand that it’s a crappy explanation that ignores the underlying issue—which a person being gaslit by the crappy explanation would not—what stimulus is going to provoke you to ask for a better explanation? How are you going to know that the crappy explanation is crappy and tell ChatGPT to take another direction?

> The knowledge is all there; you just need to ask for it. It's an infinitely patient teacher with infinite available attention to give to you.

Yeah, and if it steers you down a crappy path, such as in your sql-injection session with ChatGPT, it will be infinitely happy to keep leading you down that crappy path. Unless you know that it’s leading you down a crappy path, you won’t be able to tell it to stop and take another path. But if you are relying on the AI to tell you what’s good and what’s crappy, you won’t be able to tell which is which. You’ll be stuck on whatever path the AI first presents to you.

> Or there are tons of materials about it on the web or in textbooks, and if you still don't understand, you can still ask a more senior engineer to explain what's wrong.

And that’s equivalent to “don’t ask the AI, use a traditional resource,” right?


I'm not following the scenario here. The original discussion was around teams using these tools, not vibe coders chasing their dreams.

If you're a "regular person" vibe coder, you're not doing code reviews with a team anyway. You presumably had no teacher and no one to tell you your mistakes. So having a security bot is strictly an improvement.

If you're on a professional team, then you're presumably in the non-foolish group that already has standards and is using it as a tool like any of the other quality tools they use. And if they don't have standards and don't know this stuff already, well, the bot is again an improvement. At least it raises the issue for someone to ask what it means.

If you're a professional, I also assume you've heard of SQL injection (does it never come up in a CS degree?), so you don't really need more than a "this method is exposed to SQL injection" explanation. It's like saying "tail recursion is preferred because it compiles to a loop, so it's not prone to stack overflow". It assumes it doesn't need to elaborate further, but if you don't understand a term, you can just ask. Or look it up.

And yeah books or Wikipedia still exist even if you use an automated linter. You can go read about these things if you don't know them. I frequently tell my team members to go read about things. Actually I ended up in a digression about CSRF the other day (we work on low level networking, so generally not relevant), and I suggested the person I was talking to could go read about it if they're interested so as not to make them listen to me ramble.

Also I'm still unclear on why you think the explanation is crappy. It says the problem is you're making a query via simple string substitution, shows how you can abuse quotes if you do that (so concretely illustrates the problem), and says the reason the better solution is better is that it makes a structural object where you have a query with placeholders followed separately by parameters (so you can't misinterpret the query shape), which seems better than "strings are drawn from languages that give them their meanings"?


The root of this subthread was this claim that I made and you questioned:

> Teams that decide to delegate security responsibilities to AI are more likely to do things fast and loose in general.

Note the word delegate. I claimed that teams that delegate security responsibilities to AI are more likely to play fast and loose in general. That’s because delegating security to AI—not supplementing existing security practices with AI—is likely to be a good way to launch insecure garbage into the world. AI simply isn’t good enough to get security right on its own. Maybe someday it will be good enough, but like I wrote earlier, it ain’t there yet. And any team that plays fast and loose with security is likely to play fast and loose in general.

See any problems with that logic?

I only used vibe coding as an obvious example that shows there are lots of teams that delegate security responsibilities to AI. (Vibe coders are delegating almost everything to AI.)

> If you're a "regular person" vibe coder, you're not doing code reviews with a team anyway. You presumably had no teacher and no one to tell you your mistakes. So having a security bot is strictly an improvement.

How is it strictly an improvement? Before vibe coding, “regular people” couldn't launch insecure garbage upon an unsuspecting world—they couldn't launch anything. Do you believe that it’s "strictly better" that now everyone can launch insecure garbage courtesy of their AI minions? Do you think it’s “strictly better” that lots of users are having their data sucked into insecure apps and web sites that are destined to be hacked?

> Also I'm still unclear on why you think the explanation is crappy.

It’s crappy because it tells you how to use a tool (a custom SQL interpolator) without helping you understand the cause of the problem that the tool is trying to solve. You could read this ChatGPT explanation about avoiding SQL injection in Scala and not be any wiser about how to avoid that problem in other programming languages.

Worse, you would never learn from this explanation that the underlying cause of SQL injection is the same as for cross-site-scripting holes and a host of other logic and security problems in software. That's because ChatGPT was trained on explanations of these problems scraped from the internet, and 99% of those explanations are superficial because the people who wrote them didn't understand the underlying issues.

But if you deeply understand the following, you will never make this kind of mistake again in any programming language:

1. Every string is drawn from an underlying language and must conform to the syntax and semantics of that language.

2. To combine strings safely, you must ensure that they are all drawn from the same language and are combined according to that language’s syntax and semantics.

Therefore, as a programmer, you must (a) understand the language beneath each and every string, (b) combine strings only when you can prove that they have the same underlying language, and (c) combine strings only according to that underlying language’s syntax and semantics.

If you understand these things, you will know how to avoid all SQL injection and XSS holes and related problems in all programming languages. Things like escaping will make sense: it converts a string in one language into its equivalent string in another language. Further, you will know when you can safely delegate some of your responsibilities to tools such as parsers, type systems, custom SQL interpolators, and the like.
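As a minimal illustration of rule 2 (the escapeHtml helper here is hypothetical, just to show escaping as translation between languages, not any particular library's API):

    // A string from the "plain text" language, containing characters that are
    // syntax in the HTML language.
    val comment = """He said "1 < 2" & left"""

    // Hypothetical helper: translate a plain-text string into its equivalent
    // string in the HTML language. & must be handled first to avoid double-escaping.
    def escapeHtml(s: String): String =
      s.replace("&", "&amp;")
        .replace("<", "&lt;")
        .replace(">", "&gt;")
        .replace("\"", "&quot;")

    // Both pieces are now strings of the same language (HTML), so combining is safe.
    val html = "<p>" + escapeHtml(comment) + "</p>"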

But you wouldn’t learn any of this from the ChatGPT explanation you received. Worse, you wouldn’t even think to ask for this deeper explanation because you would have no reason to suspect from ChatGPT’s explanation that this deeper explanation even existed.

In any case, I appreciate your willingness to continue this conversation. It’s been fun and educational and has forced me to articulate some of my ideas more clearly. Thanks!


But I delegate checks to tools all the time. e.g. I could spend my time checking whether locks are all used correctly in our code, or I could use libraries designed to force correctness[0]. An LLM isn't an ideal solution to linting, but if you're stuck with a language with a weak type system maybe that's all you can reasonably do.

The actual problem is that you're using strings at all. The SQL solution (that the scala macros do) is to use prepared statements and bound parameters, not to escape the string substitution. Basically, work in the domain, not in the serialized representation (strings). Likewise with XSS, you put the stuff into a Text node or whatever and work directly with the DOM so the structural interpretation has all already happened before the user data is examined.

But "work in the domain as much as possible" is a good idea for a whole bunch of reasons (as chatgpt said).

It did also several times indicate there was more to the story. It didn't just say "because that way is safer"; it said that it:

> Builds a structured query object, not a raw string

> Parameterizes inputs safely (turns $id into ? + bound parameter)

> Often adds compile-time or runtime checks

> Instead of manipulating strings, you’re working with a query AST / fragment system

And concluded by saying there's absolutely more detail, and that it's important to understand:

> If you tell me which library you’re using (Doobie, Slick, Quill, etc.), I can show exactly what guarantees sql"..." gives in your stack—those details matter quite a bit.

On vibe coded "garbage", I expect there won't be much of a market for such things (why would there be when you can also just vibe it?), so it will more be a personal computing improvement, which already limits the blast radius (and maybe already improves the situation vs the precarious-by-default SaaS/cloud proliferation today even with poor security). I also think tooling and vibe security will be better than median professional level by the time it's actually as easy as people claim it is to vibe code an application anyway. i.e. security (which is an active area of improvement to sell to professionals) will probably be "solved" before ease-of-use. Partly exactly because issues like code injection are already "solved" in better programming languages (which are also more concise and have better tooling/libraries in general), so the bot just needs to default to those languages.

[0] https://news.ycombinator.com/item?id=47693559


> But I delegate checks to tools all the time. e.g. I could spend my time checking whether locks are all used correctly in our code, or I could use libraries designed to force correctness[0].

Do you believe that because you can delegate some responsibilities without sacrificing important requirements that it follows that you can delegate all responsibilities without sacrificing important requirements? Do you not understand the difference between delegating to the computer proofs such as type checking that the computer can discharge faithfully without error and delegating something as wide and perilous as security to something as currently flawed as AI?

> An LLM isn't an ideal solution to linting, but if you're stuck with a language with a weak type system maybe that's all you can reasonably do.

No, in such a situation you can add LLM-based checks to your responsibility for security. But you can’t delegate away your responsibility to LLMs and say that you care about security. AI ain’t there yet.

> The actual problem is that you're using strings at all.

What percentage of the world’s existing code do you believe does not use strings at all? Tragically, that is the world we live in.

> Basically, work in the domain, not in the serialized representation (strings).

Sure, but you can’t do all your work in the domain. At some point you must take data from the outside world as input or emit data as output. And, even if you are lucky enough to work in a domain where someone has done the parsing and serialization and modeling work for you so that you have the luxury of a semantic model to work with instead of strings, who had to write that domain library? What rules did that person have to know to write that library without introducing security holes?

> [ChatGPT] did also several times indicate there was more to the story.

Great. Then show me how a person who didn’t know of the existence of the rules I shared with you in my previous post would naturally arrive at them by continuing your conversation with ChatGPT.

> security (which is an active area of improvement to sell to professionals) will probably be "solved" before ease-of-use.

I think that this is a naive hope. Security is different from virtually all other responsibilities in computing, such as ease of use, because getting it right 99.99% of the time isn’t good enough. In security, there is no “happy path”: it takes just one vulnerability to thoroughly sink a system. Security is also different because you must expect that adversaries exist who will search unceasingly for vulnerabilities, and they will use increasingly novel and clever methods. Users won’t probe your system looking for ease-of-use failures in the UI. So if you think that AIs are going to get security right before ease-of-use, I think you are likely to be mistaken.


That doesn't answer the question. It just restates the problem. Why aren't they doing diligence on what they're accepting from their business partners, or what types of partners they're working with? There's no reason they couldn't know the company deals with health data and place it under additional scrutiny.

Not just executives. They don't will these things into existence. Someone had to build functionality to send user data to Facebook.

Not to side with this behaviour, but I think if you consent to it in the Ts & Cs then it's legal. And that makes sense - otherwise how else do you agree to things or not agree to them?

The point of laws is that T&Cs don't matter if the law has something to say. If the law e.g. were to criminalize sharing health information in this way, then it doesn't matter if the users agreed; you still go to prison for doing it.

> if you consent to it in the Ts & Cs then it's legal.

No. In a paper contract, you can scratch off things you don't agree with. You can negotiate.

You can't do that in Ts & Cs. For example, Ts & Cs often unilaterally change with no ability for you to review or cancel or undo. It's trivially easy to write software which uses services without ever agreeing to Ts & Cs. So it's not really a legal contract.

> And that makes sense - otherwise how else do you agree to things or not agree to them?

Through a real negotiation. With a paper contract, that both parties sign, and both parties receive a copy of, and that can't be unilaterally changed.


> still sell the data

So add liability for the buyers of the data or any services derived from the data (e.g. targeted ads). Make it so large advertisers demand audits showing privacy laws are being followed. Also have personal criminal liability for people building and maintaining systems that collect, store, or process data for illegal purposes. Executives, PMs, engineers, the whole lot. Put them in prison if they continue.

