The exoskeleton framing resonates, especially for repetitive data work. Parts where AI consistently delivers: pattern recognition, format normalization, first-draft generation. Parts where human judgment is still irreplaceable: knowing when the data is wrong, deciding what 'correct' even means in context, and knowing when to stop iterating.

The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.


And your muscles atrophy, which makes it a pretty good analogy.


Use the exoskeleton at the warehouse to reduce stress and injury; just keep lifting weights at home so you don't atrophy.


I guess so, but if you have to keep lifting weights at home to stay competent at your job, then lifting weights is part of your job, and you should be paid for those hours.


The hallucination-in-analysis problem is real and often undersold. Pattern that works well: use the LLM only to structure already-extracted data (parse fields, normalize formats), then apply deterministic logic for anything numerical. That way the LLM is doing classification/extraction where it's reliable, and you're not trusting it to compute or compare values where it isn't.
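
A minimal sketch of that split, assuming a hypothetical call_llm helper standing in for whatever client you actually use (field names and the threshold are illustrative):

    import json

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever LLM client you actually use.
        raise NotImplementedError

    def extract_fields(raw_record: str) -> dict:
        # The LLM does extraction only: messy text -> named fields.
        prompt = ("Extract vendor, invoice_date (ISO 8601), and amount "
                  "from this record. Reply with JSON only.\n\n" + raw_record)
        return json.loads(call_llm(prompt))

    def validate(record: dict) -> bool:
        # Deterministic code handles anything numerical; the model is
        # never asked to compute or compare values.
        amount = float(record["amount"])
        return 0 < amount < 1_000_000

The boundary is the point: nothing downstream of extract_fields trusts the model with arithmetic.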


One underappreciated reason for the agentic gap: Gemini tends to over-explain its reasoning mid-tool-call in a way that breaks structured output expectations. Claude and GPT-4o have both gotten better at treating tool calls as first-class operations. Gemini still feels like it's narrating its way through them rather than just executing.
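
A cheap mitigation, sketched below (not tied to any provider's SDK): recover the first balanced JSON object from the narrated response before handing it to your dispatcher. The brace matching is naive and will be confused by braces inside JSON strings, which is fine for a sketch:

    import json

    def extract_tool_call(text: str) -> dict:
        # Tolerate narration around the payload: find the first
        # balanced {...} span that parses as JSON.
        start = text.find("{")
        while start != -1:
            depth = 0
            for i in range(start, len(text)):
                if text[i] == "{":
                    depth += 1
                elif text[i] == "}":
                    depth -= 1
                    if depth == 0:
                        try:
                            return json.loads(text[start:i + 1])
                        except json.JSONDecodeError:
                            break  # balanced but not JSON; try the next "{"
            start = text.find("{", start + 1)
        raise ValueError("no parseable tool call found")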


I agree with this; of the three, it feels like the model most likely to leave its high-level narration behind as code comments.

