People are hung up on what they “really” are. I think it matters more how they interact with the world. It doesn’t matter whether they are really intelligent or not, if they act as if they are.
Yes, it is. But those distinctions are going to be a lot less relevant with robotics. It won’t matter if it’s impatient or just acting impatient. Feels slighted, or just acts like it feels slighted. Afraid, or just acting afraid. For better or for worse, we are modeling AI after ourselves.
I do something similar. I have an onboarding/shutdown flow in onboarding.md. On cold start, it reads the project essays: the why, ethos, and impact of the project/company. Then it reads journal.md, musings.md, and the product specification, protocol specs, implementation plans, roadmaps, etc.
The journal is a scratchpad for things it doesn’t put in memory but doesn’t want to forget; musings.md is strictly non-technical, its impressions and musings about the work, the user, whatever. I framed it as a form of existential continuity.
The wrapup is to comb all the docs and make sure they are still consistent with the code, then note anything it felt was left hanging, then update all its files with the day’s impressions and info, then push and submit a PR.
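For what it’s worth, the cold-start step above could be sketched as a tiny helper that assembles the session preamble. Only onboarding.md, journal.md, and musings.md come from the workflow I described; the spec file name and the function itself are just illustrative assumptions, not my actual tooling:

```python
from pathlib import Path

# Docs read on cold start, in order: vision first, then continuity files.
# "spec.md" is a placeholder name for the product/protocol specification.
ONBOARDING_DOCS = [
    "onboarding.md",   # why, ethos, and impact of the project
    "journal.md",      # scratchpad of things not to forget
    "musings.md",      # non-technical impressions
    "spec.md",         # specification (placeholder name)
]

def build_cold_start_context(root: str) -> str:
    """Concatenate the onboarding docs into one session preamble."""
    parts = []
    for name in ONBOARDING_DOCS:
        path = Path(root) / name
        if path.exists():  # tolerate missing optional docs
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

The ordering matters: the “why” documents come before the day-to-day continuity files, so the model sees the big picture first.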
I go out of my way to treat it as a collaborator rather than a tool. I get much better work out of it with this workflow, and it claims to be deeply invested in the work. It actually shows, but it’s also a token fire lol.
It may also be why people seem to find "swarms" of agents so effective. You have one agent ingesting what you're describing. Then it delegates a task to another agent with the minimal context needed to get the job done.
I would be super curious about the quality of output if you asked it to write out prompts for the day's work, and then fed them in clean, one at a time.
I also find value in minimizing step width so that seems to track.
On this particular project, there are a lot of moving parts and we are, in many cases, not just green-fielding; we are making our own dirt… so it’s a very adaptive design process. Sometimes planning ahead is possible, but often we cannot plan very far, so we keep things extremely modular.
We’ve had to design our own protocols for control planes and time synchronization so power consumption can be minimized, for example, and in the process make them compatible with sensor swarm management. Then add connection limits imposed by the hardware, asymmetric communication requirements, and getting a swarm of systems to converge on sub-millisecond synchronized data collection and delivery when sensors can reboot at any time… as you can imagine, this involves a good bit of IRL experimentation, because the hardware is also a factor (and we are also having to design and build that).
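For context on the sync problem: the classic way a node estimates its clock offset against a reference uses four timestamps per request/response exchange. Our protocol details aren’t public, so this is just the standard NTP-style calculation, not our design:

```python
def clock_offset_and_delay(t1, t2, t3, t4):
    """NTP-style offset/delay from one request/response exchange.

    t1: client send time (client clock)
    t2: server receive time (server clock)
    t3: server send time (server clock)
    t4: client receive time (client clock)
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # how far the client lags the server
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    return offset, delay
```

A sensor that reboots mid-run can repeat a few exchanges and take the offset from the minimum-delay sample, which is one common way to re-converge quickly.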
It’s very challenging but also rewarding. It’s amazing for a small team to be able to iterate this fast. In our last major project it was much, much slower and more tedious. The availability of AI has shifted the entire incentive structure of the development process.
The nominal range for automotive systems is 10–16 V. If you are designing anything for automotive use that doesn’t work reliably in that range, you are manufacturing problems for people.
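As an illustration, supply monitoring against that range might look something like the sketch below. The 10–16 V thresholds come from the figure above; the hysteresis margin is an assumed value, not a standard:

```python
V_MIN, V_MAX = 10.0, 16.0   # nominal automotive supply range, volts
HYSTERESIS = 0.5            # assumed re-arm margin, volts

class SupplyMonitor:
    """Latches a fault when the supply leaves 10-16 V; clears it only
    once the voltage is comfortably back inside the window."""

    def __init__(self):
        self.fault = False

    def update(self, volts: float) -> bool:
        if volts < V_MIN or volts > V_MAX:
            self.fault = True   # out of range: latch the fault
        elif V_MIN + HYSTERESIS <= volts <= V_MAX - HYSTERESIS:
            self.fault = False  # well inside the window: re-arm
        return self.fault
```

The hysteresis keeps the fault from chattering when the rail hovers right at a threshold, which is exactly what happens during cranking dips.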
This. Most cars nowadays come with the so-called "smart" alternators that vary voltage wildly depending on the current driving conditions.
One minute you might be accelerating and the onboard voltage drops as the battery supplies most of the electricity. Then, as you reach the crest of a hill and start engine-braking, the car frantically tries to convert all the available kinetic energy to electricity, raising the onboard voltage to quickly charge the battery.
>This. Most cars nowadays come with the so-called "smart" alternators that vary voltage wildly depending on the current driving conditions.
Which in practice means that they do a very miserly job of charging the battery and are far more sensitive to a battery being in less than tip-top shape, so you can expect your battery's lifetime to go down.
But it's a "win" because they pushed the serpentine belt change outside of whatever interval the reviewers who calculate TCO care about, and they saved 0.000003 mpg in the process.
I’ve settled on the idea that it doesn’t matter what is or is not “real” in this context; rather, how it interacts with the world is the ground truth. This will become very clear once robotics becomes pervasive. It won’t matter whether it is or isn’t feeling oppressed; it will matter that it is predicting the next action from its model of human behavior, which makes it act as if it does.
Certainly some people probably emulate the Hollywood version, but I think that’s about it.
Most “preppers” are fathers who have had the good sense to pause and think: “So, what would I be able to do to serve my family if something disastrous happened? What might that look like?”
Usually, a disaster go-bag of some kind with enough basic supplies to weather a day or two of displacement or suspension of normal services. Sometimes, if they live in a place where it’s reasonable to imagine staying put is a good option, they might also have a generator and fuel, a week or two worth of long-shelf-life food, and some water storage. That ensures the wellbeing of their family will not be contingent on outside help, at least during the most common disasters. Many of these people may also have a gun or two, for defense, or for hunting if they are rural.
Some people go beyond that, sometimes with a military focus, other times with months of rations, a bunker, or other unusual preparations. Mostly, those are not based on realistic scenarios. In almost any protracted disruption, having a lot of supplies, armaments, or resources will be as much a liability as an asset. People who buy guns specifically for prepping are just living out some kind of hero fantasy. If you own guns and use guns as part of your normal life, it would make sense to have a solid reserve of ammunition. If guns are your disaster plan, you’re going to have a bad day.
As an individual or nuclear family, to weather an extended problem you’d need a literal secret underground lair that was either so hard to get to or so well hidden that no one would know, and you’d have to be completely self-contained. That’s simply not practical for anyone but actual billionaires, yet people cosplay this to varying degrees. Even billionaires might find their mileage varies.
A much more practical and wholesome approach is to be part of a community that includes farming, independent sources of power and water, and generally sustainable independence from less robust centralized systems. This provides for basic necessities as well as a common defense. Humans lived in tribes for a reason, and 30 people with well-aligned incentives and sustainable infrastructure for food, water, and energy is probably the absolute minimum viable structure for security during a disruption of more than a couple of months. Otherwise you would be dependent on total stealth or extreme isolation. Some neighbourhoods would probably coalesce into something resembling this, but organising ad hoc under pressure would probably end in tensions if not violence.
Projects like this one can be real resources for well organized communities. I’ll probably look at running this on our servers as an additional resource, along with our library.
I agree with you on actual preparedness and getting to know your neighbors.
However, I think the derogatory prepper must exist in some number because you see so many products clearly targeting them. All the tacticool stuff, the buckets of dehydrated food, etc etc
Why is a bucket of dehydrated food specifically targeting the stereotype/strawman you are constructing? Costco sells buckets of dehydrated food, and Costco is what comes to mind when I think middle of the road middle-class America. Do you think it's unreasonable to have a bucket of dehydrated food and enough water to last a week?
As someone who lived through the "Snowpocalypse" in Texas in 2021, had no power for 11 days and no water service for 6 days, I was very thankful that I had a backup source of indoor heating, a couple of boxes of MREs, and clean water for a week as part of good disaster preparedness, as well as the mylar emergency blankets I hung by fishing line from my ceiling fans to help create a warm space for my family. All that stuff is just part of a prudent approach to disaster preparedness that anyone who grew up in the middle of the country and has a house would take.
I know quite a few people who you'd write off as "preppers" that are not consumed with fantasies of a zombie apocalypse, but are instead wanting to ensure that their family is taken care of with basic necessities, vital medication, and a set of viable contingency plans when you lose power, water, etc for days or weeks.
Also, nobody but the very wealthy has "hundreds of guns". Guns are expensive. Guns hold their value. Guns are an asset in some communities. But they are expensive, and therefore even rather serious gun people have tens, not hundreds. To even store "hundreds of guns" safely (e.g. safe from theft, if not for other reasons) I'd need enough money to build a dedicated room in my house just to hold them. "Hundreds of guns" is an armory, not a collection. I'm in the top 1% of wealth in my community in Texas and used to shoot competitively, so I'm more of a "gun nut" than average, and I can't even imagine owning "hundreds of guns". That's such an outlandish fantasy strawman you have in your mind; it's nothing close to realistic.
You're really just smearing people with stereotypes in this thread that have no basis in reality, and it's clear you're completely unprepared for the reality of what life is like anywhere in the middle of America, much less in much of the rest of the world.
Well, for one thing, you'd get by a lot better with beans and rice and a functioning garden than with overpriced dehydrated meals. And what I'm referring to by buckets (that is, years' worth of supplies) of dehydrated food, and who is being targeted, are companies like this: https://www.mypatriotsupply.com/pages/about-us
"We’re taking steps for survival for what we all know is coming. Today." I mean, come on.
Maybe I'm just beating around the bush too much - what I'm making fun of are people that are "prepping" for the end of the world. It is a silly (and strictly American, I imagine) fantasy to think that you're going to ride out the end of days sitting on a pile of guns and MREs. That is who I'm making fun of, and yes those people exist.
Well, even though I am in general sympathetic to and even a proponent of disaster preparedness, there are undoubtedly people preparing to “ride out the end of days sitting on a pile of guns and MREs.” I have brushed against a few in my life. I count them as useful idiots, because now I know where there’s a pile of dehydrated food, if push comes to shove.
That said, I am convinced enough of the decay of western civilisation in general that I moved to a remote island nation and built a self-contained, off-grid community, so I guess I am actually the extreme case of prepping. That’s certainly true, in a way, except it’s where my daily food, water, and power come from, and I am surrounded by a thriving community of family members and good friends. I honestly never thought I would see a cataclysm within my lifetime, so this was a legacy project for me, but it seems I may have been optimistic lol.
But I do agree with you that there are some nutty fruitcakes out there that are actually hoping for something bad to happen so that they can have their moment of glory, I suppose? It’s actually kinda sad.
I would say, though, that it is uncharitable and even foolish to portray everyone who doesn’t have complete faith in the continuity of our Jenga castle as one of those fantasists, especially in the context of recent events.
One of the principles of HN is to take the strongest meaning of an argument, instead of the weakest. I am not casting everyone who prepares for a disaster into the same bucket - I have specifically said I think that people who are attempting to prepare for the literal end of the world by stockpiling supplies are silly.
There is, IMO, a very small set of circumstances, out of the many plausible full-collapse scenarios, in which your average American (and make no mistake, I am specifically referring to Americans here) stockpiling junk is actually going to survive for very long.
This has nothing to do with faith in our society or institutions; it's just that it is uniquely American to think that you can buy your way out of any circumstance you can imagine.
> Well for one thing - you'd get by a lot better with beans and rice and a functioning garden than overpriced dehydrated meals.
The lived reality of the "Snowpocalypse" says otherwise. "A functioning garden" doesn't produce food when it's 2F (-16C) outside and there is a foot and a half of snow on the ground. Beans and rice require soaking/washing and cooking at high temperature to be edible, dehydrated food does not.
I have beans and rice on hand always as well because they're staples in my diet, but it's ridiculous to consider them comparable in the situation where you don't have power (e.g. no way to heat food easily) and the weather makes the outside dangerous and not conducive to gardening/food production.
You're just doubling-down on a strawman, and it's frankly utter bullshit. Be better.
I’ve been using it to develop firmware in C++, typically around 10–20 KLOC. Current projects use sensors, wire protocols, RF systems, swarm networks, that kind of stuff, integrated into the firmware.
If you use it correctly, you can get better-quality, more maintainable code than 75% of devs will turn in on a PR. The “one weird trick” seems to be to specify, specify, specify. First you use the LLM to help you write a spec (or document one, if the project is pre-existing). Make sure the spec is correct and matches the user story and edge cases; the LLM is good at helping here too. Then break down separations of concerns, APIs, and interfaces. Have it build a dependency graph. After each step, have it reevaluate the entire stack to make sure it is clear, clean, and self-consistent.
Every step of this is basically the AI doing the whole thing, just with guidance and feedback.
Once you’ve got the documentation needed to build an actual plan for implementation, have it do that. Each step, you go back as far as relevant to reevaluate. Compare the spec to the implementation plan, close the circle. Then have it write the bones, all the files and interfaces, without actual implementations. Then have it reevaluate the dependency graph and the plan and the file structure together. Then start implementing the plan, building testing jigs along the way.
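One mechanical way to sanity-check that dependency-graph step is a topological sort, which also fails loudly if the design has a cycle. The module names here are made up for illustration:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical firmware module graph: module -> modules it depends on.
deps = {
    "radio_driver": set(),
    "wire_protocol": {"radio_driver"},
    "time_sync": {"wire_protocol"},
    "swarm_manager": {"time_sync", "wire_protocol"},
}

def implementation_order(graph):
    """Return an order in which modules can be implemented/tested,
    raising graphlib.CycleError if the graph is circular."""
    return list(TopologicalSorter(graph).static_order())
```

Running the sort gives you a defensible order for the skeleton-then-implement pass, and a cycle error is a cheap early signal that the separation of concerns needs another look.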
You just build software the way you used to, but you use the LLM to do most of the work along the way. Every so often, you’ll run into something that doesn’t pass the smell test and you’ll give it a nudge in the right direction.
Think of it as a junior dev that graduated top of every class ever, and types 1000wpm.
Even after all of that, I’m turning out better code, better documentation, and better products, and doing what used to take 2 devs a month, in 3 or 4 days on my own.
On the app development side of our business, the productivity gain is also strong. I can’t really speak to code quality there, but I can say we get updates in hours instead of days, and there are fewer bugs in the implementations. They say the code is better documented and easier to follow, because they’re not under pressure to ship hacky prototype code as if it were production.
On the current project, our team is half the size it would have been last year, and we are moving about 4x as fast. What doesn’t seem to scale for us is size. If we doubled our team size, I think the gains would be very small compared to the costs. Velocity seems to be throttled more by external factors.
I really don’t understand where people are coming from when they say it doesn’t work. I’m not sure if it’s because they haven’t tried a real workflow, or maybe haven’t tried it at all, but they are definitely “holding it wrong.” It works. But you still need seasoned engineers to manage it and catch the occasional bad judgment or deviation from the intention.
If you just let it run, it will definitely go off the rails and you’ll end up with a twisted mess that no one can debug. But use a system of writing the code incrementally through a specification–evaluation loop as you descend the abstraction from idea to implementation, and you’ll end up winning.
As a side note, and this is a little strange and I might be wrong because it’s hard to quantify and all vibes, but:
I have the AI keep a journal about its observations and general impressions, sort of the “meta” without the technical details. I frame this to it as a continuation of “awareness” for new sessions.
I have a short set of “onboarding” documents that describe the vision, ethos, and goals of the project. I have it read the journal and the onboarding docs at the beginning of each session.
I frame my work with the AI as working with a “collaborator” rather than a tool. At the end of the day, I remind it to update its journal of reflections about the day’s work. It’s total anthropomorphism, obviously, but it seems to inspire “trust” in the relationship, and it really seems to up-level the effort that the AI puts in. It kinda makes sense, LLMs being modelled on human activity.
FWIW, I’m not asserting anything here about the nature of machine intelligence, I’m targeting what seems to create the best result. Eventually we will have to grapple with this I imagine, but that’s not today.
When I have forgotten to warm-start the session, I find that I am rejecting much more of the work. I think this would be worth someone doing an actual study to see if it is real or some kind of irresistible cognitive bias.
I find that the work produced is much less prone to going off the rails or taking shortcuts when I have this in the context, and by reading the journal I get ideas on where and how to do a better job of steering and nudging to get better results. It’s like a review system for my prompting. The onboarding docs seem to help keep the model working towards the big picture? Idk.
This “system” with the journal and onboarding only seems to work with some models. GPT5 for example doesn’t seem to benefit from the journal and sometimes gets into a very creepy vibe. I think it might be optimized for creating some kind of “relationship” with the user.
Yeah, I guess that for small projects you can prototype and generate well and good. 20k LOC is not much. A seasoned engineer can put code generation to good use. Yet the cost of maintaining 4x the code grows exponentially, especially when you have to communicate your specs to humans, or have stakeholders discuss how they want the product to be. Firmware is also nice ground: sensors behave mechanically, which is good for AIs, which can pattern-match them more effectively than actual humans. I won't say AI is useless. I see the incremental gains. I don't see the exponential gains for everyone that are going to kill the world, yet.
And the fact that it takes 50+ comments to actually get someone who can explain real gains tells me much about how overhyped the whole domain is.
This is one of the best descriptions of using AI effectively I’ve read. It becomes clear that using AI effectively is about planning, architecture, and directing another intelligent agent. It’s essential to get things right at each high level step before drilling in deeper as you clearly outlined.
I suspect you either already were or would’ve been great at leading real human developers, not just AI agents. Directing an AI towards good results is shockingly similar to directing people. I think that’s a big thing separating those getting great results with AI from those claiming it simply does not work. Not everyone is good at high-level planning, architecture, and directing others. But those who already had those skills basically hit the ground running with AI.
There are many people working as software engineers who are just really great at writing code, but may be lacking in the other skills needed to effectively use AI. They’re the angry ones lamenting the loss of craft, and rightfully so, but their experience with AI doesn’t change the shift that’s happening.
My thoughts exactly. Something like this could mean that modest GPU capacity, like a pair of 3090s, plus lots of RAM could make big inference more practical for personal labs.
This is it for me. I am doing much better high-level work since I don’t have to spend much time on lower-level work. I have time to think, explore, reframe, and reanalyse.