
"Batteries included" ecosystems are the only persistent solution to the package manager problem.

If your first-party tooling contains all the functionality you typically need, you can be productive with zero 3rd party dependencies. In practice you will tend to have a few, but you won't be farming out critical things like HTTP, TCP, JSON, string sanitization, or cryptography. These are beacons for attackers: everything depends on this stuff, so the motivation for attacking these common surfaces is high.

I can literally count on one hand the number of 3rd party dependencies I've used in the last year. Dapper is the only regular one I can come up with. Sometimes ScottPlot. Both of my SQL providers (MSSQL and SQLite) are first party as well. This is a major reason why they're the only SQL providers I use.

Maybe I am just so traumatized from compliance and auditing in the regulated software business, but this feels like a happier way to build software too. My tools tend to stay right where I left them the previous day. I don't have to worry about my hammer or screwdrivers stealing all my bitcoin in the middle of the night.


Tim from the Copilot coding agent team here. We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.

We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.


Because it's Microsoft. Categorically incapable of respecting their users.

What's interesting to me is how many people went like 'Oh, Satya really gets open source, this time it will be different'.

https://news.ycombinator.com/item?id=17225599


I can't even imagine the scale of the impact with Axios being compromised, nearly every other project uses it for some reason instead of fetch (I never understood why).

Also from the report:

> Neither malicious version contains a single line of malicious code inside axios itself. Instead, both inject a fake dependency, plain-crypto-js@4.2.1, a package that is never imported anywhere in the axios source, whose only purpose is to run a postinstall script that deploys a cross-platform remote access trojan (RAT)

Good news for pnpm/bun users who have to manually approve postinstall scripts.
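For reference, a sketch of what that approval looks like (field names from the pnpm and Bun docs; `esbuild` here is just an illustrative package that legitimately compiles at install time, not something from the report). pnpm v10+ and Bun skip dependency lifecycle scripts unless you allowlist them in package.json:

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  },
  "trustedDependencies": ["esbuild"]
}
```

A fake dependency like plain-crypto-js would never make it onto such an allowlist, so its postinstall payload simply never runs.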


> We won't do something like this again.

Microsoft has been pulling user hostile crap for decades, so either "we" or "like this" (or both) is probably not super accurate. ;)


I don't know why you are asking about pedophiles and sex trafficking - The Dow is over 50,000 right now, the S&P is at almost 7,000, and the Nasdaq is smashing records. Americans' 401(k) savings are booming, that's what we should be talking about.

Undercover mode also pretends to be human, which I'm less ok with:

https://github.com/chatgptprojects/claude-code/blob/642c7f94...


The Android verification is such a broken experience. Recently I decided to purchase a dev account for my company. So far:

1) Provided my company DUNS number etc. once to create the payment profile. I did this some time ago, don’t remember the details, but it was an involved verification process and it is marked as a verified business payment profile.

2) Later on, at the payment step, I verified myself with a passport and bank statement to be able to actually pay with a proper HSBC bank card. Not a shady pre-paid card or something; those are not accepted anyway.

3) After I paid, I was told that I now need to verify my identity once more, but this time with the passport and the incorporation certificate or some other company document.

Fingers crossed that in a few days it will be verified. While waiting, it tells me that there are still website and email verifications to do once the previous step is done. I already verified my email a few times before paying.

It’s painful, slow and annoying, because if you fail at a step (e.g. one that needs verification taking days, which you are only told about at the payment step) you have to start over with the forms.

I just remembered why I never use Android. It seems like no one owns the process, and as a result you get an unpolished, shitty experience that fulfills the requirements of god knows how many people who work at the same company but don’t talk to each other.


If you play a single round of Russian roulette with a revolver, it is likely you will not die, but it is also not safe to do that. The same idea applies here.

The foam shedding/impact problem was heavily analyzed throughout the Shuttle program, and recognized as a significant risk. Read the CAIB report for a good history.

That report also describes the groupthink dynamic at NASA that made skeptical engineers "come around" for the good of the program in the past. Calling Camarda an outlier is just a different way of stating this problem.


src/cli/print.ts

This is the single worst function in the codebase by every metric:

  - 3,167 lines long (the file itself is 5,594 lines)
  - 12 levels of nesting at its deepest
  - ~486 branch points of cyclomatic complexity
  - 12 parameters + an options object with 16 sub-properties
  - Defines 21 inner functions and closures
  - Handles: agent run loop, SIGINT, rate-limits, AWS auth, MCP lifecycle, plugin install/refresh, worktree bridging, team-lead polling (while(true) inside), control message dispatch (dozens of types), model switching, turn interruption recovery, and more

This should be at minimum 8–10 separate modules.

I agree. It's not an advertisement, it's simply a piece of information about your particular choice of technology.

--------------

Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io


I actually love these ads and also the way Claude injects itself as a co-author.

Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.

I think we should continue encouraging AI-generated PRs to label themselves, honestly.

I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.


I’ll never understand why they ruined GitHub. They had everything they needed - the one place in the world where 99% of open source projects were hosted, where all the discussions happened. A product that people were so used to that it was a no-brainer when it came to hosting private repos. And they had to ruin it and give space to GitLab and other competitors. What a waste…

I find most developers fall into one of two camps:

1. You treat your code as a means to an end to make a product for a user.

2. You treat the code itself as your craft, with the product being a vector for your craft.

The people who typically have the most negative things to say about AI fall into camp #2, where AI is automating a large part of what they considered their art, while enabling people in camp #1 to iterate on their product faster.

Personally, I fall into the first camp.

No one has ever made a purchasing decision based on how good your code is.

The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.

With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries the rest of us devs depend on, etc.).

I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.


Because they want it to be executed quickly and cheaply without blocking the workflow? Doesn’t seem very weird to me at all.

I asked copilot how developers would react if AI agents put ads in their PRs.

>Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.

Sometimes AI can be right.


> nearly every other project uses it for some reason instead of fetch (I never understood why).

Fetch wasn't added to Node.js core until version 18, and wasn't considered stable until version 21. Axios has been around much longer and was made part of popular frameworks and tutorials, which helps continue to propagate its usage.
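To illustrate the ergonomic gap (a sketch of mine, not axios source): axios parses JSON and rejects on non-2xx statuses automatically, while with the global fetch (Node 18+) you do both by hand:

```javascript
// What axios gives you for free, written out against bare fetch:
// throw on HTTP error statuses, then parse the JSON body.
async function getJson(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return res.json();
}
```

Add interceptors, timeouts, and retries on top and you can see why a wrapper library stayed attractive even after fetch landed in core.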


Please call it what it is and always has been:

   I.N.S.T.A.L.L.I.N.G   S.O.F.T.W.A.R.E
"side load" is like "jay walking' seeks to stigmatize humans being human.

In chapter 11 of All Quiet on the Western Front, Paul and his unit find an abandoned food cache in the middle of no man's land. Instead of secreting the food away back to their lines, where they will have to share it, they decide to just cook and eat it right then and there. But a spotter plane from the Allies sees the smoke and then begins shelling their position. Cue a terrifying, if hilarious, scene where the soldiers try to cook pancakes as shells explode around them. Paul, as the last to leave, takes his pancakes on a plate and dashes out, timing his escape between bursts, and just barely makes it back to the German trenches. It's a rare comic scene in an otherwise horrific and very real look at WW1.

The scene in the book is just so similar to the lines in Ukraine these days, nearly a hundred years later. Instead of spotter planes near the dawn of aviation, we have satellites and drones (similarly quite new in the role). Instead of just shells and fuzing experts, we have FPV drones and much more sophisticated shells. Instead of buddies from the same towns all huddled together in cold muddy holes, we have deracinated units spread far and wide, lying in fear of thermal imaging. This results in a no man's land again, but a dozen kilometers wide instead of a few hundred meters, and somehow more psychologically damaging.

My point is that absent any tech that will miraculously be invented and deployed widely in the next few weeks, the Iran war, should it be a ground one, is going to be just like Ukraine is today, which is somehow a worse version of trench warfare.

Even casual Victoria II players know that WW1 is essentially the final boss of the game. And the 'lesson' of Vicky II is essentially: do not fight WW1, it ruins everything.

To be clear: The US is choosing to fight a worse version of WW1 without even a stated (or likely even known) condition of victory. We're about to send many thousands of boys to suffer and die for not 'literally nothing', but actually literally nothing.


Hey boss: “40M users are running a cracked version of YouTube premium on mobile, what can we do ?”

Wasn’t diversifying US energy sources also a national security issue? And wind energy was set aside because, wait for it, they killed animals. Birds to be specific.

> You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.

Exactly, thank you for putting it like that.

So far it’s been my observation that it’s only the people who think like the OP who put the situation in the terms they did. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, they’re just different, neither is better”, it lends legitimacy to their position.

For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.



Hmm, green account, no comments or submissions, generated website for an unsigned app with power user features. That’s a no from me dawg.

Blame it on whatever you like. Oracle has been a rudderless leech for nearly 30 years now.

- overpricing the database led to a predictable exodus and new players with oftentimes better performance.

- acquisition of MySQL led to a predictable exodus and new players like MariaDB with oftentimes better performance.

- Oracle Cloud arrived late to spectacular skepticism and low turnout from customers who had been burned by high costs and by decisions like the death of OpenSolaris. It exists on federal life support these days by the grace of the prevailing administration.

- more than 80 products, with hundreds of thousands of patches and updates, yet no coherent or meaningful reform of the build for more than forty years. DB 19c still ships broken for redhat 9 as a means of driving users to oracle linux, and patching the installer is a 1970s experience in itself. DB 23's greatest improvement has been to tack the letters "AI" onto it to chum what shallow AI waters Oracle deigns to tread outside of an investment portfolio.

- dumping cash into oracle enterprise linux despite it only having around 2500 active corporate users.

This is nearly 20% of the company being laid off.


This is a great writeup! Perhaps I can put in a plug for the create_ap script which I have been maintaining for many years (http://github.com/dlenski/create_ap).

It's a shell script that allows you to turn any ol' Linux computer into a WiFi router with one quick command line.
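For example (the interface names, SSID, and passphrase here are placeholders of mine, not from the comment; the real command needs root and an AP-capable WiFi card):

```shell
# Share eth0's upstream connection over a new WPA2 access point on wlan0.
create_ap wlan0 eth0 MyAccessPoint MyPassPhrase
```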

By default, it will set up your WiFi card as an access point (with WPA2/3, MAC filtering, etc.), set up packet forwarding and routing, and run a DHCP and DNS server. It will generally pick sensible defaults, but it's also highly customizable. If your WiFi card supports simultaneous AP and client mode, it will allow that.

Its requirements are extremely minimal: basically just Linux, a compatible wireless card, and a few common configuration packages (hostapd, iw, iproute2, iptables, dnsmasq). No NetworkManager needed.

I used it as my own home Internet gateway for many years, running on an ancient fanless Atom mini-PC.

Because it can quickly set up and tear down WiFi networks on the fly, it's also a valuable tool for setting up test networks when reverse-engineering IoT devices. I use it frequently for this purpose (see https://snowpatch.org/posts/i-can-completely-control-your-sm...).


Lots of "just use X" comments but the article is about showing the bare minimum/how easy the core part of routing actually is.

Also, if you have ever used docker or virtual machines with NAT routing (often the default), you've done exactly the same things.

If you have ever enabled the wifi hotspot on an android phone also, you've done pretty much what the article describes on your phone.

All of these use the same Linux kernel features under the hood. In fact there is a good chance this message traversed more than one Linux soft router to get to your screen.
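The shared core can be sketched in a couple of lines (eth0 as the upstream interface is my assumption; the commented commands need root):

```shell
# The two knobs behind every Linux soft router (Docker NAT, Android
# hotspots, the article's setup). Shown read-only here.
cat /proc/sys/net/ipv4/ip_forward   # 1 = kernel forwards packets between interfaces
# To enable forwarding (as root):
#   sysctl -w net.ipv4.ip_forward=1
# To NAT traffic leaving via the upstream link (as root):
#   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```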


Agents and I apparently have a whole lot in common.

Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.

This only gets worse when the LLM captures all that information better than certain human colleagues somehow, rewarding the additional effort.


This is actually a good fit for a Wikidata SPARQL query, which you can run at https://query.wikidata.org/:

  SELECT ?work ?workLabel ?author ?authorLabel ?publicationDate ?ageAtPublication
  WHERE {
    ?author wdt:P569 ?birth .
    ?author wdt:P570 ?death .
    ?author wdt:P800 ?work .
  
    ?work wdt:P50 ?author ;
          wdt:P31 wd:Q47461344 ;
          wdt:P577 ?publicationDate .
  
    FILTER(?publicationDate <= ?death)
  
    BIND(YEAR(?publicationDate) - YEAR(?birth) AS ?ageAtPublication)
    FILTER(?ageAtPublication > 60)
  
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  }
  ORDER BY DESC(?ageAtPublication)
  LIMIT 300

Amusingly, they deprecated it with a message of "Unpublished" instead of actually unpublishing it [1]. When you use npm unpublish, it removes the package version from the registry; when you use npm deprecate, it leaves it there and simply marks the package as deprecated with your message. I have to imagine the point was to make it harder for people to download the source map, so deprecating it with this message gives off a bit of a "Claude, unpublish the latest version of this package for me" vibe.

[1] - https://www.npmjs.com/package/@anthropic-ai/claude-code/v/2....

