The key point for me was not the rewrite in Go or even the use of AI, it was that they started with this architecture:
> The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC. That meant that for every event (and expression) we had to serialize, send over the network, evaluate, serialize the result, and finally send it back.
> This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.
For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year.
The fact that this only took $400 of Claude tokens to completely rewrite makes it even more baffling. I can make $400 of Claude tokens disappear quickly in a large codebase. If they rewrote the entire thing with $400 of Claude tokens, it couldn't have been that big: well within the range of something engineers could have migrated by hand in a reasonable time. Those same engineers will now have to review and understand all of the AI-generated code, and then improve it, which will take time too.
I don't know what to think. These blog articles are supposed to be a showcase of engineering expertise, but bragging about having AI vibecode a replacement for a critical part of your system that was questionably designed and costing as much as a fully-loaded FTE per year raises a lot of other questions.
I got through this entire article before I realized it was written by someone I worked with back in my agency days. Beth is an awesome designer with a great eye. Nice to see her on the front page here. Now, to the content: I often wonder how much we have lost with our endless quest for minimalism. We can't even make buttons look like buttons anymore. Affordances have become anemic at times. Designers who think and care deeply about functional color theory and usable design should be cherished.
This "make Windows better" push is far more political than technological. It's a fight with other divisions about using Windows as a marketing and sales channel for other products and services.
It has to be a decision from the very top. I hope they realize that Windows is in significant danger; its majority market share in desktop OSes is no longer guaranteed. It's not just 10% of revenue, it's the foundation for how enterprises ended up on Azure, which brings in big money.
I'm still a Windows power user. The MacBook is a wonderful piece of hardware and I'm typing this on one, but I'm not nearly as productive as on a multi-monitor PC with TotalCommander and Visual Studio, where I use all the shortcuts subconsciously.
> If your data is stored in a database that a company can freely read and access (i.e. not end-to-end encrypted), the company will eventually update their ToS so they can use your data for AI training — the incentives are too strong to resist
I had a friend who wanted to scan the cover of his album to start selling copies of it online. This would have been in like 1995 maybe. I went out and bought an HP ScanJet and wrote a command-line program to run the scanner and grab that image for him.
I started thinking about making a GUI companion to it. I kept thinking "I need to do this like xv does, I need to do that like xv does." I finally realized: what if I just added a scanning screen to xv? But because of the license, I couldn't just release it as open source.
I contacted John Bradley, thinking it was probably a long shot that he'd answer. But he did, and he accepted my idea: I'd sell xv with scanning for $50, and send him half. Real nice guy, though the majority of our interaction was me just sending him periodic checks.
I had a domain, tummy.com, because it was a fun name for a fat guy, and when I registered the domain my provider (back in the early '90s) wouldn't let me register a .org unless I was a non-profit org, so I went with .com. Because of this deal with John Bradley, I registered tummy.com as an LLC to start selling this software. Over around a decade, I sent John well into the five digits of licensing fees. Mostly it was one-offs, but there were a few organizations that bought handfuls of copies for their sites.
I had done that software in the evenings while I did a contracting gig at the Telco (USWest). When that contract was up, I was tired of working for a giant company, so I wanted to start doing Linux sys admin consulting. So I started doing that under the tummy.com brand. Did that for around 20 years until around a dozen years ago.
The craziest thing here is that online gambling has been legal in the UK and Ireland for many years, and it's been such an obvious negative for those countries — and had been optimized brutally like any other tech product. When I moved over to the US a decade ago, I remember thinking 'well at least they're smart enough to have banned online gambling'.
I am very pro personal liberties, but this stuff is weaponized to prey on a subset of humanity. I'm in senior leadership, and have made it clear that anyone who has worked on these products should not be hired.
The attention this topic receives is disproportionate considering how rare we are, especially at anything close to the Olympic level.
Most of us do sports for fun/friends and don’t care how they rank us, but would be sad to be banned.
There might be more “biological advantage” nuance with people just starting their transition, but by this many years in it feels silly. I registered as a man for the last event in case anyone might get upset, the staff changed it to say “woman” when I got there anyways, and then I lost to a woman twice my age.
"Three clicks convert a data point on the map into a formal detection and move it into a targeting pipeline. These targets then move through columns representing different decision-making processes and rules of engagement. The system recommends how to strike each target – which aircraft, drone or missile to use, which weapon to pair with it – what the military calls a “course of action”. The officer selects from the ranked options, and the system, depending on who is using it, either sends the target package to an officer for approval or moves it to execution."
----------------
Maven is a tool for use in the middle of a war. When both sides are firing, minutes saved can mean lives saved for your side. Those lives, at least partly, balance the risks of hitting a bad target.
This was not a strike made in the middle of a war. If Maven was used in the strike that took out a school, it was being used as part of a sneak attack. Nobody was shooting back while this was being planned. Minutes saved were not lives saved. There should have been a priority placed on getting the targets right. Humans should have been double- and triple-checking every target by other means. This clearly didn't happen. The school was obviously a school; it even had its own website. Humans would have spotted this if they had done more than make their three clicks and move on to the next target.
Whoever made the choice to use Maven to plan a sneak attack without careful checking made an unforced error when they had all the time in the world to prevent it. Whether it was overconfidence in their tools or a complete disregard for the lives of civilians that caused this lapse, they are directly responsible for the deaths of those little girls. I sincerely hope there are (although I doubt there will be) consequences for this person beyond taking that guilt to their grave.
Karen woke up this morning in her run-down rented flat. She briefly looks at the collections letter that showed up yesterday due to an unaffordable repair she had to pay for on her credit card. Another letter from her ex-partner's lawyer. As she rushes out the door (she spilled coffee on her one nice sweater, her favorite) her mum flashes through her mind... "What about mum?". She arrives at the office. It is an oppressive, sterile government office. She tries to ignore the overwhelming sense of helplessness and sits down to begin working. Her first call is a person screaming at her about their benefits. She has no power, absolutely no power, to help them due to the rules imposed on her by her superiors, but has to take the abuse regardless and explain the process she has no control over to them. The next call is a case she actually is familiar with: a person claiming to be disabled to collect dole. They aren't, but she has been told that this is a special case and she must work with them. She complies. She sits back in her chair and the phone rings again. An upset person on the other end...
Building your AI agent "toolkit" is becoming the equivalent of the perfect "productivity" setup where you spend your time reading blog posts, watching YouTube videos telling you how to be productive and creating habits and rituals...only to be overtaken by a person with a simple paper list of tasks that they work through.
Plain Claude still works best in my experience: ask it to write a plan, review the plan, then tell it to execute.
"DM: We exist as a band because we sell t-shirts. Our job is that we sell t-shirts and the way we promote those t-shirts is by playing music. If we were talking strictly economically, that’s just a fact.
LL: Weirdly, it’s also our most direct engagement with the money we make and with our fans. We’re often selling our own shirts at the merch table; that’s actually how we talk to a lot of fans and get feedback on our sets. We get cash in our hands; that’s one of the most direct economic exchanges in our lives as musicians. So, it is funny because it seems cynical, but it’s actually one of the more grounded exchanges in what we do."
As it turns out, I had a nice little chat with their drummer when I bought one of their tshirts.
As someone with a sizeable background in Linux system engineering, I prefer Windows to macOS.
It's IMHO a better desktop now, with edge-snap tile layouts and so on. Excellent device compatibility. And I get my Linux environment needs satisfied via WSL2 these days.
But damn if they don't get in their own way. I have my own Pro licenses, and even with Pro, turning off ads and features is textbook whack-a-mole:
* Frequent "Let's finish setting up your PC" after updates
* Killing OneDrive is like Night of the Living Dead
* Edge popping up "ads" asking if you want to pin apps when it closes (a lot of Windows apps wrap Edge, like streaming apps, and show this on close too!)
* Scary Power Automate crap getting injected on updates (haven't seen this in a while)
* Internet search results in the "Home" search
* Random popups and product recommendations
* Registry-disabled "features" randomly resurrecting after a Windows update
Holy. Hell.
Edit: I recall now: Windows was installing a Power Automate extension into Chrome during Windows Update, unprompted, last year. Caused a minor panic.
100%, and I'm a software developer and have been for ~30 years. Good QA people know how to find regressions and bugs _that you didn't think about_, which is the whole reason QA should exist, and why it shouldn't sit under "engineering". One of the QA people I work with currently is one of my favorite people. They don't always make me happy (in the moment) with their bugs or with how they decide to break the software, but in the end it makes for a better, more resilient product.
> And now here come the prediction markets, such as Polymarket and Kalshi, whose combined 2025 revenue came in around $50 billion.
Bizarre to call trading volume "revenue". Last year, trading fees for Kalshi amounted to about $263 million[0], whereas Polymarket largely did not have fees in 2025 and is turning them on in a few days[1].
I bet there’s gonna be a banger of a Mac Studio announced in June.
Apple really stumbled into making the perfect hardware for home inference machines. Does any hardware company come close to Apple in terms of unified memory and single machines for high throughput inference workloads? Or even any DIY build?
When it comes to the previous “pro workloads,” like video rendering or software compilation, you’ve always been able to build a PC that outperforms any Apple machine at the same price point. But inference is unique because its performance scales with high memory throughput, and you can’t assemble that by wiring together off the shelf parts in a consumer form factor.
It’s simply not possible to DIY a homelab inference server better than the M3+ for inference workloads, at anywhere close to its price point.
They are perfectly positioned to capitalize on the next few years of model architecture developments. No wonder they haven’t bothered working on their own foundation models… they can let the rest of the industry do their work for them, and by the time their Gemini licensing deal expires, they’ll have their pick of the best models to embed with their hardware.
I don't buy the central thesis of the article. We won't be in a supply crunch forever.
However, I do believe that we're at an inflection point where DC hardware is diverging rapidly from consumer compute.
Most consumers are using laptops and laptops are not keeping pace with where the frontier is in a singular compute node. Laptops are increasingly just clients for someone else's compute that you rent, or buy a time slice with your eyeballs, much like smartphones pretty much always have been.
I personally dropped $20k on a high end desktop - 768G of RAM, 96 cores, 96 GB Blackwell GPU - last October, before RAM prices spiked, based on the logic that hardware had moved on but local compute was basically stagnant, and if I wanted to own my computing hardware, I'd better buy something now that will last a while.
This way, my laptop is just a disposable client for my real workstation, a Tailscale connection away, and I'm free to do whatever I like with it.
I could sell the RAM alone now for the price I paid for it.
Between the rounded corners that don't reach the edges of the viewport, and the behavior when opening a new app for the first time, it feels like Mac's UI is optimized around the assumption most users won't expand windows to fill the whole screen, but rather leave them half-sized somewhere in the middle.
Does anyone actually do this? Especially for heavy-duty applications like my web browser and IDE, this has always felt like a bizarre assumption to me.
I agree wholeheartedly! This is exactly what I was thinking the entire time. Like, does this guy think this single woman is responsible for the Kafkaesque trap they're both in? Will the 0.5% uptick in toner cost for the year cause the administration to rethink their requirements? He's just taken the immense weight and pain he holds for this process, undeservedly, and placed it upon another undeserving person, then laughed at her anguish.
Yes, life is hard, but surely we can bear our troubles in a way that doesn't make them harder for others to bear. Or at least aim your troubles at someone who has any power at all to change things! Find a better way to fight the system, one that isn't just stabbing the other people trapped in the box with you.
To clarify, a good title would be "Loading Doom entirely from DNS records"
It neither plays Doom over DNS, nor is the first paragraph of the README correct, because DNS is only abused for storage, not for computing/processing/executing instructions:
> At some point, a reasonable person asked "DNS resolves names to IP addresses, what else can it do?" The answer, apparently, is run DOOM.
I'm seeing this more and more: people build this artificial wall you supposedly need to climb before trying agentic coding. That's not the right way to start at all. You should start with a fresh .claude, an empty AGENTS.md, zero skills, no MCP servers, and learn to operate the thing first.
We all know they're addictive, they're designed to be addictive, and they're very, very harmful, to both adults and children. The individuals who are profiting from the harm are clearly identifiable. And that harm directly targets children. That this is allowed to continue is a symptom of a sick society.
It reads like an indictment of the government employee personally, rather than the rules and constraints that employee is forced to use.
Probably fair to comment on the interaction, whether the person was rude, and so on. But blaming them for not accepting email is kind of silly. They are not empowered to do that kind of thing.
I'm reminded of an article a while back talking about how the change from sodium streetlights to LED streetlights had a whole lot of unforeseen effects on animals, people's sleep patterns, driver awareness and visibility, etc. due to color changes. There was a comment on the article from an old civil engineer saying "no, these were not unforeseen, we actually did the research back in the day to figure out what color the street lights should be, that's why they were the color they were."
Whenever you have this kind of impression of some development, here are my 2 cents: just think "I'm not the target audience". And that's fine.
The difference between 2ms and 0.2ms might sound unneeded, or even silly, to you. But somebody, somewhere, is doing stream processing of TB-sized JSON objects, and they will care. This news is for them.
I disagree with the take on Wikipedia/Wikimedia; there was a lot of trash talk because they were totally obnoxious with their fundraising.
I donated once to Wikipedia and then I was getting Jimmy Wales in my mailbox basically every day.
That actually drove me away from ever wanting to donate to them. Then there was a lot of talk about whether they really are so much in need of money, but that's a different topic.
In contrast, I donated to LibreOffice and it was perfectly quiet after a one-time donation, and I am happy to donate from time to time as I use LibreOffice for my personal stuff.
Callum here, I was the developer who first discovered and reported the litellm vulnerability on Tuesday.
I’m sharing the transcript of what it was like figuring out what was going on in real time, unedited with only minor redactions.
I didn't need to recount my thought process after the fact: it's the very same notes I wrote down to help Claude figure out what was happening.
I’m an ML engineer by trade, so having Claude walk me through exactly who to contact and a step by step guide of time-critical actions felt like a game-changer for non-security researchers.
I'm curious whether the security community thinks more non-specialists finding and reporting vulnerabilities like this is a net positive or a headache?
This might be obvious, but all of those things have a single common denominator: Microsoft, over you, getting to decide what your computer is doing. This is the biggest generalized danger in computing today: That OS (and device) manufacturers have gotten it in their heads that it's OK for them to have a strong say in what your computer runs. User doesn't want X, Y, or Z running on his computer? TOUGH. We are going to run it and make it really hard or impossible for user to turn it off. As a user, I no longer feel like I'm driving the car--I'm just a passenger. "Where do you want to go today?" has turned into "You're going here today, whether you want to or not!"
Generating big chunks of code is rarely what I want from an agent. They really shine for stuff like combing through logs or scanning dozens of source files to explain a test failure. Which benchmark covers that? I want the debugging benchmark that tests mastery of build systems, CLIs, etc.
The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.
Put simply, this means each pixel can hold its state longer between refreshes, so the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while making heat fighting reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.