Hacker News | PhilippGille's comments

> it is possible with some software to have everything massively cached, with the cloud doing that, with the origin server in my basement, only accessible from the allowed cache arrangement

Do you mean a setup like:

    client -> cloud(HAProxy+Varnish) -WireGuard-> basement(backend)
Or something else?

If you are interested in scale models of New York, there's a 1:1 scale model in Minecraft: https://youtu.be/ZouSJWXFBPk

Incredible effort… thanks for sharing this!

I’d love to learn more about the technical challenges. For example, how do they handle buildings that aren’t perfectly aligned to the cardinal axes?


Whoa. I admire the time and dedication that went into both models. However, I can't help but LOVE the Minecraft model, since it will live on. Now we just need to 3D print the Minecraft model :D

Wow, that is phenomenal. I don't play Minecraft, but my kids do, so I can watch them play and appreciate all the work that went into this. Fair play!

Is there a way to visit this online, without having to download and install software locally?


Thank you. It seems to be down, though; the browser dev tools indicate it's unable to open a websocket to wss://progressbackend.minefact.de/socket.io/?EIO=4&transport=websocket&sid=...

The article tries to sell it to people who can't run Docker locally (e.g. locked down permissions in enterprise environments, slow old laptop), but hasn't it already been possible to use remote Docker engines?

So the news is that they're offering to host those remotes now, right?


Nah. It's just that, 15 years later, they're finally trying to find a niche that would also bring them money. There are a lot of businesses that would rather offload the burden of compliance to a third party (yes, I did it too), which is why it's mentioned quite prominently there.

Good for them, but they should have done this ten years ago.


They said bind mounts would still work. I didn't think those worked with remote engines.

Which also seems to imply the client software will expose your laptop's filesystem to wherever Docker is hosting the server-side piece of Offload.


Hopefully only the bound folders.

Is this not just about extra credits? So what's included in the subscription doesn't change; it's just that extra credits are now token-based instead of message-based? (For Plus/Pro)

God every single title I read about AI on this site ends up being a straight up lie.

I think this might also impact how usage is calculated for subscription plans, not just overages (using tokens instead of messages for calculating usage). But the message from OpenAI seems vague.

I miss “BREAKING NEWS” as it is used at X /s

Yes.

> This format replaces average per-message estimates with a direct mapping between token usage and credits.

It's to replace the opaque, per-message calculation, not the subscription plan.


It does feel like it also impacts the usage meter for subscription plans, though?

The usage meter has always been completely opaque anyway. They could (and probably did) shrink the limit whenever they liked.

Ostensibly this makes usage meter rate changes more transparent?

It is a bit insidious that the price hike coincides with the end of the 2x promotion, which makes the usage change a bit more obscure.

It's not a price hike, it's actually making it easier to understand relative usage for different models/features.

I have no idea what I’m getting for $200/mo at this point. Maybe that’s on me, idk.

Fair point. We only have clear evidence they're being more transparent about credit pricing and value, but it's unclear whether that'll make people burn through usage faster or slower.

The fuzziness is intentional. It gives them wiggle room and obscures how much "value" you actually get from $200, a 5-hour block, or a week. That keeps the tension manageable between subscription pricing and pay-per-token API pricing, especially for larger businesses on enterprise plans who want transparent $-per-MTok rates.

If they were fully transparent, like "your $200 sub gets you up to $2,000 of equivalent API usage," it would be a constant fight. People would track pennies and scream any time 5-hour blocks got throttled during peak hours. Businesses would push harder for pay-per-token discounts seeing that juicy $200 sub value.


I have no idea what I'm getting for $20/mo, either. (But I do know that it's at least $180 less than what I could be spending, I suppose.)

Looks like it, yes. I can't believe I had to scroll this far to find this comment.

https://help.openai.com/en/articles/12642688-using-credits-f...


The OpenRouter usage stats indicate the opposite: https://openrouter.ai/rankings?view=month

OpenRouter usage is likely skewed towards LLMs that are more niche and/or self-hostable on capable hardware, which most consumers don't have on hand. I can imagine Anthropic and OpenAI LLMs often get called directly through their own APIs instead.

At least from my experience and friends of mine, we use OpenRouter for cases where we want to use smaller LLMs like Qwen, but when I've used ChatGPT and Claude, I use those APIs directly.


Same, and my little SaaS is pushing more than 0.1% of the TOTAL volume of tokens on OpenRouter, so the reality is they’re TINY.

0.1% of OpenRouter is around 400 billion tokens per month or around $400k per month at a cost of $1 per 1 million input tokens, not counting output.

I think it's pretty disingenuous to call your SaaS little when it's projected to spend at least 5 million USD a year just on tokens, and that's a low-end estimate.


Their homepage says 30T tokens monthly, so 0.1% would be 30 billion.

And I pay way less than $1 per 1 million input tokens, especially when caching is taken into account.

EDIT: they updated it in the last day or two, now it says 70T, so I’m a little below 0.1% now. But seriously, the point stands, 70T tokens a month just isn’t that much in the global scheme. The big labs are pushing quadrillions each.


I'll sell you tokens for just 1 cent each, as many as you want. Bargain!

I use ChatGPT and Claude on OpenRouter, because it's just easier than buying credits on each platform separately.

What happened around Jan this year ('26) that caused such a climb in usage?

Openclaw

This is their MBP 14" M5 Max review, with a "Battery life" section and their standard web browsing test: https://www.notebookcheck.net/M5-Max-with-inconsistent-perfo...

15h 10min


Thank you for finding that. That seems like a much more plausible comparison.

Edit: wait, that’s the M5 Max. What about the stock M5?


The article does list what Tailscale adds on top of WireGuard:

> WireGuard by itself is mostly the data plane. Tailscale adds the control plane on top: identity/SSO, peer discovery, NAT traversal coordination, ACL distribution, route distribution (including exit node default routes), MagicDNS, and fast device revocation.


I think you missed the point. There's nothing in the article explaining why any of this would differentiate Tailscale from plain old WireGuard. Simply listing the features and moving on is not that.

In the cryptocurrency world this has existed for many years already. For example, with the Lightning network on top of Bitcoin it has always been easy to let the server generate an invoice, which the client pays, and then the client sends another request including cryptographic proof of the payment. On layer 2 it was always cheap and fast.

For example I created this Go middleware at the time: https://github.com/philippgille/ln-paywall#how-it-works (currently defunct)

Similar projects implemented that into standalone API gateways.

All using status code 402 already.

Here Stripe's example is using USDC, so still crypto BTW.



You mean (229 points, 5 hours ago, 190 comments) https://news.ycombinator.com/item?id=47506251


On Kagi you can increase/decrease a domain's ranking for your personal search results, and they make the aggregated stats public, showing for example Pinterest as the most blocked site, which matches part of what you're looking for: https://kagi.com/stats?stat=insights


I hope whoever is running Pinterest sees that they are among the top 7 most blocked sites.


...and that whoever is running HN sees that they are the #5 raised, and #5 pinned site.


I'm hoping/dreaming it would be browser-standard, as a protocol. Kagi would be one of the reputation providers in that scheme.


Funny to see w3schools.com ranking above Twitter.

