
I'm glad this happened, although I would have preferred the result to come from a new law, e.g. the Open App Markets Act, rather than having to rely on what is or is not legally considered a market under the Sherman Act.


I vaguely remember reading comments here that said you can get rate limited on R2 without warning if egress is too high. Was that true and is that still true? What is the limit if so?

I tried looking for that thread again and I only found the exact opposite comment from the Cloudflare founder:

>Not abuse. Thanks for being a customer. Bandwidth at scale is effectively free.[0]

I distinctly remember such a thread though.

Edit: I did find these but neither are what I remember:

https://news.ycombinator.com/item?id=42263554

https://news.ycombinator.com/item?id=33337183

[0] https://news.ycombinator.com/item?id=38124676


The title seems to be editorialized? The title I see is "US appeals court rejects copyrights for AI-generated art lacking 'human' creator"


Maybe it was deliberately trimmed; HN titles have a length limit.


It's temporal anti-aliasing.

>Temporal anti-aliasing is another form of super-sampling, but instead of downscaling from a much larger image, data from prior frames is reprojected into the current one.

https://www.eurogamer.net/digitalfoundry-2024-temporal-anti-...
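A toy 1D sketch of that idea (everything here is made up for illustration): each frame takes one jittered sample inside a single pixel, and an exponential moving average blends the new sample into the accumulated history, which is how TAA amortizes supersampling across frames.

```python
# Toy temporal accumulation for a single 1D "pixel" covering [0, 1).
# Each frame takes one jittered sample and blends it into the history,
# the same way TAA spreads supersampling work across frames.

def scene(x):
    # A hard edge at 0.5: half the pixel is bright, half is dark.
    # A single center sample (x = 0.5) would return 0.0 and miss it.
    return 1.0 if x < 0.5 else 0.0

def accumulate(frames, alpha=0.1):
    history = scene(0.5)  # start from a plain center sample
    for frame in range(frames):
        jitter = (frame * 0.6180339887) % 1.0  # golden-ratio jitter sequence
        sample = scene(jitter)
        history = (1 - alpha) * history + alpha * sample  # blend with history
    return history

print(accumulate(256))  # hovers near 0.5, the true coverage of the edge
```

Real TAA also reprojects the history through the camera's motion vectors and clamps it against the current frame's neighborhood to reject stale data; this sketch only shows the accumulation part.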


>reviewers apparently approved that.

What reviewers?


>And AI--especially LLMs--are notoriously bad at the "correct" part of translation.

Can't you just compare the compiled binaries to see if they are the same? Or is the issue that you don't have the full toolchain, so the two compilers produce different outputs? Thinking about it, though, you could probably figure out which compiler was used from those same differences.


It can take quite a bit of engineering just to get the same source to produce the same results in many C or C++ toolchains. "Reproducible builds" require work; all sorts of trivial things like the length of pathnames can perturb the results. Not to mention having to have the same optimizer flags.

"Do these two binaries always behave the same for the same inputs" is practically an unsolvable problem in general. You can get fairly close with something like AFL (American fuzzy lop, a fuzzer and also a type of rabbit).

(Someone should really make an LLM bot that scans HN for instances of "just" and explain why you can't just do that, it's such a red flag word)
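To make the path-perturbation point concrete, here's a toy illustration (hypothetical byte strings, not a real toolchain): two "binaries" that differ only in an embedded build directory hash differently until that metadata is normalized away.

```python
import hashlib
import re

# Two hypothetical build outputs: identical code bytes, but each embeds
# the absolute path of its build directory (a common source of
# non-reproducibility, alongside timestamps and optimizer flags).
binary_a = b"\x7fELF...code...\x00/home/alice/build/proj\x00...more code..."
binary_b = b"\x7fELF...code...\x00/tmp/ci-runner-42/proj\x00...more code..."

def digest(blob):
    return hashlib.sha256(blob).hexdigest()

def normalized(blob):
    # Strip NUL-delimited strings that look like filesystem paths before
    # hashing. Real tools (diffoscope, strip-nondeterminism) are far more
    # thorough; this only handles the one difference we planted.
    return re.sub(rb"\x00/[^\x00]*\x00", b"\x00<path>\x00", blob)

print(digest(binary_a) == digest(binary_b))                           # False
print(digest(normalized(binary_a)) == digest(normalized(binary_b)))   # True
```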


The expected outcome of using a LLM to decompile is a binary that is so wildly different from the original that they cannot even be compared.

If you only make mistakes very rarely and in places that don't cause cascading analysis mistakes, you can recover. But if you keep making mistakes all over the place and vastly misjudge the structure of the program over and over, the entire output is garbage.


That makes sense. So it can work for small functions but not an entire codebase, which is the goal. Does that sound correct? If so, is it useful for small functions (say I identify some sections of code I think are important because they modify some memory location), or is this not useful?


There are lots of parts of analysis that really matter for readability but aren't used as inputs to other analysis phases and thus mistakes are okay.

Things like function and variable names. Letting an LLM pick them would be perfectly fine, as long as you make sure the names are valid and not duplicates before outputting the final code.

Or if there are several ways to display some really weird control flow structures, letting an LLM pick which to do would be fine.

Same for deciding what code goes in which files and what the filenames should be.

Letting the LLM comment the code as it comes out would work too, as if the comments are misleading you can just ignore or remove them.
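The name-validation step is simple to sketch (a hypothetical helper, not from any real decompiler; it uses Python's keyword list as a stand-in for whatever the output language reserves):

```python
import keyword
import re

def sanitize(proposed, taken):
    """Turn an LLM-proposed name into a valid, unique identifier.

    `taken` is the set of names already emitted.
    """
    # Keep only identifier characters, and don't start with a digit.
    name = re.sub(r"\W", "_", proposed.strip()) or "unnamed"
    if name[0].isdigit():
        name = "_" + name
    if keyword.iskeyword(name):
        name += "_"
    # Deduplicate with a numeric suffix.
    candidate, n = name, 1
    while candidate in taken:
        n += 1
        candidate = f"{name}_{n}"
    taken.add(candidate)
    return candidate

taken = set()
print(sanitize("parse header", taken))   # parse_header
print(sanitize("parse header", taken))   # parse_header_2
print(sanitize("2nd_pass", taken))       # _2nd_pass
```

However bad the LLM's suggestions are, the output stays compilable; the worst case is an unhelpful name, which is exactly the kind of mistake this comment says is recoverable.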


No, but for verifying equivalence you could use some symbolic approach that is provably correct. The LLM could help there by making its output verifiable.


Program equivalence is undecidable, in general, but also in practice (in my experience) most interesting cases quickly escalate to require an unreasonable amount of compute. Personally, I think it is easier to produce correct-by-construction decompilation by applying sequences of known-correct transformations, rather than trying to reconstruct correctness a posteriori. So perhaps the LLM could produce such sequence of transforms rather than outputting the final decompiled program only.
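As a dependency-free stand-in for the symbolic approach, bounded exhaustive checking shows the shape of the problem (a real pipeline would hand these to an SMT solver such as Z3 rather than enumerating inputs; the function bodies are made up):

```python
# Equivalence over a bounded domain is decidable only because the domain
# is finite, and the cost grows exponentially with input width, which is
# one reason interesting cases blow up in practice.

def original(x):
    return (x << 1) & 0xFF          # as recovered from the binary

def decompiled(x):
    return (x * 2) % 256            # as emitted by the (hypothetical) LLM

def buggy(x):
    return (x * 2) % 255            # an off-by-one translation mistake

def equivalent(f, g, bits=8):
    return all(f(x) == g(x) for x in range(1 << bits))

print(equivalent(original, decompiled))  # True
print(equivalent(original, buggy))       # False
```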


Yes, something like this; the intermediate verified steps wouldn't have to be shown to the user.


I suggest putting the link to the website in the about section on GitHub. Currently it reads "No description, website or topics provided." It saves users from scrolling a bit.


Interesting. Could you give a brief description of how you got that number? E.g., what factors were considered?


Those numbers match what comes up with a quick search:

https://www.extension.iastate.edu/grain/topics/EstimatesofTo...

That study uses 1,043.4 mpg for the fuel economy of a 100,000 dwt ship.

Videos of cargo ship engines are cool. Each cylinder is wide enough for a person to lie down inside it.

https://youtu.be/G0eMyA388bE


Google Cloud Run and Azure Container Apps both let you run an arbitrary Docker image without having to deal with custom setups. Both scale automatically, so they are serverless. AWS has App Runner, but it doesn't scale to zero.[0]

[0] https://github.com/aws/apprunner-roadmap/issues/9 (amusingly the issue OP posts on HN)


Lambda does as well. You can even use their runtime interface client to run your function within the same wrapper that Lambda uses in production.


Can I upload my web server as a Docker image to Lambda and have it run forever there? I thought Lambdas were supposed to be more short-lived (like a couple of hours); is that not the case? It's been a while since I actually looked at Lambda because Cloud Run is so clean.


If you're married to running your own web server, you could use schedules to run your Lambda every 15 minutes, but you might as well use Fargate at that point (easier and probably cheaper).

If you're not married to running your own web server, you can use API Gateway with your Lambda functions, which is the traditional approach. Your frontend and its assets can be served from an S3 bucket, with Lambdas powering your backend. Cold start times will be a concern with this approach, but there are "warm-up" strategies you can employ to mitigate them.
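One such warm-up strategy is a scheduled EventBridge rule that invokes the function every few minutes so an execution environment stays warm (all names, regions, and ARNs below are placeholders):

```shell
# Hypothetical names/ARNs; adjust to your account, region, and function.
aws events put-rule \
  --name keep-warm \
  --schedule-expression "rate(5 minutes)"

aws lambda add-permission \
  --function-name my-backend-fn \
  --statement-id keep-warm \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/keep-warm

aws events put-targets \
  --rule keep-warm \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-backend-fn"
```

AWS's managed alternative is provisioned concurrency, which keeps environments initialized for you at extra cost.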


There is also Knative, which Cloud Run is based on.


>Something I’m still having trouble believing is that complex workflows are going to move to e.g. AWS Lambda rather than stateless containers orchestrated by e.g. Amazon EKS. I think 0-1 it makes sense, but operating/scaling efficiently seems hard. […]

This isn't really saying anything about serverless, though. The issue here is not with serverless but that Lambda wants you to break up your server into multiple smaller functions. Google Cloud Run[0] lets you simply upload a Dockerfile, and it will run it for you and handle scaling (including scaling to zero).

[0] https://cloud.google.com/run
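As a sketch of that flow (service name and region are placeholders, and it assumes an `app.py` exposing a Flask `app`): any container that listens on the port Cloud Run passes in `$PORT` can be deployed, and scale-to-zero is the default.

```shell
# Minimal placeholder Dockerfile: Cloud Run injects $PORT at runtime.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
RUN pip install flask gunicorn
CMD exec gunicorn --bind :$PORT app:app
EOF

# Build and deploy straight from source; Cloud Build does the image build.
gcloud run deploy my-service \
  --source . \
  --region us-central1 \
  --allow-unauthenticated
```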

