Hacker News | new | past | comments | ask | show | jobs | submit | kccqzy's comments

The ads are garbage because Google didn't want the ads to be hyper-optimized and hyper-targeted to you: that gives people an uncanny-valley feeling. Meta takes a different approach, and people often accuse Instagram of snooping on their conversations, even though Instagram is not doing that and is merely good at optimizing the ads. And given that both companies are successful at ads, I'd say both approaches are commercially successful.

Can't they at least target the ads to the specific content they're played on?

That’s usually done not on the network side but through the device itself. Think MDM and endpoint management.

A good solution tackles it at both levels. At work we have network-level firewalls with separate policies for internal and guest networks, and our managed PCs sync a filter policy as well (though primarily for when those devices are not on our network). The network level is more efficient, easier to manage and troubleshoot, and works on appliances, rogue hardware, and other things that happen not to have client management.

Well, if you have MDM you should be able to just disable ECH.

This is indeed done at both levels, e.g. via browser policies.

It’s still terrible. There was a brief period immediately after Heartbleed when it was rapidly improving, but the entire OpenSSL 3 effort was a huge disappointment to anyone who cared about performance, complexity, and developer experience (ergonomics). Core operations in OpenSSL 3 are still much, much slower than in OpenSSL 1.1.1.

The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks. And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...

Here are some juicy quotes:

> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.

> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.

> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.


Wow, also this:

> The OpenSSL project does not sufficiently prioritize testing. [...] the project was [...] reliant on the community to report regressions experienced during the extended alpha and beta period [...], because their own tests were insufficient to catch unintended real-world breakages. Despite the known gaps in OpenSSL’s test coverage, it’s still common for bug fixes to land without an accompanying regression test.

I don't know anything about these libraries, but this makes their process sound pretty bad.


This quote about testing is way worse:

> OpenSSL’s CI is exceptionally flaky, and the OpenSSL project has grown to tolerate this flakiness, which masks serious bugs. OpenSSL 3.0.4 contained a critical buffer overflow in the RSA implementation on AVX-512-capable CPUs. This bug was actually caught by CI — but because the crash only occurred when the CI runner happened to have an AVX-512 CPU (not all did), the failures were apparently dismissed as flakiness.


OpenSSL is (famously) an extremely terrible codebase.

It's likely that over the long-term the tech industry will replace it with something else, but for now there's too much infrastructure relying on it.


  > In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. 
Ah yes, the ole' "fn(args: Map<String, Any>)" approach. Highly auditable, and Very Safe.

I think one of the main motivators was supporting the new module framework that replaced engines. The FIPS module specifically is OpenSSL's gravy train, and at the time the FIPS certification and compliance mandate effectively required the ability to maintain ABI compatibility of a compiled FIPS module across multiple major OpenSSL releases, so end users could easily upgrade OpenSSL for bug fixes and otherwise stay current. But OpenSSL also didn't want that ability to inhibit evolution of its internal and external APIs and ABIs.

Though, while the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing. You can typically maintain compliance when using modules built from updated sources of a previously certified module, and which are in the pipeline for re-certification. So the ABI dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape. Today, like with Go, you can lean on process, i.e. constant cycling of the implementation through the certification pipeline, more than technical solutions. The real unforced error was committing to OSSL_PARAMs for the public application APIs, letting the backend design choices (flexibility, etc.) bleed through to the frontend. The temptation is understandable, but the ergonomics are horrible. I think the performance problems are less a consequence of OSSL_PARAMs per se than of the architecture of state management between the library and module contexts.


This is a hilarious, and also terrible, reason.

Why can't we let the FIPS people play in their own weird corner without compromising the whole internet's security for their sake? OpenSSL is too important to get distracted by a weird US-specific security standard. I'm not convinced FIPS is a path to actual computer security. Ah well, it's the way the world goes, I suppose.


Fair, but from the user side it still hurts. Setting up an Ed25519 signing context used to be maybe ten lines. Now you're constructing OSSL_PARAM arrays, looking up providers by string name, and hoping you got the key type right because nothing checks at compile time.

Yeah. Some of the more complex EVP interfaces from before and around the time of the forks had design flaws, and with PQC that problem is only going to grow. Capturing the semantics of complex modes is difficult, and maybe that figured into motivations. But OSSL_PARAMs on the frontend feels more like a punt than a solution, and to maintain API compatibility you still end up with all the same cruft in both the library and application, it's just more opaque and confusing figuring out which textual parameter names to use and not use, when to refactor, etc. You can't tag a string parameter key with __attribute__((deprecated)). With the module interface decoupled, and faster release cadence, exploring and iterating more strongly typed and structured EVP interfaces should be easier, I would think. That's what the forks seem to do. There are incompatibilities across BoringSSL, libressl, etc, but also cross pollination and communication, and over time interfaces are refined and unified.

The sensible way would be dropping the FIPS security theatre entirely and letting it rot in the stupid corner companies dug themselves into, but of course it's OpenSSL's main income source...

I really wish the Linux Foundation or some other big OSS organization funded a complete replacement for it, plus a shim that translates OpenSSL 1.1-style ABI calls into the new library.


There are few other options. `Ring` is not for production use. WolfSSL lags behind a bit in features. BoringSSL and AWS-LC are the best we have.

BoringSSL has an unstable API, and Google specifically recommends against using it[1].

AWS-LC is OK, but afaict there aren't really any pre-built binaries available, so you need to compile it yourself, and it's a little difficult to use if you aren't working in C/C++ or Rust. (The same is largely true of BoringSSL.)

[1]: https://github.com/google/boringssl?tab=readme-ov-file#borin...


This is incredible, and damning. What do the OpenSSL maintainers say in response to these criticisms?

I just read the Reddit post by their developer and my takeaway is that they have a very good understanding of what “unlimited” really means. It’s not a shenanigan. It’s just calculated risk. It’s clear to me that they intend to offer truly unlimited backups while hoping that what the average user backs up stays within a limit they can easily predict and plan for. It’s a statistical game that they are prepared to play.

> It’s a statistical game that they are prepared to play.

I understand this, and many others do too; the only difference seems to be that we're not willing to play those games. Others are, and that's OK. I'm just giving my point of view, which I know is shared by many others who are a bit stricter about where we host our backups. Instead of "statistical games" we prefer "upfront limitations", as one example.


The problem is you have to play with them - and sure, maybe they're willing to be the Costco to unlimited backup's $1.50 hot dog - but for how long? Will their dedication to unlimited at a particular price point mean you have to take Pepsi for a while instead of Coke, or that your Polish sausage dog disappears? Wait, where did the analogy go? I'm hungry.

It's a bit safer when you know their playbook - if there were unlimited (as it is now), unlimited plus (where they back up "cloud storage cached files"), and unlimited pro max premier (where they back up entire cloud storages), you'd at least know where you stand, and "holy shit, the important file I thought was backed up isn't and now it's gone forever" would become "I have to pay $10 more a month or take on this risk".


> You can build complex grades in a node-based workflow that goes far beyond the layer-based approach of conventional photo applications

As someone who has only used layer-based approaches, can someone elucidate why node-based workflows are more powerful? I still remember the first time I discovered layers in professional photo editing applications and I was blown away by how powerful they were.


In a nutshell, nodes enable arbitrary programming. This is one of the big success stories for visual programming. Nothing would stop you from doing all that in a text programming language but there's definitely an appeal to the graphical layout when you have modules getting input from half-a-dozen different sources and then outputting to just as many.

Graph edits vs. linear editing: with layers you just get layers that go on top of each other. You can't separately take the input image, apply two separate changes to it, and then mix those changes back into a single image. And no amount of masking will help: unless your masks never overlap, layers literally can't do what a graph can do.

Practical example: I have a bird that's being chased by another bird, and they overlap in the shot. There's weird lighting on the bird that's further away, so I need to grade them differently. But they overlap so now I have a challenge. I could try to do this using layers and masks: mask both birds in a way that the masks don't overlap, while perfectly tweaking the mask feathering so that there's minimal bleed on their overlap, then tie each mask to an adjustment layer.

But if I have graph based adjustments available, I first split my input into separate nodes for the background and each bird, then for each of those, I can send them through a node that masks them appropriately without worrying about mask overlap. I can then chain adjustment nodes to grade all three and I can save those grades separately, too so I can use them on other shots from the same series, then I can send each chain into a muxer that turns the three elements back into a single composition.

I could do that with layers, where I clone the full image several times, create my adjustments in groups, then render each group to a new layer, hide everything else, and mix those layers. But now what do I do if I want to tweak the grading? Delete my layer, unhide the group, tweak the adjustments, rerender the group, mix the new "final" layer in, and holy crap, how many things did I just need to do that weren't "making my adjustments"? Whereas with a node graph you just make your adjustment. Done, your change simply cascades through the graph.

There's a lot that you can do with layers, but layers are just a linear graph: you can do more if you can branch and merge your graph.


> very few user would really care about this difference

Oh the user absolutely does if that user creates lots of branches and the branches are stacked on top of each other.

I get your feeling though; sometimes in my own private repositories I don’t bother creating branches at all. Then in this case jj doesn’t really make much of a difference.


The first time I got into Emacs and vim I also spent way too much time on the editor customization spiral. Then in 2015 I just picked and settled on Spacemacs while strictly limiting how much time I spend on customizing my editor. I’ve had three jobs since then and I brought basically the same editor config to all three jobs.

> At this point, Microsoft is walking a tightrope. It cannot appease everyone since it also has its shareholders and investors to think about, but then there's also a rather large Windows 11 user base which really is fed up of AI experiences being shoved down its throats.

Are shareholders and investors stupid enough to think that AI hated by users is still desirable?


> Look: there are better canyons. There are better canyons just as accessible as the Grand Canyon, just as nice to look at, and much more interesting to actually exist in. Go to Bryce Canyon. Go to Zion Canyon (in the off season). Go to the Black Canyon of the Gunnison. Go to Canyonlands!

I totally agree. Canyonlands is in my opinion the single most amazing national park. Parts of it are hard to get to, but even locations readily reachable by car have amazing views that change. And there are basically no crowds.

Bryce Canyon has good hikes but the fact that NPS runs a bus in the park tells you about the crowding situation. It’s still good if you don’t mind crowds.

Zion is also not bad but the crowds are worse than Bryce Canyon. The mile or so of the Virgin River is like a manmade water park.


I have friends that do that and it’s intentional. Had a good time at a store or restaurant? Take a selfie and upload to Google Maps. Also take a selfie video and upload to Instagram stories. It’s a way of life that defaults to more sharing.
