Hacker News | onion2k's comments

I use Codex at home and Opus at work. They're both brilliant.

I'm an Engineering Manager, and I think I have a similar role just applied to people processes rather than code. One nuance though - a lot of the time I suspect it's deliberate complexity designed to obfuscate how little people actually do.

That's projection.

Well, maybe. There's some projection, because I certainly don't make simple processes myself a lot of the time, though I do try to optimize them afterwards. I have a few decades of seeing people implement processes that I've had to use, and then had to simplify as I moved into more senior roles. I've had people push back quite forcefully when I've pointed out that they do things like writing reports no one reads or gathering data that teams ignore. People often fight for added complexity because their perception is that it's important, and that means they must be important because they're the one in control of it.

There is an element of projection because there is in most things people talk about; I'm speaking about this through my filters and biases after all. But it's grounded in a fair chunk of experience.


Maybe you are saying the same thing, but couldn't that be explained better by those people being afraid of being made obsolete? Or at least, afraid of having to retrain?

This isn't new. I've seen it for decades, including in situations where no one is at risk. I don't think it's often a fear thing.

I built my own photo viewer for OSX entirely because Finder doesn't have an 'actual size' option. OSX is pretty terrible for image management.

Wow. That's cool. I would love to try it out.

I gave Codex 5.4 Playwright MCP access to the site and a prompt of "Use Playwright CLI Skill to open https://playstarfling.com/ and load the game. Work out how to play it, and devise a strategy to win." After about half a dozen attempts it had figured the game out. Then I prompted it to "Score as much as you can." It wrote itself an auto-play script that just keeps going.

I stopped it running at 10866. That's currently the high score. I appreciate that this is pointless and proves nothing, but I've been experimenting with automating testing games (I work at a gaming company at the moment) so it felt like an opportunity to try an experiment.


Do tell! How did it play the game? Did you watch? Did it just take forever with every shot, or how did that play out with the LLM-induced latency?

It didn't exactly play it using the LLM; it used Playwright to execute code in the browser to work out how the game works, and then wrote a script to inject into the page to play it. It played basically perfectly, getting skip * 2 on every shot even after a hundred planets. I didn't expect it to do quite so well with only two prompts.

Are you sure the script is actually testing the gameplay, given that it can see the entire source code of the game?

It started out using browser.click events and then switched to using browser.evaluate script injection. That's entirely valid for my use case.
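The two styles described (driving the UI with click events versus injecting code into the page) can be sketched with Playwright's Python API. Everything game-specific here is a hypothetical placeholder: the `canvas` selector and the `window.game` object with `fire()` and `score` are assumptions for illustration, not the real game's internals.

```python
def build_autoplay_script(shots: int) -> str:
    """Return a JS snippet suitable for page.evaluate().

    Assumes a hypothetical window.game object exposing fire() and score;
    the real game will differ."""
    return (
        "() => {"
        f" for (let i = 0; i < {shots}; i++) window.game.fire();"
        " return window.game.score; }"
    )


def main() -> None:
    # Requires `pip install playwright` and `playwright install chromium`.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://playstarfling.com/")

        # Style 1: drive the UI through real input events.
        page.click("canvas")  # hypothetical selector

        # Style 2: bypass the UI and run code in the page's JS context.
        score = page.evaluate(build_autoplay_script(100))
        print("score:", score)
        browser.close()


# Call main() to run against the live site.
```

The second style is why "skip * 2 on every shot" is plausible: `page.evaluate` runs inside the page, so an injected script can call the game's own functions directly instead of simulating aim and timing through the UI.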

Surely it's reasonable to assume that a company doing some dubious 'marketing intelligence' scraping of people's data from a Chrome plugin is going to both inflate the numbers they put in offers and try to scam their way out of paying if you actually accept. I wouldn't consider them real offers. They're marketing. The real world payments, if you get them, would be lower.

Technically you're correct that low change frequency doesn't necessarily mean dead, but the number of projects receiving very few updates because they're 'done' is a fraction of a fraction of a percent compared to the number that are just plain dead. You can use change frequency as a proxy and almost never be wrong.

> I'm certain you can use change frequency as a proxy and never be wrong.

I (largely) wrote a corporate application 8 years ago, with 2 others. There was one change 2 years ago from another dev.

Lots of programs are functionally done in a relatively short amount of time.

"Accelerating or Dying" sounds like private equity's lazy way to describe an opportunity, not a metric to describe software.


That sort of project exists in an ocean of abandoned and dead projects though. For every app that's finished and getting one update every few years, there are thousands of projects that are utterly broken and undeployable, abandoned on GitHub in an unfinished state, or sitting on someone's HDD never to be touched again. Assuming a low change frequency means 'dead' is almost always correct, which makes it a reasonable proxy.

I know people win the lottery every week, but I also believe that buying a lottery ticket is essentially the same as losing. It's the same principle.


With respect, this is a myopic view. Not all software is an "app" or a monolith. If you use a terminal, you are directly using many utilities that by this metric are considered dying or dead.

It’s too slow for the scale of pretraining.

There isn't really such a thing as 'too slow' as an objective fact though. It depends on how much patience and money for electricity you have. In AI image gen circles I see people complaining if a model takes more than 5s to generate an image, and other people on very limited hardware who happily wait half an hour per image. It's hard to make a judgement call about what 'too slow' means. It's quite subjective.


If it would take so long to train that the model would be obsolete before training finished, that might be considered too slow. With ML you can definitely hit a point where it's too slow for any practical purpose.

Obsolete because of what? Because with limited hardware you’re never aiming for state of the art, and for fine-tuning, you don’t steer for too long anyway.

Because there is a new model that is better, faster, more refined, etc...

If your training time is measured in years or decades it probably won't be practical.


That's just playing semantics. Nobody is talking about "objective facts" or needs to define them here. If the step time is measured in days and your model takes years to train, then it will never be trained to completion on consumer hardware (which is the entire point).
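A back-of-envelope calculation makes the point concrete. Total wall-clock time is just steps times step time; the step count and timings below are illustrative assumptions, not benchmarks of any real model or hardware.

```python
# Total training time = number of optimizer steps * seconds per step.
SECONDS_PER_DAY = 86_400


def training_days(total_steps: int, seconds_per_step: float) -> float:
    """Wall-clock days for a run, ignoring checkpointing and restarts."""
    return total_steps * seconds_per_step / SECONDS_PER_DAY


# Illustrative pretraining-scale run: ~1e6 steps at 60 s/step on a cluster.
cluster_days = training_days(1_000_000, 60)        # ~694 days of compute

# The same run on hardware 100x slower: 6000 s/step.
consumer_years = training_days(1_000_000, 6_000) / 365   # ~190 years
```

At roughly 190 years, the run never completes in any meaningful sense, regardless of how patient you are, which is the asymmetry between "subjectively slow" and "practically impossible".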

Creator-led channels are competing with AI-generated video channels that pump out many videos every day. The ad spend hasn't increased but now it's shared with people who have automated their channel's content production and who are likely getting the majority of what's available.

Usenet was great in the late 90s and early 2000s. I posted a lot, and met some great people. I got a job doing tech review of books about WAP and WML from my posts in a group about the forerunner to mobile internet, and another job with a company making intranet software from some posts about ASP and vbscript. I've no idea where I'd go for that sort of forum today.

It works the same way here in the UK. Some companies ask your previous salary, and sometimes check your references, and sometimes your previous employer will disclose what your salary was. If it turns out you lied, nothing bad necessarily happens, but you've just given your new employer a reason to dismiss you.

The main problem with the UK system is that it means that if you were underpaid before you're likely to continue to be underpaid in your next role (if you accept a low salary again). For that reason when I'm hiring I've stopped asking for someone's previous salary, and just ask them what they want instead. If it's in the right ballpark everyone's happy. If they lowball themselves I ask why and usually get "That's x% more than I'm on now.", which leads to a conversation about how they're underpaid and should be asking for more. If they ask for too much then I just don't hire them because I can't afford them.

There's a new law coming in where companies have to disclose salary bands now, which at least means people will understand the bottom end. That's going to make the salary negotiation part of hiring a lot easier.


> For that reason when I'm hiring I've stopped asking for someone's previous salary, and just ask them what they want instead.

Why don't you post what you're paying in the job ad/offer? Some people skip ads without a salary or a salary range entirely because of the uncertainty. As the employer, you've obviously already calculated a range or a fixed number, so why ask the candidate?


Everyone just says the top of the range.

> and sometimes your previous employer will disclose what your salary was

How is this even legal?


I suspect it's not, but it's also hard to prove something has occurred in private communication between third parties.
