
It's indeed sad to see. Game networking techniques have been known for multiple decades and yet still haven't made their way into any of the major publicly available engines (notably Unreal and Unity). What gives?

As a game dev of 10+ years myself, I have a few theories:

1. Multiplayer networking is the "secret sauce" that creates moats/barriers to entry for incumbent studios. Think Rocket League or Fortnite or StarCraft 2. The tech exists, it's functionally well understood, but hard to implement at a high level of quality. Why not keep it to yourself? There is more money in holding onto your game revenue monopoly than trying to sell the network tech.

2. Game genres differ greatly in their networking needs. Rollback is great for fighting games but that's about it. StarCraft uses delay-based deterministic lockstep. FPS games typically use ad-hoc server-authoritative state syncs with client-side prediction. Other complex games use full determinism with rollback, sending only inputs over the wire. Some cheapo indie games use client-authoritative models (open to cheating but easy to implement, arguably fine for co-op games). There are even more variants and blended approaches, but you get the picture: there is no one-size-fits-all approach and many are mutually exclusive, so it's harder to package into an engine as a comprehensive solution.

3. Networking a game properly involves very leaky abstractions. It is impossible to write gameplay code for a networked game without understanding the nuance of the network model. This makes it substantially harder to develop the game, and this hurts the major game engines' marketability, with both major players Unity and Unreal guilty of selling themselves as "look ma no code required" solutions. Similar to point 1, not worth the money to sell this.
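To make point 2 concrete, here's roughly what delay-based lockstep boils down to (a toy sketch, not any engine's actual API; all names are made up). Peers exchange only inputs, and every client advances its identical deterministic simulation only once all inputs for the current tick have arrived:

```python
class LockstepSim:
    """Toy delay-based deterministic lockstep. Every client runs this
    identical simulation; only player inputs cross the wire."""

    def __init__(self, num_players):
        self.num_players = num_players
        self.tick = 0
        self.pending = {}  # tick -> {player_id: input payload}

    def receive(self, tick, player, payload):
        # Inputs are scheduled a few ticks ahead of when they were issued,
        # which hides network latency (the "delay" in delay-based lockstep).
        self.pending.setdefault(tick, {})[player] = payload

    def try_step(self, state, simulate):
        # Lockstep: stall until ALL players' inputs for this tick are here.
        frame = self.pending.get(self.tick, {})
        if len(frame) < self.num_players:
            return state, False  # waiting on the slowest peer
        # `simulate` must be fully deterministic (fixed-point math, seeded
        # RNG) or clients will silently desync.
        state = simulate(state, frame)
        del self.pending[self.tick]
        self.tick += 1
        return state, True
```

Because only inputs are sent, bandwidth stays tiny regardless of unit count — the classic reason RTS games picked this model — but one lagging player stalls everyone.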

I don't see a great way out of this unfortunately. The only good networking middleware I know of is Photon, and it's not exactly an easy-to-use product either. Hopefully we see better open source tooling in the future.


I agree. Sell the physically based rendering sizzle, keep the multiplayer networking sauce for yourself.

I don't buy that different genres differ greatly in their networking needs, though. Dota 2 uses the same networking architecture as Counter-Strike. Both of those games use the same underlying networking techniques as RuneScape 2 (Old School RuneScape). Those are a MOBA, an FPS, and an MMORPG respectively.

You use lag compensation (which people now refer to as rollback) in FPS games too. Yahn Bernier famously described it in his paper.

In fact, any type of granular timing-dependent gameplay requires lag compensation, otherwise you end up with situations like players leading their shots or missing entirely (Halo 1 PC).

In Halo 1 PC, the server actually broadcast a hit sound if you did make bullet contact. And those weren't trace-based (hitscan) either; Halo 1 bullets actually had travel time. So Bungie basically didn't do lag compensation correctly, and hacked a solution on top that gave players feedback to help smooth over multiplayer. Because bullets had travel time, it just felt natural. But make no mistake: it was still wrong, because when bullets landed on your client, they didn't land on your target on the server. You might have hit the player on the server, heard the feedback, but seen dirt fly up from the ground.
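For the curious, the core of lag compensation is a server-side rewind, roughly like this (a toy sketch; the names and history length are made up). The server keeps a short history of where everyone was, and when a shot arrives it tests the hit against the positions the shooter actually saw:

```python
from collections import deque

class LagCompensatedServer:
    """Toy server-side rewind for lag compensation."""

    def __init__(self, history_ticks=64):
        self.tick = 0
        # Ring buffer of (tick, {player: position}) snapshots.
        self.history = deque(maxlen=history_ticks)

    def record(self, positions):
        # Called once per server tick with everyone's current position.
        self.history.append((self.tick, dict(positions)))
        self.tick += 1

    def resolve_shot(self, shooter_latency_ticks, hit_test):
        # Rewind to the snapshot the shooter was looking at when they fired.
        target_tick = self.tick - 1 - shooter_latency_ticks
        for tick, positions in self.history:
            if tick == target_tick:
                return hit_test(positions)
        return False  # shot is older than our rewind window: ignore it
```

This is what makes hits register where the shooter saw the target, at the cost of the classic "shot behind cover" complaint from the victim's point of view.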

Basically, all real-time games have the same networking requirements.

If you made a 2D chess game today, you'd still have a game loop and networking would be a part of it, sending payloads probably only when players took their turns. That same basic design applies to FPS games.

Chess doesn't need client-side prediction, but as soon as you want to allow other players to see you moving pieces around based on cursor position, you're sending real-time data by payload over a time-stepped game loop.

You just don't need to predict anything, nor do you need to interpolate or anything else.
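Roughly, the shared loop looks like this (a sketch; `game`, `net`, and their methods are hypothetical stand-ins):

```python
import time

TICK_RATE = 20          # ticks per second; a chess game could run far lower
DT = 1.0 / TICK_RATE    # fixed timestep in seconds

def run(game, net, ticks):
    for _ in range(ticks):
        start = time.monotonic()
        for payload in net.poll():   # chess moves, cursor positions, inputs
            game.apply(payload)
        game.step(DT)                # advance the simulation one fixed step
        net.flush(game.outgoing())   # send this client's own payloads
        # Sleep off the rest of the tick to hold a steady rate.
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```

Prediction and interpolation, when a game needs them, are layers on top of this loop, not a different loop.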


Double Stallion Games | Senior Systems Programmer | Montreal, QC, Canada | Fully-Remote/Hybrid/On-Site, your choice | Full-Time | http://dblstallion.com/

We're a small (~25 people) independent games studio, currently developing CONV/RGENCE: A League of Legends Story, in partnership with Riot Forge. We are looking to hire a senior developer focusing on systems, tools, asset pipelines, and networking tech to help develop our next unannounced title. We are currently a Unity/C# shop but switching to Unreal is on the table, and regardless of your specific expertise, if you have a good amount of gamedev experience we are interested.

Some of the many perks of joining our team: 4 weeks vacation minimum, flexible hours, all working modes supported (WFH or on-site or hybrid), zero overtime, full health insurance, a dynamic team with a no-bullshit culture, work directly with the founders and contribute high-impact work, plenty of career growth opportunities as our team scales up, etc.

To apply, please visit: https://apply.workable.com/dblstallion/j/B9500DC831/ (or send me an email directly)


It's really quite sad that the regulation of autonomous vehicles has been so slow to come along. Public roads are filled with other drivers, passengers, and pedestrians who did not consent to be part of a large-scale beta test for partial driving automation that could fail at any time. I believe this is a case where self driving software should be default illegal until proven safe. Most companies in this industry, thankfully, seem to be moving carefully and rolling out their products conservatively; Tesla seems to think "move fast and break things" is an appropriate motto for 5000 lb projectiles on public roads.


I partly agree, but it's also sad that we need to add to the bureaucracy just to insist that professionals act professionally. As you point out, most companies are doing fine. But it wasn't just Tesla playing fast and loose; as far as I'm concerned Uber execs should be doing time for negligent homicide: https://www.npr.org/2019/11/07/777438412/feds-say-self-drivi...


> it's also sad that we need to add to the bureaucracy just to insist that professionals act professionally

That's how bureaucracy and laws form. Individuals (people or companies) do X. The rest of society doesn't like it. They outlaw or regulate it. That's why the rivers no longer catch on fire in the US.


Sure, but I'll note the other companies are self-regulating on this. Regulation tends to slow progress, so what I'd rather see is what places like Waymo are doing: acting like responsible adults so regulation isn't forced upon the whole industry prematurely.


Tesla is the big fish here. If Uber has one instance, then given the size of their program, Tesla may have hundreds.


Thing is, it wasn't just one incident, it was just one incident that resulted in a death.

When Ubers started self-driving it took just a few hours before there were videos on Twitter and YouTube of them driving right through red lights without a care in the world.


Uber: 1 death and halted the program

Tesla: many deaths, no halt, no improvement in the program. See: phantom braking


Let's not give them too much credit here. I think Uber's halt had more to do with Uber's change in CEO and the indictment of the guy who ran their self-driving car program. Plus the fact that it was a giant money sink with no short-term return being run in a company that has never been profitable and can no longer raise infinite investor money.


I don't think it matters whether or not you think Uber was acting responsibly or reflexively. The point is their program is done while Tesla's, which has seen far more fatalities, continues unabated.


It may not matter to you, but it definitely matters to me and to my point about professionals acting professionally. Uber is not a good example of responsible self-regulation; it's instead about them getting reined in by other circumstances.


I'd expect the FTC to get involved before NHTSA. Tesla's marketing is fraudulent.


If every driver on the road today was never drunk/distracted/enraged I would agree, but the reality is that humans driving cars kill other people every single day. We should fix this with a better system. Tesla and Waymo seem to be making progress. I don't expect them to be perfect but in the long run this will save lives.


You seem to be implying that in the short run it's ok for them to kill some extra people. One, I don't think that's necessary; Waymo is a good example of how a safer approach is also apparently no less effective. Two, you're presuming that we will get to self-driving cars that are economically viable and safer than current human-driven ones, something that is not a given. And three, it's not clear to me who gets to decide exactly how many unwilling people should be sacrificed on the altar of technological progress, but I hope it's not us and it sure shouldn't be Musk.


https://www.youtube.com/watch?v=zdKCQKBvH-A

Here's a video of a Waymo car in Phoenix. Around minute 17 it gets confused by a cone and stops in the middle of a two-lane road, straddling both lanes.

Then Google sends support, but just as they arrive, the car drives away from them.

They were lucky it didn't lead to an accident and as long as they keep the fleet to 600 cars then yeah, the accident rate will be much lower than Tesla's AutoPilot, shipped in more than a million cars.

My point is not to rag on Waymo, just to inject some reality.

We don't have a choice between "safe and unsafe" way of developing self-driving software.

We have a choice between "test software we know can't handle all situations on real roads and make it better based on that testing" or "we'll never have self-driving software".

Except the other option will rather be: "U.S. doesn't allow testing of self-driving software on real roads and a Chinese company will develop it and will capture a trillion-dollar market in U.S."


The Waymo car in that video drove extremely safely given that it was confused. It was conservative and thought it might have seen an obstacle, so it stopped. Did this inconvenience other drivers? Absolutely. But it was not a major safety risk. In fact, slowly coming to a stop is the legal and correct thing for a confused or impaired human driver to do. In comparison, Teslas seem to rapidly and suddenly brake for no explainable reason while traveling at fast speeds and do so routinely. Further, Teslas have other safety issues which are indicative of sloppy design (such as the fly-by-wire passenger doors that will trap back seat occupants in the car if the electronic system is disabled). This is a failure of Tesla specifically, and regulating to stop it wouldn't really slow down others like Waymo.


> ”Teslas seem to rapidly and suddenly brake for no explainable reason”

The reason is widely known. Phantom braking is caused by rogue radar reflections that confuse the car into thinking there’s an obstruction in its path, activating the AEB automatic braking.

The real question is, why does it happen more often with Teslas than with other cars equipped with radar AEB? Maybe Tesla’s is just more sensitive.


heh, i guess their fix to that is to just get rid of radars altogether.


Exactly. There's a big difference between approaching this problem with a "first do no harm" perspective and a "move fast and kill a few people" perspective.

And this part from the previous poster strikes me as a big problem: "They were lucky it didn't lead to an accident and as long as they keep the fleet to 600 cars then yeah, the accident rate will be much lower than Tesla's AutoPilot, shipped in more than a million cars."

That seems like an excellent reason to keep the number of active cars very, very small. Rather than, as stated, an excuse for shrugging at a death rate at least 1667 times higher.


100% agree! And I don’t think the approach needs to be “first, do no harm”. I would be very happy with “move at a normal pace and do your best not to kill anyone.”

But “move fast and kill people” is ludicrous and it’s exactly what Tesla is doing.


> We have a choice between "test software we know can't handle all situations on real roads and make it better based on that testing" or "we'll never have self-driving software".

This is like saying that we'll never have a cure for cancer if we can't experiment on the public without their consent.

The bar for medication, at least, is proving safety first before testing on large amounts of people and allowing the public to buy it.


> The bar for medication, at least, is proving safety first before testing on large amounts of people and allowing the public to buy it.

Except for vaccines, apparently...


> We don't have a choice between "safe and unsafe" way of developing self-driving software.

It's also entirely possible Waymo hasn't achieved peak perfection in its software development practices despite doing better than Tesla, and that another entity could do it more safely.


How many unwilling people are sacrificed because we won't ban alcohol and all impairing drugs without exception and then enforce those bans with immediate, Judge Dredd-style summary execution?

I bet you that'd reduce traffic fatalities dramatically too.

How far do you want to go to 'save lives'?

I guarantee you with 100% certainty I can design a society that will 'save lives' at every turn for every single activity, and I can guarantee you with 100% certainty you wouldn't want to live in it.


You seem to ignore that we've already tried banning alcohol and currently ban many drugs. We scrapped banning alcohol for a reason.


You are arguing a fake hypothetical. Tesla's accident data shows that their Autopilot feature reduces accidents.

https://www.tesla.com/VehicleSafetyReport

Fortunately, our regulators don't take a one-size-fits-all approach to how new technologies can be developed.


It doesn't, as it compares general rates with self-selected "good driving conditions" as defined by the software: only highways, only good-enough weather, only a well-maintained car.


Your notion is that everything is fine because, according to Tesla, a company with a leader known for telling whoppers, they are killing fewer people on net?

Even if we trust them on that stat, which I certainly don't, that still doesn't mean they aren't killing people unnecessarily.


If FSD is accurately described as “not always worse than a drunk driver” then please remove it from the roads.


I can't withdraw my consent to share the road with intoxicated or distracted drivers. That's just a fact of life. That doesn't mean I shouldn't be able to withdraw my consent to share the road with Beta software.

Plenty of businesses are able to build very effective safety mechanisms for motor vehicles without subjecting the general public to Beta software. To me this is a case where the ends do not justify the means.


I understand your point, but I would frame it stronger: Indeed, we, as a community, have withdrawn our consent to share the road with intoxicated drivers. You break laws if you drive intoxicated. That beta software doesn't break any laws, but maybe it should.


> We should fix this with a better system.

What if we made driving tests far more difficult (otherwise, you can drive a 50(?)CC moped that tops out at 35mph)? What if we had government subsidize rides home after going out drinking? There are a lot of solutions that don't involve "let companies test 5000lbs autonomous vehicles on public roads." Heck, those companies have enough money let them buy a city and test it on the now private streets.

> Tesla and Waymo seem to be making progress. I don't expect them to be perfect but in the long run this will save lives.

Let's assume for a second that the unstated assumptions I was alluding to above are correct: this is the only way to make a better system, and that system has to be tested on public roads. Why do we allow two private companies to reap the inevitable huge monetary rewards when we all pay for that system with an increased risk of dying in the meantime?


nah. just because some drivers do stupid things does not mean that we should allow this "FSD" travesty. The thing is: if you're a human driver and you screw up you pay the consequences. It's codified in the law and you are fully aware that there are consequences if you don't follow the rules.

Now, if you drive one of these "FSD" cars and it ends up in an accident where people die, who is responsible? Are you responsible? Is the car manufacturer responsible? Do we just write it off as a freak accident with a 1-in-a-million chance of happening again?


FSD != auto pilot


yes of course. fsd is only 3 letters but IMHO they’re both lies.


> Most companies in this industry, thankfully, seem to be moving carefully and rolling out their products conservatively;

Having used other products, I think this is objectively not true. I've seen "Pro" pilot accelerate itself into its own collision warning and randomly fail at basic curves.

I've seen video of Ford Copilot failing due to glare and jerking toward trees.

For some reason, Tesla just gets more attention. Some of the other beta-like behaviors are actually worse.


Tesla gets more attention because their CEO keeps promising the world[1] and selling these features as "Full Self Driving" instead of "Copilot".

[1] https://mobile.twitter.com/elonmusk/status/68627925129377792...


The others claim that you can drive without your hands on the wheel, despite the fact that the time required for takeover can be milliseconds.

I'm really not convinced that is objectively better, even with whitelisted roads.


I haven’t seen this, but I’m out of the loop. Which ones are claiming this?


GM SuperCruise right now, and Ford is claiming that their "Blue Cruise" product will be able to do it soon. There may be others, but those are the two that I'm familiar with.

I've also been observing a general trend of reviewers rating the performance of new vehicles based in part on how long it can go without nagging the user to hold the wheel. sigh


Quite a lot of milliseconds, in fact. For a visual stimulus it is 1/4 of a second minimum. That's about 18 feet at 50 mph.
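The conversion, for anyone who wants to check (the 0.25 s figure is the one cited above; the function name is just illustrative):

```python
def reaction_distance_ft(speed_mph, reaction_s=0.25):
    """Distance covered during a driver's reaction time, in feet."""
    feet_per_second = speed_mph * 5280 / 3600  # mph -> ft/s
    return feet_per_second * reaction_s

# e.g. ~18 ft at 50 mph, exactly 22 ft at 60 mph
```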


I think Tesla should definitely be held accountable for their software and the damage it causes, but overly risk-averse regulation would push out self-driving cars by a decade, maybe indefinitely.


Innovation always precedes regulation/safety. Look at the early automobile and the road marker[0]

[0] https://en.wikipedia.org/wiki/Road_surface_marking


I agree with your point that government regulation is fundamentally reactive to private sector innovation, hence lagging. That being said, this particular issue of autonomous driving has been a hot topic for the better part of a decade now and I would like our governments to tackle it.


>I believe this is a case where self driving software should be default illegal until proven safe.

If we go with your suggestion, what would you consider "proven safe"?

Autopilot has been running in hundreds of thousands of cars, has driven several billion miles, and we still don't have enough data to prove whether it is safer or more dangerous than an average human driver. Accidents are rare and we therefore need a huge amount of data before we can be confident that the accident rates we are seeing are predictive. I have no idea how you collect that type of data without them being tested on real roads with other real drivers.


How about actually passing some basic tests before they're deployed?

(Warning: unnecessarily loud video for some reason): https://twitter.com/finance_degen/status/1307529357951467531

Let's start with passing these basic tests before selling them as working features. I don't think we're asking for a very high bar.


There is zero context on that video that tells us what is happening there. We have no idea if that is the Autopilot failing or the emergency braking failing. We also have no control group to tell us what percentage of humans would stop short of that dummy. It is inexact due to Twitter's video player not showing fractions of a second, but it looks like there were approximately 2 seconds between when the dummy started moving forward and when it was hit by the car. The average human time to braking is 2.2-2.3 seconds[1]. Is the car even failing that test in comparison to a human?

It also isn't clear from watching that video what the safest and therefore desired behavior should be in that situation. A self driving car is obviously not going to prevent all accidents, so it is a question of minimizing potential harm. We don't want a car to aggressively brake whenever someone at a street corner takes a step towards the road. We therefore need to balance the chance of a person stepping into the path of the car with the risk of braking when it is unnecessary and causing a rear end collision. The problem in the linked thread is overaggressive braking so forcing the car to pass a test that rewards overaggressive braking would only make that specific example worse.

That leads back to my point about needing a huge amount of data. You can't just run a car through an obstacle course to know whether it is safer or more dangerous than a human. You need to have it interacting with unpredictable humans, and you need to do it repeatedly, before you can confidently predict whether it is safer or more dangerous than a human.

[1] - https://copradar.com/redlight/factors/IEA2000_ABS51.pdf


> We also have no control group to tell us what percentage of humans would stop short of that dummy.

IIRC, Subaru and other companies pass these simple emergency braking tests 100% of the time.

That test was a Chinese test IIRC, but the software doesn't change between countries. Similar tests have been done here in the USA by insurance groups to set insurance rates, but there is no government-mandated test for what "emergency braking" really means. Before you sell the feature to the public, let's actually have a government-mandated test similar to that video.

You shouldn't be allowed to call your stuff "autopilot" or "full self driving", or "emergency braking" or "pedestrian avoidance" (or some other set of words) unless you can... you know, avoid pedestrians and emergency brake in a well-controlled test.

Avoiding balloon people is enough of a test. But it's a well-known fact that Tesla repeatedly fails these simple tests, while other groups (e.g. the Mobileye group / Mobileye hardware) manage to emergency brake in time.

----------

IIHS test: https://www.iihs.org/news/detail/performance-of-pedestrian-c...

The issue is that 3rd party non-government groups (ie: IIHS) are the ones running these tests. There's no advocacy group for US consumers as far as I can tell. IIHS is primarily about serving their master (insurance companies).

Don't get me wrong: IIHS is doing good work here. But its not their job to protect the consumer.

EDIT: I got my sources mixed up. Tesla apparently passed the IIHS test.

It was the AAA test they failed: https://insideevs.com/news/377427/video-tesla-model-3-failed...


>IIRC, Subaru and other companies pass these simple emergency braking tests 100% of the time.

They do not pass 100% of the time unless you have a very narrow definition of "these simple emergency braking tests". No emergency braking system is foolproof.

>But it's a well-known fact that Tesla repeatedly fails these simple tests,

You say this while at the same time the source you include has Tesla in the middle tier of results.

Either way, my point is not that Teslas are safe or that they perform well on this test. The point is that this test does not tell you whether a car being driven by Autopilot is safer than a human.


> They do not pass 100% of the time unless you have a very narrow definition of "these simple emergency braking tests"

Let's get them working consistently under well-defined, standard, simple emergency braking tests before worrying about the real world.

Like not hitting a balloon dressed up as a pedestrian during clear skies in sunny weather. I don't care about rainy days until we get the bright / sunny weather figured out.


>Let's get them working consistently under well-defined, standard, simple emergency braking tests before worrying about the real world.

Automatic emergency braking is the exact wrong feature to use for your example. Either the driver sees the pedestrian and stops in time, and the automatic emergency braking is of no use, or the driver would have hit the pedestrian and any effort from the automatic system is a benefit. This is the type of feature that should be deployed as soon as possible, assuming it is not tuned so aggressively that it brakes for false alarms.


> we still don't have enough data to prove whether it is safer or more dangerous than an average human driver.

I would point out that, given the dangers involved with accidentally turning over a million vehicles into autonomous 5000 lb missiles, erring on the side of caution seems fine. The benefits are quite low: if the autopilot had been on since inception, between 20 and 100 lives would have been saved (I accept your "several billion miles" number and point out that the average fatality rate is 1.1 per 100 million miles driven, but that is based on averaging in 40-year-old cars with fewer safety features and shrinks every year). The costs could be astronomical: a simultaneous failure (security, mistraining, date bug, whatever) could result in hundreds of thousands or millions of deaths.
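The back-of-envelope version of that estimate, for reference (both inputs are rough: the fleet mileage is in the low billions, and the ~1.1 fatalities per 100 million vehicle-miles is the US average rate cited above):

```python
FATALITIES_PER_100M_MILES = 1.1  # approximate US average across all vehicles

def expected_fatalities(miles_driven):
    """Fatalities the average human-driver rate predicts for this mileage."""
    return miles_driven / 100e6 * FATALITIES_PER_100M_MILES

# e.g. 3 billion Autopilot miles at the average human rate predicts ~33
# deaths, which bounds how many lives a perfect system could have saved
```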

Which means there are three errors to consider. (1) Obviously, some things (bugs, exploits) are unknowns and there will always be an inherent risk there. I would say that these risks may forever make self-driving cars too risky. (2) It is difficult to come up with any actual test of driving skills. This is especially true because any test will suddenly become the target, so we have to have the test cover everything. (3) Actual driving errors: both of the above assume that the AI can drive as well as a person. That's obviously difficult to do. And we would need to see a huge improvement to justify adding a new risk factor.


>The benefits are quite low: if the autopilot had been on since inception between 20 and 100 lives would have been saved

This is only the case if you look at the current system as the finished product. The biggest benefit is that it gets us closer to a true self driving system. That would not only save millions of lives, but it would revolutionize logistics and economics of transportation which can in turn reshape society.

>The costs could be astronomical: a simultaneous failure (security, mistraining, date bug, whatever) could result in hundreds of thousands or millions of deaths.

I have no idea what scenarios you are imagining that could lead to "hundreds of thousands or millions of deaths." Almost every Autopilot death in the US makes national news. There is no way hundreds of people could die without there being some type of intervention in the system.


> Autopilot has been running in hundreds of thousands of cars, has driven several billion miles,

Has it really though?

> I have noticed for me at least it started happening after I updated at 2021.4.18.11

This implies that the functionality of Autopilot is constantly changing, presumably meaning each version has thousands of miles rather than Autopilot having 'several billion miles'. It doesn't seem like you can trust past performance if the users are to be believed.

My assumption is that OTA updates won't be allowed once this stuff starts requiring certification.


I think the first half of your comment is a pointless semantic debate. The Autopilot system has driven billions of miles. Those miles obviously all aren't equally relevant. The older miles lose value as the hardware or software changes. However those miles don't all become worthless anytime there is any software update.

>My assumption is that OTA updates won't be allowed once this stuff starts requiring certification.

It is unclear whether this would actually be safer or not. I am reminded of how both Tesla[1] and Toyota[2] had similar software problems with their antilock brakes. Both companies had a software fix relatively quickly. Tesla deployed the fix immediately to cars through OTA updates. Toyota issued a voluntary recall meaning its cars wouldn't be updated to the fixed software for months, years, or potentially ever.

[1] - https://money.cnn.com/2018/05/30/technology/consumer-reports...

[2] - http://www.cnn.com/2010/BUSINESS/02/09/japan.prius.recall/in...


It is a bit scary to think we'd have to lobby government to allow any new invention to prove "safety". Tesla has explained why it needs to enable these features in order to collect data to improve them, and it has also shown that driving under Autopilot already reduces accidents (compared both to driving without it and to national stats for all vehicles), and yet you call for an entire ban instead of thinking constructively. With the vision stack there will be improvements to the driver attentiveness checks; that would seem to mitigate abuse of the features, which are clearly meant to be supervised at all times by the operator.


It's quite simple, really: any software that pretends that it can be in control of a car should be subject to the same kind of test that ordinary drivers have to do before being allowed to take to the road. Then, the manufacturer should assume all liability for errors made by their product. Just like a real person would. They can choose to insure or self insure.


Those successfully tested ordinary drivers also kill many people every year. In 2019, in the US alone, more than 35,000!

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


Under all circumstances, over all of the miles driven. Tesla is very good at spinning things but in an apples-to-apples comparison they are doing much worse than those ordinary drivers.


Where do you have apples-to-apples comparison? As far as I know, this does not exist.

NHTSA is investigating Tesla, some 20-30 accidents. If they find that Tesla is doing something seriously dangerous, I'm sure they will force Tesla to take corrective actions.

The same applies to European regulators.


> Where do you have apples-to-apples comparison?

https://www.forbes.com/sites/bradtempleton/2020/07/28/teslas...


> But the Autopilot record ballparks to 1.1M miles between accidents off freeway and 3.5M on-freeway.

> By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles.

So Tesla's numbers are better even in the article you link, but it's still not an apples-to-apples comparison: accident vs. crash.

And Tesla is adding pro forma camera driver monitoring.


Cherry picking the one line that can be twisted like that is a bit silly don't you think?

Let's start with this segment of the title:

"Teslas Aren’t Safer On Autopilot"

Then, further down:

"Teslas are not safer with Autopilot on"

"Of the 2.1M miles between accidents in manual mode, 840,000 would be on freeway and 1.26M off of it. For the 3.07M miles in autopilot, 2.9M would be on freeway and just 192,000 off of it. So the manual record is roughly one accident per 1.55M miles off-freeway and per 4.65M miles on-freeway. But the Autopilot record ballparks to 1.1M miles between accidents off freeway and 3.5M on-freeway.

In other words, about 30% longer without an “accident” in manual (with forward collision avoidance on) or TACC than in Autopilot. Instead of being safer with Autopilot, it looks like a Tesla is slightly less safe.

But not a lot less safe. And if the predicted 3:1 ratio of accidents freeway to non-freeway is too high, it might even be about the same. But almost certainly not 1.5 times better as Tesla's numbers imply."

Which I think is a pretty fair and even handed evaluation of the available data.


You are comparing Tesla on Autopilot vs. no Autopilot, while I was responding to your claim that Teslas on Autopilot are doing worse than ordinary drivers.

> Tesla (...) are doing much worse than those ordinary drivers.

That was response to:

> Those successfully tested ordinary drivers also kill many people every year.

The not-apples-to-apples comparison indicates that Teslas are half as likely to cause an accident/crash when on Autopilot, and even less likely when not on Autopilot.

I find it also highly likely that Autopilot slightly increases the chances of an accident right now.


>where self driving software should be default illegal until proven safe.

Would you apply the same standard to human drivers?


Yes. It's called a driving license; in many countries it's only issued after you show you can drive safely.

On a more similar note, that's what we apply to aircraft. They are illegal to fly until proven safe to the authorities (ie. certified).


The same standard is applied to human drivers. I think it is called a "driving test".


> ”I believe this is a case where self driving software should be default illegal until proven safe.”

You have to remember that human drivers are extremely dangerous, and cause millions of deaths and serious, life-changing injuries worldwide every year.

Any regulations that hold back the development of self-driving software would likely be counter-productive. It’s a bit like arguing that we should have held back Covid-19 vaccines because a small number of people had blood clots or heart inflammation.


No one in government is qualified to really understand ML and to regulate that stuff. Does the government employ people at the level of Tesla in ML? It's quite the problem... when the best minds in the tech enclaves are not working for the public interest.


We don't need any understanding of AI/ML in government to effectively regulate the autonomous vehicle industry. Design an appropriate set of tests, make companies run the gauntlet, only approve the ones that pass. The test criterion is simple: does this software meaningfully and statistically significantly reduce the risk of accident/harm/death compared to the average human driver? Add caveats and conditionals as you wish for conditions/weather etc. but it's fundamentally a black box test with no knowledge of technical internals required.
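To make the "statistically significant" criterion concrete, here is a minimal sketch of the kind of black-box comparison a regulator could run: a two-sample Poisson rate test of a candidate system's crash rate against a human baseline. The crash counts and mileages below are hypothetical; the human baseline is the ~1 crash per 479,000 miles NHTSA figure quoted earlier in this thread.

```python
import math

def rate_test(events_a, miles_a, events_b, miles_b):
    """Two-sample Poisson rate comparison (normal approximation).
    Returns a z statistic; z < -1.645 means group A's rate is
    significantly lower than group B's at the 5% level."""
    rate_a = events_a / miles_a
    rate_b = events_b / miles_b
    # Pooled rate under the null hypothesis of equal rates
    pooled = (events_a + events_b) / (miles_a + miles_b)
    se = math.sqrt(pooled / miles_a + pooled / miles_b)
    return (rate_a - rate_b) / se

# Hypothetical candidate: 8 crashes over 10M autonomous miles, vs. a
# human baseline of ~209 crashes per 100M miles (1 per ~479k miles).
z = rate_test(8, 10_000_000, 209, 100_000_000)
print(round(z, 2))  # -2.77: significantly below the human rate
```

With these (made-up) numbers the candidate passes; the important point is that the test needs no knowledge of the software's internals, only logged miles and incidents.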


> statistically significantly reduce

And how does one prove something is true "statistically"?

By doing it many times. So many times that you can be statistically confident of the result.

Statistically, there are ~1.4 deaths per 100 MILLION miles driven.

To prove, statistically, that software is as good, it would have to drive at least 1 billion miles. And yes, it would kill 14 people in the process (or more, if it's worse than humans; or less, if it's better).
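A back-of-the-envelope sketch of this arithmetic (the exact mileage needed depends on how strong a claim you want to make; the "rule of three" gives the 95% upper confidence bound when zero events are observed):

```python
baseline = 1.4 / 100_000_000   # ~1.4 deaths per 100M miles (NHTSA ballpark)

# Expected deaths if the software merely matches human drivers:
print(round(1_000_000_000 * baseline))   # 14 deaths over 1 billion miles

# Rule of three: if ZERO deaths are observed over N miles, the 95%
# upper confidence bound on the true rate is 3/N. Miles needed for
# that bound to drop below the human baseline:
miles_needed = 3 / baseline
print(f"{miles_needed:,.0f}")            # 214,285,714 miles
```

So a perfect (zero-fatality) record needs roughly 200M miles just to claim parity at 95% confidence; demonstrating a meaningful improvement over humans takes far more, which is where the billion-mile figure comes from.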

Tesla is already doing what you say car companies should be doing i.e. statistically testing the software

Except you don't like it and think there's a magic fairy test that will show something "statistically" without driving a statistically significant number of miles.


Interesting factoid: the human brain processes sound faster than sight [1]. The difference cited in the paper below is roughly 40 ms, which is small in an absolute sense, but compared to the relative time tolerances we are discussing here in this parent article, it's huge!

Naturally this effect cancels out if all competitors get the same visual cue, however it's still to the benefit of athletes and fans to want quicker reaction times:

- Shorter overall reaction times means faster races means better records

- The standard deviation of reaction times is smaller for sound than for sight, which means the reaction time is more fair to all

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4456887/


Vision is amazingly sluggish. Turning an incoming photon into a nerve impulse involves a whole mess of slow (and fascinating) chemistry, just to leave the rod or cone. Once it does that, the resulting signal bounces around the retina and then a huge portion of the brain before it's available for "action."

The auditory system, on the other hand, is optimized for speed. It has a giant synapse (=connections between cells), called the Calyx of Held, that is specialized for extremely fast (sub-millisecond), reliable transmission between cells. They're really cool looking: https://www.eurekalert.org/multimedia/pub/213595.php?from=44...


There's some nuggets of truth in here, but I am disappointed that this article sidesteps what I feel is the most important reason for startup success in 2020: easy and abundant access to cheap capital.

- Interest rates are at all time lows, borrowing is cheap

- The Fed's balance sheet is at an all-time high. The economy is flush with cash, particularly the investor / VC class

- This excess cash creates an (arguably artificial) wealth effect and drives an appetite for risk

- Large unicorn startups that are perpetual money losers continue to operate only because they are effectively subsidized by regular capital raises. Look no further than all the Silicon Valley darlings such as Uber, Netflix, AirBnb, Tesla, and so on. All of them would cease to exist without continued capital injection from secondary share offerings or VC raises

- These companies achieve growth and put pressure on the competition by offering their services below the real cost that would be needed to achieve profit, hence driving huge share price growth

- This share price growth attracts new investment from the momentum-chasing crowd, increasing appetite for subsequent secondaries, and then the cycle repeats

I don't mean to be cynical, but it's hard to see this ending well for some of the nouveau riche. Tech has been a great avenue to riches by offering real innovation in some cases, but the article's error-by-omission really gives the wrong impression.


> - Large unicorn startups that are perpetual money losers continue to operate only because they are effectively subsidized by regular capital raises. Look no further than all the Silicon Valley darlings such as Uber, Netflix, AirBnb, Tesla, and so on. All of them would cease to exist without continued capital injection from secondary share offerings or VC raises

You should look up the financial statements of the companies in your list.


Apologies for some hastily chosen examples. I think the point still stands if you consider the following companies: WeWork, Lyft, Snapchat, Pinterest, Dropbox, Slack, Casper, Lime, Peloton, Beyond Meat, Wayfair, Zillow.

More generally speaking, take a look at Goldman Sachs' Non-Profitable Technology Index:

https://pbs.twimg.com/media/EsRVCiMXIAE7xlA.png


Would you have put Google and Facebook in this group when they weren't profitable? You are missing something - many companies have great long term economics even if they are losing money right now. Slack (if still independent) could easily be profitable - they just have to spend less on growth (i.e., largely they could cut their sales and marketing dramatically, along with other items). Snapchat the same, Pinterest the same. On some of the others I believe you might be by and large correct. The general statement that lots of companies are giving away dollar bills for $0.80 isn't right.


This is a common argument but would seem wrong to a lot of people. It's just unintuitive. Most people of our parents generation would find this argument slightly absurd, and our grandparents would find it entirely absurd.

It sounds weird because it assumes that every tech business has either insurmountable lock-in or insurmountable first mover advantage, without explicitly stating that. To fill in that assumption requires a lot of cultural knowledge of tech - the vast majority of businesses don't have either.

The problem is these assumptions are very likely incorrect for most modern "tech" businesses. It looks like over-generalisation. For example:

1. Slack has relatively little lock-in. The last company I worked for was in the process of migrating to Teams when I left. The justification was cost savings. Slack is trying to build network effects and lock-in with shared inter-firm channels, but most employees don't need to interact with other firms at least in today's business world, so even if Slack becomes the Bloomberg Terminal for the rest of us, it won't be the foundation of a huge business: only the people who need to communicate with other Slack-using firms will have Slack accounts and they charge by account.

2. Snapchat is a social network, and the iron law of social networks seems to be that they're at the top for only a relatively short period. Facebook is by far the longest lasting but even so, they've had to shore up that position by buying Instagram and WhatsApp. Snapchat's value won't last forever, so burning cash to get to the top in the hope of monetising it over the long run seems a bit optimistic.

3. Uber is basically a taxi firm. There is no moat there. Me using Uber doesn't really make it more useful for you, except in the sense that it attracts drivers. But drivers are capable of using multiple apps at once and switching between them. If Uber's prices were to increase really significantly, their market share would probably go into free-fall yet the hallmark of a company with lock-in is that they can charge very high prices for decades without facing competition.

4. Amazon was never able to convert high market share in retail to high profits. Its profits come mostly from AWS: a pure tech supply chain business.

It's also worth noting that Google and Facebook became profitable quite quickly relative to the sorts of companies people are criticising these days. It took Google less than 6 years to reach mega-profits. There are now firms that are doing Series G (!) raises, which aren't profitable after 15 years.


1. Good unit economics can come from various sources. The idea that Slack has little lock in and hence can't have good economics is misguided. Coke has little lock in, yet amazing economics. Google has little lock in, yet amazing economics. Sources of competitive advantage aren't restricted to lock in. Scale is a source of competitive advantage, for example.

2. Your point is hard to refute and may be correct - it isn't obvious they will do well or poorly over time.

3. This is almost certainly wrong. If Uber had no moat there would be more than Uber and Lyft in the US. It's going to be impossible for a new player to enter this market short of having self-driving all worked out.

4. Yep, it wasn't/isn't clear on Amazon's retail biz, I'll agree with that.

You have some examples which are correct - there are specific situations where you are right. But the statement cannot be generalized.


For (3) it isn't required to have a moat. Market dumping is sufficient. Nobody is going to compete with a firm that's selling below cost, as Uber/Lyft are doing. But, I've used non-Uber/Lyft taxi apps that were perfectly competent. It's not that hard to build such an app, especially if you 'just' want to sell taxi rides in local jurisdictions instead of any conceivable moving service in every possible geography. If/when they stop burning money, only then we'll find out what kind of moat they have.


Uber is a public company. They do over 50% gross margins. One may reasonably suggest some of their marketing expense should be in the COGS, but even if you take all of it, they are still GM positive. Similar story for Lyft (also public).

These stories persist, of companies deliberately running their companies so as to give away a $1 for $0.80 or whatever - but with very few exceptions it's a bs story.


I think we've reached the depth limit here, so this is an answer to the comment below.

I'm spinning it as positive because I'm attempting to break out their fixed and variable costs. If one looks through the line items and imagines themself as CEO, there are items which can easily be cut and/or don't need to climb as revenue climbs. E.g., their R&D is probably too high and doesn't need to double for their revenue to double. By breaking out their costs one can figure out whether they are likely to be "perpetual money losers" or just "currently money losers", which is the question at hand.

They are valued in public markets at over $100 billion. Usually the public markets do a reasonable job at getting a reasonable number for a company's value. Thinking about Uber's cost structure in some depth is how folks have arrived at that number.


Yet they make a loss. A $6.7 billion loss if I read their 2021 financial report correctly. I don't understand how you can spin this as "positive". Gross margin positive is not the same thing as being profitable, which is what we're talking about.


This is an excellent answer and took the thoughts right out of my mind.

To elaborate a bit for GP: No I would not have classified early stage Google or Facebook or Amazon as "perpetually unprofitable tech" because, frankly, they clearly weren't. All 3 of these companies had strong, obvious moats that enabled them to preserve pricing power. All 3 of these companies had a small handful of initial cap raises and have grown entirely via Free Cash Flow ever since. The fact that all 3 of these companies continue to operate with staggering profitability decades later confirms this. (You can of course debate the ethics of having such a moat, antitrust etc., but you cannot deny it's there.)


It's incredible that Amazon was able to build such a moat in the early days. Logically, you'd think that somebody like Sears (which became a huge company in the first place by offering the convenience of shopping from home!) would have made a big early investment in online shopping and become gigantic.


Why Dropbox? Did something change?

I haven't paid attention for many years, but at one point it was an extremely profitable company.


Cloud storage has become a commodity and cheaper elsewhere.


Are the fake-meat companies tech companies? I thought they are more like contract manufacturers, brewers, or other industrial foodstuffs.

No doubts on the access to cheap debt, though.


WeWork was also never a tech company but pushed really hard to brand themselves that way. If evaluated truthfully as a real estate company, the money they raised was hilariously idiotic.

So much of this world is driven by idiotic speculation based on slick websites and charismatic presenters.


I just finished reading Billion Dollar Loser and I think you might like it.


There's a ton of tech behind Beyond Meat. Their core science is based on research from an RNA professor at Stanford. That proteomics research is the reason their burger tastes so much better than previous generation veggie burgers.


If you're referring to Patrick O. Brown, you might be thinking of Impossible Foods, not Beyond Meat.


Tech company means they deal with software. Beyond Meat isn't a tech company.

Or is pharma part of tech in your definition? What about NASA?


I think it depends purely on your definition of "tech".

Impossible engineering soybeans to produce more heme to make fake meat behave more like meat is a technology, in that it is a novel innovation applied to solve a real world problem.

But in modern common parlance "tech" tends to mean that a company's offering is either entirely software or heavily augmented by new software, or that the company has ties to a specific network of talent/investors/etc, or that the company has very low incremental costs per user. By those criteria, they might not be a tech company.


How are they not?

It's technology innovations in food rather than electronics, but it's still technology innovations that are based around disrupting the existing industry.


Marketing companies would be my opinion. Fake meat has been around for years, but has become trendy again in the last couple of years. (Maybe it's better now, but that is subjective).


About half the companies you mentioned are cash-flow positive, so their economics work; they just have accounting charges for depreciation and other non-cash lines on their income statement. They are default-alive companies using accounting practices to avoid taxes, but they have more money coming in than going out.


> These companies achieve growth and put pressure on the competition by offering their services below the real cost that would be needed to achieve profit, hence driving huge share price growth

Which I find weird: if I sell fruit at a loss to run a local competitor out of business as a major supermarket, it's illegal predatory pricing (or at least it was when I was growing up), yet do it to an entire industry and it's fine. Maybe this is just one of those US exception things.

Missing (or implied) by your list is that incumbents are not an all-or-nothing gamble based on other people's money, so are reluctant to engage in such tactics themselves and will suffer for it.


I don't think this covers all of it, but I feel like the reason it could never be investigated is that most of these things are only unprofitable because of overhead and administration, not on a per-item basis.

Netflix may lose money on their shows, but clearly by giving you a month free and then charging just $12 a month they are making money on that unit. Airbnb is making money on each additional rental they do, even if historically that didn't cover all marketing and tech.

For a store selling berries, if they buy them for $3 and sell them for $2 then it is more tangible (though I'm sure today they could just call it marketing).


I guess that makes sense and is one consequence of moving so many things to an abstract 'service' model. Many services are essentially free to provide - until you count all the other overheads any business also has to pay for. It's not as simple as buying and selling berries like you say.


Your comment leaves me wondering if we're reading the same essay.

PG specifically states that the "main reason it's easier to start a startup now is that it's cheaper". And, "cheaper" comes in the form of lower infrastructure costs, lower advertising costs, and lower cost of capital.

>> But the main reason it's easier to start a startup now is that it's cheaper. Technology has driven down the cost of both building products and acquiring customers...now investors need founders more than founders need investors, and that, combined with the increasing amount of venture capital available, has driven up valuations.


I'm not sure what your point is. The article says it's "cheaper to do". The person you're responding to says "there's access to cheap capital".


> now investors need founders more than founders need investors, and that, combined with the increasing amount of venture capital available, has driven up valuations.


These companies could easily cut their marketing budgets in half and basically be profitable. They could also cut their R&D and focus only on their main money streams and be profitable. There is just no reason to be profitable, when you can raise more money.


I think everyone just wants to get a piece of the future. The companies of today, if successful, will become huge conglomerates given the accelerated globalisation.


Netflix and Airbnb are profitable, and as far as I can tell the others could be profitable but aren't because they want to stay huge / have the capital access to do so. When you have unlimited money pouring in, what's the point of profit other than a checkbox for Wall Street? You can pay everyone their wages, compete fiercely, grow like crazy, and so on. Profit is just inefficiency - we don't have a use for this money so we just put it in the bank.

I don’t disagree that Uber and Tesla and many of these are on life support currently, but from a game theory perspective, if you have unlimited access to money, why bother being profitable when you can just spend it getting bigger and smarter?


There is one fairly obvious reason: survival is the great filter. Anything that's not under survival pressure is almost by definition not good in the long run. If something is bad, you want it to die quickly before it becomes too big to fail.


Isn't that in a way a Ponzi scheme? Companies that don't bring value and need a supply of investors... that does ring a bell. Why doesn't the regulator look into it?


No no, there are lots of differences. In a Ponzi scheme, the early investors get paid by the new investors, and the company doesn't have any real business other than gathering new investors.


You and GP are arguing two different things and are both right.

GP's claim is that Tesla would not be profitable without regulatory credit sales: this is true. Tesla's profit for 2020 is $721M and its credit sales for 2020 are $1.58B, just over double. It's fair to say that, were those credit sales to fall to zero, Tesla risks losing its profitable status. Here we're effectively discussing net profit margin for the company as a whole.

Your claim is that Tesla's automotive gross margin on car sales is 20%. This is also true, but only includes COGS (Cost of Goods Sold), so car parts and assembly costs. It does not include other expenditures such as CapEx or R&D. 20% sounds great (and it is), but when we look at the net profit margin, $721M of profit on $31.54B of revenue gives a net profit margin of only about 2.3%, which is not as impressive.
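For concreteness, a quick check of the arithmetic using the figures quoted above (all numbers as stated in this thread, not independently verified):

```python
profit  = 721_000_000          # Tesla net profit, 2020 (as quoted)
credits = 1_580_000_000        # regulatory credit sales, 2020 (as quoted)
revenue = 31_540_000_000       # total revenue, 2020 (as quoted)

# Strip out credit sales and the bottom line flips negative:
print((profit - credits) / 1e6)          # -859.0 (millions)

# Net profit margin for the company as a whole:
print(round(profit / revenue * 100, 1))  # 2.3 (percent)
```

So both readings hold at once: a healthy 20% gross margin on cars, a thin ~2% net margin overall, and a net loss if credit sales vanished tomorrow.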

It's therefore rather unfair to say that GP's claim is a misconception, it's actually perfectly true.


He said the more cars they sell, the more money they would lose without credits, which is a misconception.


I too am skeptical (but happy to be proven wrong) about Honda's L3 claims. I think the "automaker optimism" is really just referring to Tesla loudly claiming they "will hit L5 this year" every year since 2015 while continuing to struggle. I don't see other automakers making such aggressive promises, thankfully.


I am hopeful, coming from Honda, that they have solved some of the critical issues. I was really impressed with their rider assist [1] for their new motorcycles and the ability to balance a bike. Perhaps that is a small feat compared to what level 3 will require, but I am confident Honda will pull it off. This opinion is also based on the fact that I have a couple of 37-year-old Honda motorcycles that still to this day run strong and hard. I am just optimistic they can do what they say.


They mention in the article that the smell is produced synthetically (presumably for this exact reason).


If the natural compound is not shelf stable I wonder what the difference with the synthetic is.


The synthetic could be more powerful, something like the difference between THC and synthetic THC.


Perhaps you're confusing THC with synthetic cannabinoids. The molecule doesn't care whether it's made by plants or at a lab.

Plants yield a mixture of cannabinoids at some concentration, which constitute ~10% of the dried plant matter. These can be extracted and separated with various techniques. The effects of pure THC are different than that of the mixture.

Upcoming schemes use engineered micro-organisms to produce specific cannabinoids, and the efficiency of these systems is ramping up. My understanding is that chemical synthesis of cannabinoids is quite an inefficient way of manufacture.

Nonetheless, it's all the same - covalently bound atoms :)


Please try refuting the ruling on its reasoning instead of peddling conspiracy theories.


Your claim relies entirely on the veracity of Tesla's safety report. Unfortunately I am inclined to mistrust any such vehicle safety claims directly from Tesla's own website. There is a conflict of interest in that they are incentivized to publish the most favorable numbers. We should be rightly cynical and rely only on numbers from an independent third party, as we would normally do for other companies with a less favorable reputation. Until we have such independently verified stats, we can't take those claims at face value.

I agree that anyone who feels uncomfortable with Tesla's ADAS should simply turn it off. But don't you think it's unfair for the company to place an unfinished product in the hands of a consumer and pass off the risk-management responsibility to them? Tesla's cavalier attitude towards autonomous vehicle safety is concerning to me. They seem to adopt the "move fast and break things" approach, whereas teams like Waymo and Cruise are releasing things in a slower and more controlled manner. And not coincidentally, they've had far fewer accidents that way.


If they truly believe that the autopilot driving is safer than unassisted, they might reasonably consider it an immoral act not to ship it. Given the number of lives lost in auto accidents every day, the issue is not as clear-cut as you're suggesting. It's a real-world trolley problem.


No. If they know that it’s safer in some respects but unsafe in others they should make sure it’s only used as an assistance system and make sure the driver is still watching the road when it’s turned on.

What’s immoral is that they allow people to turn this thing on and then play Candy Crush on their phone while the car does the driving, fully aware that this might lead to fatal accidents in some cases. Their marketing is misleading because they suggest that the system is safe and they don’t inform customers about the freak accidents that can happen when the system glitches (bad for sales). That’s not very ethical if you ask me.


They require pressure on the steering wheel and make it quite clear that you're responsible for taking control at any time when you activate autopilot.

You and I probably agree that this might be wishful thinking to expect some portion of the driving public to do so faithfully.

Perhaps additional technology can help better guarantee attention and participation on the part of the driver.


> If they truly believe that the autopilot driving is safer than unassisted, they might reasonably consider it an immoral act not to ship it.

Then Tesla should be giving it away, not charging extra for it. To do otherwise would be immoral.


That's an absurd suggestion. Every car company charges more for advanced safety features.


That's exactly my point: morality has nothing to do with it.


They do.

Autopilot is standard in every single car Tesla sells now, for exactly this reason.

Here is the blog post on their website where they talk about making it standard: https://www.tesla.com/blog/update-our-vehicle-lineup


As well as bundling it, prices have been bumped up; to me that is not giving it away for free.


Sorry I misunderstood your previous comment. I assumed you meant that they should include safety features for no extra cost, which they do, not that they shouldn't charge anything at all for safety features, which is such a weird and extreme take that I'm not sure what to even say...


My original comment was around the point of mixing morality and business.

> I assumed you meant that they should include safety features for no extra cost, which they do

I could buy a car without it for $37,500, and now I can't; but I can now buy a car with it for $39,500. You would still consider this no extra cost?

I must be misunderstanding what you are saying


There’s no moral problem with increasing the price when it becomes a better product.

There arguably is a moral problem selling a product with optional safety features that cost extra.

If tesla invented immortality fields within their vehicles, I have no doubt they would jack up the price, and that would be fine, because it’s now a much better car. To think otherwise would be silly.

