cross-posted from: https://derp.foo/post/81940
There is a discussion on Hacker News, but feel free to comment here as well.
Because the car didn’t recognize it as a red light, probably due to all the green lights that were facing a similar direction.
The issue is not the speed at which it took the turn, but that it cannot distinguish which traffic lights are for the lane the car is in.
Then why have I been forced to do all those ReCaptchas?
That’s for Google’s cars, not Tesla’s. Why would companies share safety-related data?
Check out the interface screen, specifically from 0:17 to 0:21. I think the navi is operating in a developer mode. It shows what the FSD senses.
Interestingly, it seems to sense the lights accurately for a moment, but it also senses them erroneously at times, and the flickering suggests it wasn’t able to come to a confident determination. If that’s the case, this thing should have been built to fail safe.
As the operator states, that’s a highway. They were taking quite the risk to chance it a second time, which is the iteration captured on video.
If you’ve watched any of their recent AI talks, they talk a lot about these unusual and complex intersections. Lane mapping in complex intersections is one of the hardest problems. Currently they’re taking data from numerous cars to reconstruct intersections like this, turn them into simulations, and train on them so the system learns increasingly complex situations.
There really are only two options: solve this with vision and AI, or solve this with HD maps. But it has to be solved.
If it sees red and green, it should take the safe option and stop until it is sure or the driver takes over.
It should stop when it’s unsure, but for whatever reason that failed here; it seemed sure.
I’ve had the car slow down in unsure situations before, so it can and does. It just got this one very wrong for some reason.
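The fail-safe behavior described above could be sketched as a simple decision rule. This is a hypothetical illustration, not Tesla’s actual logic; all names (`LightDetection`, `decide`, the confidence threshold) are invented, and a real perception stack is vastly more involved:

```python
from dataclasses import dataclass

@dataclass
class LightDetection:
    color: str         # e.g. "red", "yellow", "green"
    confidence: float  # 0.0 to 1.0

def decide(detections: list[LightDetection], threshold: float = 0.9) -> str:
    """Fail-safe rule: proceed only on confident, unanimous green.
    Conflicting colors or low-confidence (flickering) detections
    mean stop and/or hand control back to the driver."""
    if not detections:
        return "stop"
    colors = {d.color for d in detections}
    if colors != {"green"}:
        return "stop"  # any red/yellow, or mixed signals: fail safe
    if min(d.confidence for d in detections) < threshold:
        return "stop"  # uncertain detection: fail safe
    return "proceed"
```

Under a rule like this, the mixed red/green detections visible on the screen would have forced a stop rather than a turn.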
deleted by creator