The SF Chronicle reports that Uber's self-driving car was equipped with sensors, including video cameras, radar, and lidar, a laser-based form of radar. Given that Herzberg was dressed in dark clothes, at night, the video cameras might have had a tough time: they work better with more light. But the other sensors should have functioned well during the nighttime test.

Now, Uber has reportedly discovered that the fatal crash was likely caused by a software bug in its self-driving car technology, according to what two anonymous sources told The Information. Uber's autonomous programming detects objects in the road. Its sensitivity can be fine-tuned to ensure that the car only responds to true threats and ignores the rest; for example, a plastic bag blowing across the road would be considered a false flag, not something to slow down or brake to avoid. The sources who talked to The Information said that Uber's sensors did, in fact, detect Herzberg, but the software incorrectly identified her as a "false positive" and concluded that the car did not need to stop for her.

The Information's Amir Efrati on Monday reported that self-driving car technologies have to make a trade-off: either you get a car that rides slow and jerky as it slows down or slams on the brakes to avoid objects that aren't a real threat, or you get a smoother ride that runs the risk of the software dismissing objects, potentially leading to the catastrophic decision that pedestrians aren't actual objects.

Uber car software detected woman before fatal crash but failed to stop – Naked Security
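To make that trade-off concrete, here is a minimal sketch of a detection-confidence threshold. The names and numbers are purely illustrative assumptions, not anything from Uber's actual system:

    # Hypothetical sketch of a detection-sensitivity threshold.
    # All names and numbers are illustrative, not Uber's real code.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # classifier's best guess ("pedestrian", "plastic bag", ...)
        confidence: float  # how sure the classifier is that this is a real obstacle

    # The tuning knob: raise it for a smoother ride, lower it for more caution.
    BRAKE_THRESHOLD = 0.7

    def should_brake(d: Detection) -> bool:
        # Anything below the threshold is dismissed as a "false positive",
        # which is exactly the failure mode described in the article.
        return d.confidence >= BRAKE_THRESHOLD

    # A pedestrian in dark clothing at night may score low confidence:
    print(should_brake(Detection("pedestrian", 0.55)))   # False -- car keeps going
    print(should_brake(Detection("plastic bag", 0.30)))  # False -- correctly ignored

Raising that one constant buys a smoother ride at the price of dismissing more real obstacles; lowering it does the reverse. That is the whole trade-off in one number.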
Looks to me like they made the wrong decision in the "alpha" version. So.....when driving down the road on a "normal" day, how many "objects" bigger than, say, a golf ball really appear in your path that you do NOT want to avoid??? I propose that the answer to that is NONE.....and the programmers need a good dose of common sense.....right after they get a stout kick in the behind.
so is it a software bug, or programming to avoid a jerky ride? doesn't sound like the same thing to me.
Maybe the answer is to have HUMANS drive the car. This is another case of dumbing down the population. We're getting dumbed down in enough other ways.
In the first thread on this, I mentioned soon after the autonomous accident that a local teenage girl was run down and killed in a school crosswalk by a speeding driver who fled the scene but was caught later. He's a druggie and a criminal with a long rap sheet; pretty much the human equivalent of a "software bug". Not sure how this is an answer.
You are a bit behind the times, it seems. Programming mistakes have been referred to as "software bugs" for decades now. It tends to make people believe that it really isn't human error......when it usually really IS.
Likely that it IS the same......if it turns out that "programming it to avoid plastic bags" was a tactical mistake. AND, the point IS that calling things like this a "software bug" sounds like you are blaming the code for doing things it should not do. That is only true in science fiction.......so far. When software fails, it is ALWAYS the fault of the humans that wrote it.
Or the hardware that runs it. Or the hardware documentation that guided the programmer. Or the API coder. Or... My favorite was from back in the 1970s, when an operator threw a switch as the input transaction tape started rewinding, thinking the program was done and he could set up early for the next run. For that switch, the documentation said "results unspecified". We got this complaint that the US Government was losing about $30 million each time it happened, but they didn't know he was doing it. Until a human from the computer company, there 24x7, happened to notice. We taped over the switches and your money was safe again. 37 years in the guts of Operating Systems, and I've seen almost everything.
Well then, that would not be a "software" problem. Point is: the code itself is NEVER the real cause. It all comes back to an incorrect human input somewhere, whether it is in writing the code or in writing the documentation for the CPU or other hardware. There is no such thing as a "software bug". That is a programmer's CYA lie.
Some examples from my own career:

- Excessive usage of blocking interrupts: blocking all interrupts during memory clean-up, causing device drivers to intermittently fail. Vendor error, fixed after a hallway conversation at DECUS.
- Excessive blocking interrupts: a user-written IRG clock driver that disabled interrupts and polled instead of using them. Co-workers' errors.
- Overwriting I/O buffer space: a channel control program problem built into firmware. Vendor engineering error.
- Using the data interface to report a data interface error: locked up the interface until a computer reboot. Design error; I wrote a device interface reset kludge to avoid reboots.
- Random software failures: no one read the linker output to notice exceeded segment boundaries and subroutine calls across those same boundaries. User coder errors.
- Error interrupt routine that failed to properly save registers: random failures when handling an error. That one was mine; its correction was blocked by the software configuration management board in 1982 at GSFC, only to be fixed in 1990 at MSFC, a week before I started work with my new employer.

In 1991 I ran out of OS problems, but they had these intermittent network problems. So today I use Debian and source code builds of the kernel and drivers.

Bob Wilson
The recent hacks have found ways to compromise the firmware in network and device controllers. Even more clever is EFAIL: the PGP CFB gadget attack was assigned CVE-2017-17688, while the S/MIME CBC vulnerability was given CVE-2017-17689. An email consists of a header and a body separated by a blank line. The encrypted body is hard to break, but this hack inserts lines into the unencrypted header. Apparently those lines can trigger a forward of the email once it has been decrypted and read: decrypting and reading the body works fine, but the header modification lets the now-open text be forwarded to the attacker. It is a flaw in the email clients that honor these forwarding header lines.

Dad taught me to be a diagnostician, which is good news and bad. The good news is 'the world is broke and I'm here to fix it.' The bad news is 'the world is broke and needs fixing.' We can usually treat ignorance. But teaching someone how to be a diagnostician . . . I haven't figured that out.

Bob Wilson
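To illustrate just the structural point above (not the actual exploit), here is a tiny sketch of the header/body split at the first blank line; the injected header line and addresses are hypothetical:

    # Sketch of the header/body split described above: everything before
    # the first blank line is unencrypted header that the client will honor.
    raw_email = (
        "From: alice@example.com\n"
        "To: bob@example.com\n"
        "X-Injected-By-Attacker: yes\n"   # hypothetical tampered header line
        "\n"
        "-----BEGIN PGP MESSAGE-----\n"
        "...ciphertext...\n"
        "-----END PGP MESSAGE-----\n"
    )

    header, _, body = raw_email.partition("\n\n")
    print(header)  # includes the attacker's line -- it was never encrypted
    print(body)    # the ciphertext is intact; the weakness is what the
                   # client does with the tampered header after decryption

The crypto never gets broken; the attacker only has to tamper with the part that was never protected in the first place.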
Sun starbursts and reflections... Actual plastic bags and other debris blowing around in the wind... Tumbleweeds... Sprinkler spray... Birds that dive in front and come back out... Lots of things; that's why the software does it. You'd break your neck if the car put on the brakes for every little thing.
All this shows that humans have actually been doing a pretty amazing job with the complex task of driving vehicles! At least until cell phones came along . . .
Maybe. But I don't remember encountering most of those things on a regular basis.....and some almost never. AND there are other actions in addition to just SLAMMING on the brakes. I am of the opinion that it will be a really LONG time, if ever, before we can trust a computer to recognize and properly identify EVERY object that might appear in your path. Because of that, I think it needs to take some action on any object larger than a bird, and certainly on any object larger than a 2-year-old.......for instance. This strikes me as a system that needs to be 100% perfect or not be in use at all.
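Something like this, as a rough sketch; the size thresholds are illustrative guesses standing in for "bird" and "2-year-old", not values from any real system:

    # Rough sketch of the size-based policy proposed above.
    # Thresholds are illustrative guesses, not real-system values.
    def respond(object_height_m: float) -> str:
        if object_height_m >= 0.9:     # roughly 2-year-old sized or bigger
            return "brake"
        if object_height_m >= 0.25:    # roughly bird sized or bigger
            return "slow down"         # an action besides SLAMMING on the brakes
        return "ignore"

    print(respond(1.7))   # adult pedestrian -> brake
    print(respond(0.3))   # small animal     -> slow down
    print(respond(0.05))  # road debris      -> ignore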
Humans are way less than 100% yet allowed on the roads. You hear about auto-pilot crashes but something tells me non-autopilot crashes happen sometimes... Most likely lots of them every day, many fatal. Sometimes a 99.9999% solution is good enough or preferable to a 99% solution.
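For a rough sense of what those extra nines mean, here is a back-of-the-envelope sketch; the daily trip count is an assumption for illustration only:

    # Back-of-the-envelope comparison of the two reliability levels
    # mentioned above. The trip count is a made-up illustration.
    trips_per_day = 1_000_000  # hypothetical daily trips in a big city

    for success_rate in (0.99, 0.999999):
        failures = trips_per_day * (1 - success_rate)
        print(f"{success_rate:.6%} reliable -> ~{failures:,.0f} failures/day")

    # 99% reliable       -> ~10,000 failures/day
    # 99.9999% reliable  -> ~1 failure/day

Under those assumptions, the difference between 99% and 99.9999% is roughly ten thousand incidents a day versus one, which is why "good enough" depends entirely on how many nines you actually have.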