Self Driving Uber Fatality - Thread II

RE: Self Driving Uber Fatality - Thread II

(OP)
I am trying to think through the problem of a pure LiDAR vehicle driving down a road looking for hazards. Here is the scenario.

  1. The vehicle moves along the road at some maximum safe velocity, defined by the conditions that follow.
  2. If the vehicle sees a hazard, it must have the option of decelerating to a complete stop at 1/2 G.
    This deceleration is gentle enough that a vehicle behind will be able to match it and not rear-end the robot.
  3. There are various moving hazards. A hazard 0.6m (2ft) tall by 0.3m (1ft) across simulates a small child, who is capable of running at 2m/s. An alternate target could be bigger and faster.
  4. The LiDAR must have complete coverage, i.e., there cannot be gaps between the laser spots.
  5. The laser spots must be small enough to resolve the target. It does not have to identify Billy Smith of 123 Any Street, but it must be able to see that there is an object.
  6. The scan rate and the resolution must be enough that the object cannot move more than 50% out of the position in which it was spotted during the previous scan. This gives the AI a chance to recognize that these are the same objects.
  7. The target may not be running in a straight line. I think a straight line across at full speed is the biggest problem, but I am not sure.
  8. The round-trip time between the laser firing and the receiver capturing the return must be less than the laser pulse period. We don't want multiple pulses in flight from the same LiDAR.
  9. We need some AI solution for recognizing and rejecting laser spots from the vehicle next to the robot.
  10. Almost all of the problems with speed and resolution are in front of the vehicle. If the vehicle has two LiDARs, one can watch forward with a 40° FOV, and the other can scan more slowly at lower resolution over 360°. On the highway, scary things behind you are big and close. If you are backing up, you are doing it at low speed.
Note how this imposes limits on the speed of the vehicle, as well as on the field of view of the laser and receiver.
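
For concreteness, here is a minimal Octave sketch of the constraints in items 2, 5 and 6, using assumed values for vehicle speed and frame rate (illustrative numbers, not a real sensor spec):

CODE --> Octave

% Illustrative constraint check for the pure-LiDAR scenario above.
% The speed and frame rate are assumptions for illustration only.
g = 9.81;                 % m/s^2
v = 80 / 3.6;             % vehicle speed, m/s (80 kph assumed)
a = 0.5 * g;              % item 2: 1/2 G deceleration
d_stop = v^2 / (2 * a);   % distance to a full stop from speed v

target_w = 0.3;           % item 3: 0.3 m wide child-sized target
target_v = 2.0;           % item 3: target runs at 2 m/s
f_frame  = 10;            % assumed LiDAR frame (scan) rate, Hz

% Item 5: spot spacing at the stopping distance must be smaller
% than the target width, or the target can fall between spots.
max_spot_pitch = target_w / d_stop;             % rad

% Item 6: between frames the target may move at most 50% of its
% own width, which sets a minimum frame rate.
min_frame_rate = target_v / (0.5 * target_w);   % Hz

printf("Stopping distance:       %.1f m\n", d_stop);
printf("Max spot pitch at stop:  %.2f mrad\n", 1e3 * max_spot_pitch);
printf("Min frame rate (item 6): %.1f Hz (assumed %.1f Hz)\n", ...
       min_frame_rate, f_frame);

Note that at the assumed 10 Hz, a 2 m/s target covers 0.2 m between frames, slightly more than the 0.15 m that item 6 allows, so either the frame rate or the target assumptions have to give.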

--
JHG

RE: Self Driving Uber Fatality - Thread II

That case is certainly the cleanest one from a thought experiment standpoint.

When you add stationary objects, it gets more complicated. Those objects have also moved relative to the LIDAR array. Have they moved relative to the car's predicted path?

What happens when a detected object (such as the small child) moves behind a stationary object (phone booth, mailbox, whatever) and is missing from the data set for a few frames? How does the system handle motions which deviate from what it 'predicts'? That's where things get difficult.
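
For what it's worth, one common way to handle a track that vanishes behind an occluder for a few frames is to 'coast' it on its last velocity estimate for a limited number of misses before dropping it. A minimal constant-velocity sketch in Octave; the frame rate and miss limit are assumed values, not anyone's actual design:

CODE --> Octave

% Minimal track-coasting sketch: when a tracked object is missed,
% predict it forward on its last velocity estimate instead of
% dropping it immediately. All parameter values are assumed.
dt = 0.1;            % frame period, s (10 Hz assumed)
max_misses = 5;      % frames to coast before the track is dropped

% Track state: position (m), velocity (m/s), consecutive misses.
track = struct("pos", [20; 3], "vel", [0; -2], "misses", 0);

for frame = 1:8
  detected = (frame <= 3 || frame >= 7);  % occluded in frames 4-6
  track.pos = track.pos + track.vel * dt; % propagate the estimate
  if detected
    % (a real system would update pos from the measurement here)
    track.misses = 0;
    status = "measured";
  else
    track.misses = track.misses + 1;      % coasting on prediction
    status = "coasted";
    if track.misses > max_misses
      status = "dropped";                 % persistence exhausted
    end
  end
  printf("frame %d: pos = (%.1f, %.1f)  %s\n", ...
         frame, track.pos(1), track.pos(2), status);
end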

RE: Self Driving Uber Fatality - Thread II

I had a programming project about 30 years ago... it was a camera on a fairly flexible 'flagpole' in a parking lot. I had to 'fix' the parking lot in space and accommodate wind-blown changes to keep a steady-state background. On this background, I had to detect any changes and trigger an alarm... same sort of an issue, where movement has to be detected on a fixed, but moving, background.

It was a fun project... I later did my first attack resistant reception for this same firm.

Dik

RE: Self Driving Uber Fatality - Thread II

That's an interesting analogy, certainly.

What image processor were you using, or did you write one yourself? How did you handle filtering of the altered POV once the images were converted to the frequency domain (assuming that's the method used...)?

RE: Self Driving Uber Fatality - Thread II

It's fuzzy, but it was in the early days of EGA, and a lot of the programming was in assembly, talking directly to the CRTC controller for real-time speed.

Dik

RE: Self Driving Uber Fatality - Thread II

"When you add stationary objects, it gets more complicated. Those objects have also moved relative to the LIDAR array. Have they moved relative to the car's predicted path?"

This was solved in the helicopter OASYS by placing each detected object into a virtual world within which the vehicle moved. Each object is time-tagged for a certain level of persistence, so that if objects are moving into your path, you potentially have the option to stop or to move into space they have vacated.
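
Roughly, that bookkeeping amounts to a world-fixed store of detections, each carrying a time tag, expired only after a persistence window. A toy Octave sketch; the 2-second window is an assumed value, not the OASYS figure:

CODE --> Octave

% Toy time-tagged object map: every detection is stored with a time
% tag and only expires after a persistence window, so recently
% vacated space is remembered. The 2 s window is an assumed value.
persistence = 2.0;                      % seconds

obj_pos = [30 12; 1 0];    % one column per object (x; y), metres
obj_t   = [0.0 1.5];       % time tag of each detection, seconds

t_now = 2.5;
keep  = (t_now - obj_t) <= persistence; % expire stale entries
obj_pos = obj_pos(:, keep);
obj_t   = obj_t(keep);

printf("objects still in the world at t = %.1f s: %d\n", ...
       t_now, numel(obj_t));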

Note that for a typical car, a 2-ft tall obstacle is gigantic. Anything taller than about 6 inches is already problematic. Just consider what would happen if you hit a curb at 40 mph; there's the potential of serious breakage of your own car, and the possibility (if only one wheel hits) of being diverted into adjacent or opposing lanes. There's a YouTube channel of a low underpass coupled with a curve in the curb, where vehicles miss the curve and either die of broken axles or get propelled out of their own lanes.

People tend to think that it's relatively easy to create an AI to drive because people can drive almost without thinking about it. But AIs have yet to be fully capable of even just analyzing images alone; even after four decades of image recognition research, the AIs are barely able to do a tolerable job, and there are images that a child could figure out that an AI can't. The human brain, in addition to parallel processing and fusing all its sensory inputs, has a massive associative memory capability that can pull up context and history that aid us in detecting and classifying threats.

Likewise, consider how easy it is for us to walk, and how hard it is for robots to do the same.

For an AI car, there are currently only four sensor types that can do much of anything: sonar, camera, lidar, and radar. Much hope has been placed on radar, but the reality is that the wavelength of radar is so long, even at 95 GHz, that a phased array is possibly the only solution to get sufficient resolution. If we consider the 6-inch curb at the nominal stopping distance for 40 mph, we need a 46-in wide antenna at 95 GHz to fully resolve that curb at 76 ft. Any aperture smaller than that will result in blobs and unresolved objects. Sonar is limited to around 30 ft. Cameras could do 3D, but that requires multiple cameras and massive image processing, and they are limited by lighting. Lidar supplies its own light, and can easily achieve the 6.6 mrad resolution needed to detect a 6-in curb at 76 ft.
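
Those curb numbers check out; here is the arithmetic in Octave. The 2.44 factor is the null-to-null beamwidth of a uniformly illuminated circular aperture, which appears to be the resolution criterion behind the 46-in figure:

CODE --> Octave

% Sanity check on the curb-resolution numbers above.
% 6-inch curb at the ~76 ft stopping distance for 40 mph:
h_curb = 6 * 0.0254;            % m
R      = 76 * 0.3048;           % m
theta  = h_curb / R;            % angular subtense, rad
printf("curb subtends %.1f mrad at 76 ft\n", 1e3 * theta);

% Diffraction-limited aperture at 95 GHz. The constant depends on
% the resolution criterion; 2.44*lambda/D (null-to-null beamwidth
% of a circular aperture) reproduces the 46-in figure.
c      = 3e8;
lambda = c / 95e9;              % ~3.2 mm
D      = 2.44 * lambda / theta;
printf("required aperture: %.2f m (%.0f in)\n", D, D / 0.0254);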

Every single detection is a potential threat at some point in time or path, so a system architect for an AI needs to consider the same things that were considered for the helicopter OASYS, namely, context, history, persistence, etc. of all detected objects. Google doesn't even do that on Google maps, and it gets easily confused in freeway interchanges, where it suddenly thinks that you are on a different part of the interchange, precisely because it throws away all history about you the instant it gets a new GPS position. I suspect that most car AIs have similar behavior and similar lack of awareness of where it was, what was around it a second ago, etc.

That comes from poor systems engineering, and "backup sensing" is simply applying a poor patch on a poor design. If you need to do that, the design should be ripped up and restarted.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

(OP)
IRstuff,

I am playing with a spreadsheet here, and I need to check my numbers and my logic. If my vehicle is doing 80 kph (50 mph), my 1/2 G stop takes 50 m (165 ft). My laser can run at 1.5 MHz, I can scan at 10 Hz, and my spot size at 50 m is 270 mm (11 in). A six-inch object will cause some sort of LiDAR return, but it will be a weak one. I think a six-inch curb across the road will have a distinct signature.
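
For reference, those figures check out in a few lines of Octave:

CODE --> Octave

% Checking the 80 kph numbers quoted above.
g = 9.81;
v = 80 / 3.6;                   % 22.2 m/s
d = v^2 / (2 * 0.5 * g);        % 1/2 G stop
printf("stopping distance: %.1f m (%.0f ft)\n", d, d / 0.3048);

spot = 0.270;                   % quoted spot size at ~50 m
printf("implied beam IFOV: %.1f mrad\n", 1e3 * spot / d);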

--
JHG

RE: Self Driving Uber Fatality - Thread II

I think it is important that we differentiate between, and stay mindful of, the different levels of autonomy as we continue this discussion.

https://www.caranddriver.com/features/path-to-auto...

Quote:

Because no two automated-driving technologies are exactly alike, SAE International’s standard J3016 defines six levels of automation for automakers, suppliers, and policymakers to use to classify a system’s sophistication. The pivotal change occurs between Levels 2 and 3, when responsibility for monitoring the driving environment shifts from the driver to the system.

Level 0 - No Automation
System capability: None. • Driver involvement: The human at the wheel steers, brakes, accelerates, and negotiates traffic. • Examples: A 1967 Porsche 911, a 2018 Kia Rio.

Level 1 - Driver Assistance
System capability: Under certain conditions, the car controls either the steering or the vehicle speed, but not both simultaneously. • Driver involvement: The driver performs all other aspects of driving and has full responsibility for monitoring the road and taking over if the assistance system fails to act appropriately. • Example: Adaptive cruise control.

Level 2 - Partial Automation
System capability: The car can steer, accelerate, and brake in certain circumstances. • Driver involvement: Tactical maneuvers such as responding to traffic signals or changing lanes largely fall to the driver, as does scanning for hazards. The driver may have to keep a hand on the wheel as a proxy for paying attention. • Examples: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Tesla Autopilot, Volvo Pilot Assist.

Level 3 - Conditional Automation
System capability: In the right conditions, the car can manage most aspects of driving, including monitoring the environment. The system prompts the driver to intervene when it encounters a scenario it can’t navigate. • Driver involvement: The driver must be available to take over at any time. • Example: Audi Traffic Jam Pilot.

Level 4 - High Automation
System capability: The car can operate without human input or oversight but only under select conditions defined by factors such as road type or geographic area. • Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets then become a passenger as the car enters a highway. • Example: Google’s now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.

Level 5 - Full Automation
System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate. • Driver involvement: Entering a destination. • Example: None yet, but Waymo—formerly Google’s driverless-car project—is now using a fleet of 600 Chrysler Pacifica hybrids to develop its Level 5 tech for production.

The question was asked of me in the previous thread, "If you're comfortable with these systems having a nonzero rate of failure... exactly what type of failures, then, are not newsworthy?" And the answer to that question depends on what level of automation the vehicle is capable of, as well as on what the automaker is MARKETING/ADVERTISING/SUGGESTING it is capable of.

In both of Tesla's noteworthy accidents, it's not so much a failure of the system (it's only Level 2) as it is the operation of the system. And Tesla bears direct responsibility for that, in my opinion. Note the names of the other Level 2 systems: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Volvo Pilot Assist. See how ASSIST is prominently featured? The exception is Super Cruise, which is noteworthy because it won't change lanes (the driver still has to do that), because it is geofenced (it will only operate on limited-access, HD-mapped roads [no intersections allowed]), and because it has robust, ACTIVE driver attention monitoring. Any two of those three would have prevented the Tesla from driving into the side of the semi or the median wall. So there's the unacceptable engineering failure. There's what's newsworthy about them. And that's all the more the case when Tesla is ostensibly encouraging their systems to be abused in this fashion by touting them as self-driving "autopilot" cars. Musk can launch a car into space, but he can't take simple steps to make his systems safer? C'mon, man. Of course, it would be a lot tougher to blow marketing smoke up everyone's ass if he did that.

In this Uber incident, it is not clear what level of autonomy they are trying to accomplish. Though since having a driver in the car might be viewed as an expense they would like to eliminate, it can be assumed that they are trying for Level 5, or maybe Level 4. I don't think it is unreasonable to expect a Level 4 or 5 vehicle to be able to manage avoiding the things that most attentive, actively-engaged drivers would. So this incident is noteworthy because the SUV failed miserably at something a fully-automated vehicle should be able to manage. This was not something appearing out of nowhere from behind a blind corner, or moving erratically, or otherwise obscured. This is further a newsworthy failure because these vehicles are being operated in the public sphere with no (effective) safeguards in place. As long as these vehicles are being "tested" on public roads, they should be equipped with the same driver attention monitoring systems as Cadillac's Level 2 systems. And since they aren't, I'd have no problem with Uber and/or the convicted-armed-robbery-felon backup driver being charged with negligent homicide. Once they are functional, and can deal with bikes crossing the road and kids running behind a mailbox for 0.25 seconds before emerging in front of them, then set them loose.

RE: Self Driving Uber Fatality - Thread II

The Google car that was hit by a bus was stopped, trying to avoid a 3-4 inch tall sand-filled sock that had been placed to divert silt from a storm drain.

RE: Self Driving Uber Fatality - Thread II

I don't think I said it couldn't. Assuming that the lidar is probably the most useful, overall, of the sensors we could have, it needs to detect potential obstacles out past about 365 ft, giving the system about 0.5 seconds of reaction time at a maximum speed of 80 mph.

But not all obstacles are across the road, per se. The YouTube channel's curb is actually on the side of the road, and people routinely fail to see it until their wheels hit the protrusion. There may be dips or rises in the road that obscure the potential obstacle until you're well past the safe stopping distance, or it might be much narrower, such as the wheel of a small car, which might be about 6 inches tall and 14 inches across.

And if you saw it once, but not later, did it move, or did it get obscured by something else in the meantime?

I think there's been much confusion about "classification," because in my mind, anything taller than 4 inches is already a potential hazard. I wouldn't architect a system that would manage to forget or ignore such detections.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (Spartan5)

I think it is important that we differentiate between, and stay mindful of, the different levels of autonomy as we continue this discussion.

I agree it's good to be mindful of exactly what we're talking about.

Quote (Spartan5)

I don't think it is unreasonable to expect a Level 4 or 5 vehicle to be able to manage avoiding the things that most attentive, actively-engaged drivers would

Not at all. I'm convinced, in fact, that for the general public to be vocally accepting of Class 5 vehicles on a large scale, the rate of injury and fatal accidents will have to be MUCH lower than the human-piloted rates at the time Class 5 operation actually becomes a possibility. From the standpoint of convincing the average midwestern dirt farmer that the shiny new robot car is safe, saying "It's just as safe as the median American driving the median car on the median freeway!!!!" won't be anywhere near enough.

That is a PR problem, not an engineering one. I digress.

As far as uber's ultimate goal for the level of automation attained by their equipment- I don't doubt for a single second that their ultimate goal is Class 5 operation. As you've already identified, this requires on-road testing of vehicles fully equipped for Class 5 operation but operating in a tier one or two or more steps down while gathering data. I agree that letting these systems operate without any real safeguarding is a recipe for disaster.

I wouldn't be terribly surprised if Class 2 or 3 or 4 systems wind up with government-mandated 'attentiveness monitoring' systems. Tesla's method of just having to have your hand on the wheel isn't very robust.

RE: Self Driving Uber Fatality - Thread II

"I wouldn't architect a system that would manage to forget or ignore such detections."

You can't track "something" until you've determined it's "something" that needs to be tracked. You can call it classification or something else, but determining there is "something" out there that is of concern is the first step.

To make the load on the system easier, the designers certainly are deciding to ignore certain data once it's been deemed unimportant to the task of getting the car safely down the road. That way, they have enough processing power to process the important data.

RE: Self Driving Uber Fatality - Thread II

(OP)
Spartan5,

I don't see those six levels mattering in an emergency. Car accidents happen within a couple of seconds. There is no time for a safety driver to put down a book and scan out the window and over the instruments to figure out what is happening. In my scenario above, you are not much more than two seconds away from killing a small child. Either the car is a full-time robot, or the human is full-time in charge and responsible.

--
JHG

RE: Self Driving Uber Fatality - Thread II

I would expect there is a very different public acceptance level between a person getting hit after they walked across 40' of open pavement vs a person getting hit after they jumped right into the bumper of a car from behind a big solid object. After the public sees a few accidents that appear to be very simple to avoid, they won't have a very warm fuzzy feeling about the current systems.

RE: Self Driving Uber Fatality - Thread II

"To make the load on the system easier, the designers certainly are deciding to ignore certain data once it's been deemed unimportant to the task of getting the car safely down the road. "

That's not been proven or even likely in the collision in question.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (IRstuff)

That's not been proven or even likely in the collision in question.

This is absolutely what happens. If the system tried to plot trajectories for every object detected in range to the same level of fidelity, there wouldn't be nearly enough processing power to predict everything.

Objects which are close, large, and moving fast are at the top of the list with regard to fidelity of path estimation 'requested' from the processing stage.

I am convinced that both the Uber and Tesla failures under primary discussion here were the result of failures of the software to correctly prioritize predictions, not failures of the hardware to detect objects.

This is why there was the deep dive into the meaning of the word 'classification' in the last thread, which I'd rather not revisit here.

RE: Self Driving Uber Fatality - Thread II

"Either the car is a full-time robot, or the human is full-time in charge and responsible."

That is true not only from the perspective of safe operation, but from a liability standpoint as well. Either the driver is responsible for being continuously aware of potential dangers and reacting to them at a moment's notice, or the car is. The cutoff for me is the point where the car is steering itself, because the driver is no longer engaged in operating the vehicle. Emergency braking, lane departure warnings, etc. are all great driver assistance features, but once the car is doing the driving, it has to be able to do it all as well as a human driver who is capable and attentive. The technology is nowhere close to that now, and I have my doubts about it being there anytime in the near future. I know I may sound like a nut when I say this, but as I said before, an AI that sophisticated poses a greater danger to humanity than a few car wrecks.

RE: Self Driving Uber Fatality - Thread II

Quote (jgKRI)

If the system tried to plot trajectories for every object detected in range to the same level of fidelity, there wouldn't be nearly enough processing power to predict everything.

If a system can handle a very busy street, then it should have enough processing power that it can pay attention to a single pedestrian on an otherwise deserted street.

You may recall that even the Apollo 11 LM computer in 1969 did an excellent job in prioritizing.

It'd be a design flaw if it was overzealous in ignoring the one and only moving object about to intersect, given that it should have had not much else to do.

This wasn't Shibuya Crossing in Tokyo.



RE: Self Driving Uber Fatality - Thread II

Quote (drawoh)

Either the car is a full-time robot, or the human is full-time in charge and responsible.

I would disagree... I think you can have a balance of the 'best of both worlds'. With driver assist I can see a driver being lulled into a false sense of security; under all conditions the driver must be attentive and in control.

Dik

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

I know I may sound like a nut when I say this, but as I said before, an AI that sophisticated poses a greater danger to humanity than a few car wrecks.
I tend to agree... not with you sounding like a nut, but with the danger.
Speaking of nuts, I think feasible AV technology needs a playing field that resembles Disneyland's Autopia. Successfully tackling, with high probability, the entire real-world range of possibilities that a free-roaming automobile might encounter, not excluding nefarious traps and ruses that might be staged in the path of an AV, is a pretty big nut to try to crack. Bigger, probably, than is cost-effective with today's and foreseeable technology. And that brings us back to HotRod10's concern.

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz

RE: Self Driving Uber Fatality - Thread II

Quote (VE1BLL)

If a system can handle a very busy street, then it should have enough processing power that it can pay attention to a single pedestrian on an otherwise deserted street.

I agree. My post was an explanation of how these systems work, not an explanation of a failure mode that led to the Uber accident.

Quote (VE1BLL)

It'd be a design flaw if it was overzealous in ignoring the one and only moving object about to intersect, given that it should have had not much else to do.

I'd agree- and this type of flaw appears to be what all the fuss is about. And rightfully so.

RE: Self Driving Uber Fatality - Thread II

"That's not been proven or even likely in the collision in question."

Sure, it's not proven, and the cause will likely never be publicly released unless the NTSB forces it out of Uber.

But, if the system had decided there was something in the data which indicated an important object that should be tracked, then it would have been tracking said object. If said object was being tracked as travelling into its path, that would have led to the system doing something to try to avoid said object. So, my expectation is that wrong classification of the data is exactly what happened. My guess would be that the data representing the woman was included as part of the data representing the background vegetation. That type of data would be filtered out as data that can be ignored, since data from background vegetation isn't a concern when the task is to drive a car down a street. Or, as I put it way back, the AI probably decided she was a bush, and bushes can be safely ignored.

I'd think it's way less likely for the system to have processed the data and determined there was an object in front of the car yet done nothing about said object being in front of the car.


Now, after saying data from background vegetation is likely being ignored, that does create an interesting question of what would happen when the car approached a big tree limb or something else similar in its path.

RE: Self Driving Uber Fatality - Thread II

(OP)
I have been working this out in Octave. The car is travelling at 100 kph.

CODE --> output

Velocity:  27.8 m/s
Stopping distance:  78.7 m
       FOV across:   698e-3 rad
           FOV up:   175e-3 rad
 Laser pulse rate:  1.00e6 Hz
  LiDAR scan rate:  10.0 Hz
Stopping distance:  78.7 m
        LiDAR FOV:  2.21e-3 rad
       LiDAR scan:  632x158 = 99856
  LiDAR spot size:   174e-3 m
Sanity Check: Laser TOF should be shorter than laser period. 
   Time of flight:   524e-9 s
     Laser Period:  1.00e-6 s
           Factor:  1.91  Okay. 

I am getting a much smaller spot size than I thought. This is good. The attached code is Octave, but it is supposed to execute in MathCAD.
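
The attachment itself is not reproduced here, but a short Octave script along the following lines reproduces the printout; the spot diameter being twice the angular pitch (i.e., 50% overlap between adjacent spots, per item 6 of the scenario) is inferred from the printed numbers rather than stated in the post:

CODE --> Octave

% Reconstruction of the calculation behind the printout above.
g  = 9.81;
c  = 3e8;
v  = 100 / 3.6;                 % 27.8 m/s
d  = v^2 / (2 * 0.5 * g);       % 78.7 m, 1/2 G stop

fov_x = 40 * pi/180;            % 698e-3 rad across
fov_y = 10 * pi/180;            % 175e-3 rad up
prf   = 1.0e6;                  % laser pulse rate, Hz
fps   = 10;                     % LiDAR scan (frame) rate, Hz

n    = prf / fps;               % pulses available per frame
nx   = floor(sqrt(n * fov_x / fov_y));  % 632 spots across
ny   = floor(n / nx);                   % 158 spots up
ifov = 2 * fov_x / nx;          % spot diameter, 50% overlapped
spot = ifov * d;                % spot size at stopping distance

tof    = 2 * d / c;             % round-trip time of flight
period = 1 / prf;
printf("Stopping distance: %.1f m\n", d);
printf("LiDAR scan: %dx%d = %d\n", nx, ny, nx * ny);
printf("LiDAR FOV: %.2e rad, spot size: %.3f m\n", ifov, spot);
printf("TOF %.0f ns vs period %.0f ns, factor %.2f\n", ...
       1e9 * tof, 1e9 * period, period / tof);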

--
JHG

RE: Self Driving Uber Fatality - Thread II

> Even a bush, particularly one that's 4 feet tall, can do serious damage to the car, and is likely to give the driver whiplash.

> No sane person would assume that a bush that big won't cause damage to the car. Some bushes are designed to hide even more solid things, so again, potential for serious damage to the car

> Bushes don't suddenly appear in the middle of traffic lane, and if they did, they might have safety curbs that could cause damage to the car.

> This bush moved across two traffic lanes in the time the lidar should have been able to detect the target. That potentially implies a bush on a cart, which is a risk for serious damage to the car

> Because of the latency from lidar frame to lidar frame, each frame results in a new detection, so the AI didn't ignore "a" bush, it ignored multiple bushes popping out of the pavement, which implies trapdoors that might cause serious damage to the car.

Conclusion: the AI wanted to collide with the bush because it wanted to hurt itself.


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

You're free to conclude whatever silly explanation you want.

As to your points: if the car had decided the data indicated there was an object in front of it, then it would have taken avoidance measures. Or, working backwards, the car didn't take any avoidance measures, so the most likely conclusion is that the sensor data processing algorithm didn't return a result indicating an object of concern was in front of the car. As you've pointed out already, the sensor will always return data from the objects in front of the car (the road itself being the minimum thing), so the processing does have to determine what is of concern, not just that there is something there.

There is no reason to expect the processing decision to classify the data as background vegetation would change over time. If the mistake was made once, it can just as easily be made multiple times.

And to go back to the simplistic, if bushes are only programmed as a thing that can be ignored then they will be ignored. It doesn't matter where the bush is, it is a bush and the AI was told to ignore bushes so it does as it was told and ignores bushes.

And the car is not a person; I'd think that fact was well established by now. It can't do anything that it hasn't been programmed to do, or hasn't been "taught" to do, if you prefer to say that. It doesn't know that bushes could cause serious damage, or that bushes could be on wagons, or that bushes could have curbs around them, if it hasn't been programmed to know those things.

Conclusion: You're just posting silly crap to be argumentative.

RE: Self Driving Uber Fatality - Thread II

Quote (lionelhutz)

You're free to conclude whatever silly explanation you want.

Who are you directing your message to?

Dik

RE: Self Driving Uber Fatality - Thread II

I just noticed that the HDL-64E that was cited in the original thread is from March 2008. The current version of the HDL-64E is here, which states that it can do 1.3 Mpps, with 2.5x better range resolution.

re: Time of flight: 524e-9 s
Laser Period: 1.00e-6 s
Factor: 1.91 Okay.

"Factor" needs to be based on the maximum range, which is 120 m, so TOF is 801 ns. Nevertheless, there's no interference with the next pulse because the PRF of each laser is only about 20 kHz (1.3 MHz/64). Although the top and bottom blocks fire one pulse each simultaneous, there's no interference because the spots are about half the vertical FOV apart.

The HDL-64E's vertical FOV and IFOV are fixed by the optomechanical design. The horizontal angular resolution is dictated by the frame rate, ranging from 1.55 mrad at a 5-Hz frame rate to 6.19 mrad at 20 Hz. But the horizontal IFOV of the receivers needs to accommodate the largest angular resolution, so it's at least 6.2 mrad, which only really affects the noise floor of the receiver. The receiver IFOV might be even larger to accommodate the walking of the laser return due to the scan speed of the laser head, although it's possible that the lasers are aligned to the leading edge of the receiver IFOV, so that at max range the return ends up on the trailing edge of the receiver IFOV.

Note that while the horizontal FOV is seemingly programmable, that does not change the firing rate, so all that happens is that the unit does not transmit data from outside of the programmed FOV. Note also that the HDL-64E does not process the data at all; calibration and point-cloud processing are done by a user-supplied external processor.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

My point is that bushes can never be allowed to be ignored, because even if they weren't in the current path, they could be later on, just like the car in the adjacent lane can never be ignored, in case you need to perform an emergency maneuver, in which case it would be absurd to have ignored it previously.

I don't think for a second that the processor would ever "ignore" any sizable target, simply because it can never know enough about what's hiding in the shadow of a detected object. To wit, we often see sports teams crashing through sheets of paper, but that's because they have explicit and verified knowledge that there's nothing on the other side of the paper. So, even if the lidar and the object processor detected a large sheet of paper in its path, it cannot ignore it, because it can't see behind the paper to the boulder that might be behind it.

If you want a more plausible explanation to believe in, it would be more likely that the processor got confused and placed all the detected objects in the wrong places in its world model. Or, the processor managed to erroneously program the HDL-64E's FOV to not include the front of the vehicle, so that it never received any detections from the lidar at all.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

My suggestion of the reflectance threshold being set too high is not the way things are done? Just checking.

I assume that at some deeper level for an L4 car, an integrated 2D world is assembled from the various sensors, and that this integrated picture is what the AV driver actually uses to decide whether to brake, steer, or accelerate. The other way is to build behaviours up, so there's a braking module, a lane following/changing module, and a speed control module, all working off different sensors as needed, which may be more 'evolutionary' in approach; perhaps that is how Tesla's system works.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread II

"My point is that bushes can never be allowed to be ignored"

What the system should be doing and what it is doing are two different things.

Ignoring the data that appears as vegetation on the sides of the road and continuing to look for other objects on the side of the road can both be accomplished as the data is processed.

If the processor had mistakenly shifted its "view" of the world, then the car would not have been properly driving down the center of the lane. Wrongly programming the LIDAR unit doesn't sound more probable than a data processing error.

RE: Self Driving Uber Fatality - Thread II

Quote (LionelHutz)

As you've pointed out already, the sensor will always return data from the objects in front of the car

There's an important distinction to be made here- the hardware will always receive returns from whatever is in front of the car. But based on how the hardware is calibrated and how the processing is set up, it is possible for an object directly in front of the car to never register as a detection. As an example- Greg's point about the reflectance threshold of the LIDAR array being set too high, resulting in a low-reflectance object not being passed on to the processor regardless of distance or closing speed.
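
As a cartoon of that threshold failure mode: for an extended target, return power falls roughly as reflectivity over range squared, so a fixed threshold tuned so that 20% reflective pavement is detectable out to 60 m will silently drop a 2% reflective target even at much shorter range. A hedged Octave sketch, with every constant invented for illustration:

CODE --> Octave

% Cartoon of a fixed reflectance threshold dropping a dark target.
% Return power ~ reflectivity / R^2; all constants are invented.
k = 1.0;                          % lumped system constant
R = [20 40 60];                   % ranges to test, m
rho_pavement = 0.20;              % 20% reflective pavement
rho_target   = 0.02;              % dark-clothed pedestrian, assumed

P_pav = k * rho_pavement ./ R.^2;
P_tgt = k * rho_target   ./ R.^2;

thresh = k * rho_pavement / 60^2; % tuned: pavement visible to 60 m

labels = {"missed", "detected"};
for i = 1:numel(R)
  printf("R = %2d m: pavement %s, dark target %s\n", R(i), ...
         labels{1 + (P_pav(i) >= thresh)}, ...
         labels{1 + (P_tgt(i) >= thresh)});
end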

Depending on exactly how all three systems are calibrated, I think it is possible for a 'coffin corner' to exist where certain ambient conditions, combined with an object or pedestrian bearing certain characteristics, could cause all three systems to fail to correctly determine whether the detected object needed to be tracked.

I don't think that's what happened here but what I think is, obviously, just conjecture.

RE: Self Driving Uber Fatality - Thread II

Now that I think about it... was she pushing the bike in front of her, with hands on the handlebars?

There are certain objects which will create data that the system will have difficulty resolving into an object at all- someone (IR I think) already mentioned that a bicycle is potentially invisible to LIDAR depending on distance.

If the processing system receives a frame from the sensor array that contains an area of ambiguous data, it is highly likely for there to be a routine which effectively crops out this portion of the frame (by truncating the output of the frequency domain conversion).

This is a necessary function, so that the system doesn't immediately fail if (when), for example, a rain drop hits the optics and causes a blurry spot on the images being processed.

I'm wondering if the front half of the bike which was visible to the sensor array- front wheel/tire, front half of the frame, was detected by the system but not resolvable, leading to the system responding by truncating this object out of each frame as it moved. This would, in turn, not cause the system to 'incorrectly process' the detection data for the bicycle- it would cause the system to not even try.

This failure mode, if realistic, is still the result of system design error by humans. The truncating operation is necessary, but the conditions which cause this truncation, and the width of the window around the ambiguous data to be truncated, are determined by the programmer.
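
To make the conjecture concrete, a toy version of that kind of ambiguity masking might look like the sketch below. The spread test, the guard window, and every numeric value are invented for illustration; nothing here is taken from an actual system:

CODE --> Octave

% Toy version of the conjectured ambiguity-masking step: cells whose
% returns are too noisy are cropped out (with a guard window) before
% object extraction. All thresholds and windows are invented.
range  = [50 50 50 12 11 50 50 50];   % per-cell range returns, m
spread = [ 1  1  1  9  8  1  1  1];   % per-cell return spread, m

noisy = spread > 5;                   % ambiguity test (assumed)
guard = 1;                            % also crop +/-1 cell around it
mask  = noisy;
for i = find(noisy)
  lo = max(1, i - guard);
  hi = min(numel(mask), i + guard);
  mask(lo:hi) = true;
end

kept = range(~mask);
printf("cells kept for object extraction: %d of %d\n", ...
       numel(kept), numel(range));
% Note: the nearby 11-12 m returns (a possible obstacle!) were
% cropped along with the noise -- which is exactly the danger
% described above.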

RE: Self Driving Uber Fatality - Thread II

The bike isn't relevant; the pedestrian, by herself, should have presented a more than valid and substantive target. The lidar is specified to have at least 60-m range against pavement with 20% reflectivity. A person wearing black clothing should be very visible at 60 ft; the lidar should be able to see a reflected signal from a 2% reflective surface.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (IRStuff)

The bike isn't relevant; the pedestrian, by herself, should have presented a more than valid and substantive target. The lidar is specified to have at least 60-m range against pavement with 20% reflectivity. A person wearing black clothing should be very visible at 60 ft; the lidar should be able to see a reflected signal from a 2% reflective surface.

Unless the bicycle created some ambiguity which in effect 'confused' the processor, and caused the woman to be truncated out because of a wide error clearing window.

Not highly probable, admittedly- but not impossible.

RE: Self Driving Uber Fatality - Thread II

(OP)
IRstuff,

I am not looking at a specific LiDAR. I am describing a generic one appropriate for a robot car. In my model, at 100 kph, the robot must collect enough info on an object 80 m away in case it must come to a full halt without being rear-ended.

A 360° LiDAR will have poor resolution at critical decision distances when travelling at speed. You need an additional forward LiDAR with a limited field of view and high resolution. The scanner FOV is a function of how you design your scanner.

--
JHG

RE: Self Driving Uber Fatality - Thread II

(OP)
jgKRI,

I was the one who pointed out that the LiDAR would see through the bicycle. It would get a relatively weak return from the bicycle, and then it would see the background behind the bicycle. This is not necessarily a bad thing. A bicycle potentially has a unique signature that tells the AI how fast it is capable of moving.

--
JHG

RE: Self Driving Uber Fatality - Thread II

and, a more sobering thought... They won't be the last fatalities...

Dik

RE: Self Driving Uber Fatality - Thread II

"at 100kph"

In the US, we drive WAY faster. This morning I got buzzed by a motorcycle doing at least 100 MPH.

When we worked on OASYS, we only had a forward-looking lidar with a 50-deg x 25-deg FOV. Turning proved quite scary. If all you ever did was straight-line travel, side looking wouldn't come up as an issue. But if you decide to slow down and turn, the newly detected objects AND the previously detected objects become significant, particularly with regard to low objects, since they tend to fall into the blind zone of the lidar, which is about 9 ft in radius.

In order to make full use of the pulse rate of the lasers in a reduced FOV, you'd have to give up on the 360 scan, which tends to drive the design to a mirror scanner. However, mirror scanning systems tend to be noticeably larger in volume, and you'd need a minimum of two lidars, one on each side of the car.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

(OP)
IRstuff,

Your LiDAR will impose a maximum speed on your car. If the LiDAR cannot see and identify the hazard, it must be moving slowly enough that it can react when the hazard becomes visible. Fog and curved roads are both an issue.

I figure that a 40° system pointed straight forward, plus a 360° system, should work. You would need a way to identify things right next to you. I am focused on three-year-old kids in front of you. If you are pulling out onto a highway, the things that will hit you are large enough to be detected by a lower-resolution LiDAR.

--
JHG

RE: Self Driving Uber Fatality - Thread II

"Your LiDAR will impose a maximum speed on your car. If the LiDAR cannot see and identify the hazard, it must be moving slowly enough that it can react when the hazard becomes visible. Fog and curved roads all are an issue. "

Yes, which is why the maximum detection range needs to be at least 120 m. OASYS had a 400-m detection range for a nominal 120-kt speed, but the Cobra was way more maneuverable than a car in some situations. Fog and curves are separate issues. Lidar will suck in heavy fog, regardless; the maximum range is severely reduced, although an APD-based design, or a longer wavelength, could possibly get you better performance. Otherwise, radar is king in fog. This is why a single sensor modality is suicidal.

Curves are what they are; if there are obstructions, you'll have to slow down. Otherwise, biasing the FOV into the curve is the most plausible approach, just like steerable headlights. That was the plan for OASYS as well.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (jgKRI)

If the processing system receives a frame from the sensor array that contains an area of ambiguous data, it is highly likely for there to be a routine which effectively crops out this portion of the frame (by truncating the output of the frequency domain conversion).

This is a necessary function, so that the system doesn't immediately fail if (when), for example, a rain drop hits the optics and causes a blurry spot on the images being processed.
This has to be a fail-SAFE system. One cannot simply ignore a portion of the frame because the data coming back from it is odd. If that were how the system was put together, a simple bit of mud would flummox the entire system, and it would drive on at full force with zero reaction to anything... "If I can't see it, I can pretend there's nothing ever there!" Nope, not gonna happen.

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Self Driving Uber Fatality - Thread II

It's not yet clear whether all the required sensors and modalities have been fully fleshed out. Cameras have way more resolution than lidars ever will, and they can potentially see through car windows for lurkers. We'd do the same, if we could afford the distractions, which we can't, so we don't, and we take the risk of a pop-up target. I could sort of imagine wanting lidars mounted under the front bumpers so that they could see under parked cars; that would possibly provide some ability to see behind the shadows cast by the parked cars in the lidar and camera.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (MacGyverS2000)

One cannot simply ignore a portion of the frame because the data coming back from it is odd. If that were how the system was put together, a simple bit of mud would flummox the entire system

On the contrary- under certain conditions one must ignore portions of the frame. This is mandatory for any system which uses optics of any kind- whether they be LIDAR, visual range, infrared, whatever.

It is impossible to design a system in which the optics will never, under any circumstances, be fouled. It is completely impossible.

If the optics are fouled, the system must be capable of handling that condition, period.

Trimming frames is absolutely mandatory. Determining exactly what conditions should be allowed to trigger the frame trimming operation is very, very delicate, as frame trimming COULD result in this type of incident under very specific conditions.

As another disclaimer- I do not know with any certainty that this type of image processing error was the root cause of the Uber accident. It is just one of many possibilities.

Quote (MacGyverS2000)

and it would drive on at full force with zero reaction to anything... "If I can't see it, I can pretend there's nothing ever there!" Nope, not gonna happen.

In any class of autonomous vehicle operating, there will always be some set of conditions in which the vehicle will cease to operate.

Below Class 5, this means detecting anomalies which prevent reliable system operation, and returning control to the driver as immediately as possible.

IF an image processing error due to ambiguous data was the root cause of one of these incidents, then both design teams are on the hook for not building the architecture to return control to the driver under these conditions.

Once again, this is purely speculative.

RE: Self Driving Uber Fatality - Thread II

In the case of the Velodyne design, fouling of any sort pretty much wipes out complete sectors of lidar FOV. Unlike the camera, it's nearly impossible for fouling to generate false returns, although I've been wondering how small raindrops on the exterior of the Velodyne system will affect its line-of-sight (LOS) performance; it counts solely on centrifugal force to shed rain from its apertures. In SoCal, I guess it'll force me to wash the car regularly, or at least, the lidar.

Certainly, not getting any returns is an indication of sensor misbehavior. Another thing that might be lacking is intensity returns. The Velodyne only outputs range; target intensity could provide useful information for processing targets.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

The Velodyne returns intensity.

"Additionally, state-of-the-art signal processing and waveform analysis are employed to provide high accuracy, extended distance sensing and intensity data."

RE: Self Driving Uber Fatality - Thread II

And another report of a self driving car hitting a median wall. This time it appears it lost track of the lane markings.

RE: Self Driving Uber Fatality - Thread II

Missed this one of a Tesla driving into a median wall from last year as well (gif/video):
https://boygeniusreport.files.wordpress.com/2017/0...

https://electrek.co/2017/03/02/tesla-autopilot-cra...

The solution to these Level 2 autonomy problems seems very straightforward as Cadillac has demonstrated. It's as simple as active-driver monitoring and geofences to only allow system activation in well-mapped and controlled access environments.




Seems pretty stupid for a smart car.

RE: Self Driving Uber Fatality - Thread II

I have been following the progress of self-driving cars since the DARPA Grand Challenge in 2004. It's interesting because it's the first widespread application of AI in a largely uncontrolled real-world environment. Road vehicle fatalities are a big killer, and anything to reduce them should be encouraged. But the question is: will people accept fatalities as a result of self-driving cars, even if it reduces the overall rate? Computers might not make the same dumb mistakes people do, but they will make the dumb mistakes computers make.

We inevitably hear more about failures than successes. I don't have any stats, but this video shows a few of the times Teslas have managed to avoid accidents. Tesla Autopilot Predicts Crash Compilation 2

I'm a bit dubious that self driving cars will ever reach Level 5, without changes to road infrastructure to support them. Unfortunately with self-driving cars, they can't be trained in a simulator, they have to be trained in the real world. At the moment, public opinion and authorities appear to favor their development. As the limitations of the software become more apparent, the tide of opinion may turn against experimental software being tested in the field when it has fatal consequences.

RE: Self Driving Uber Fatality - Thread II

The Dallas incident is similar to the San Jose one, where the Tesla seems to mindlessly follow the lane markings, without paying much attention to the fact that the right lane marking either disappears or diverges to the right.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Maybe something, or not, but it seems wherever I have traveled, people tend to have a different set of bad driving habits. Thus CA drivers tend to have a different set of bad driving habits than TX drivers.

So it may be that self-driving cars will have different sets of bad driving habits depending on the set of programmers.

The trick may be to know the expected range of bad driving and bad pedestrian habits. That might require a study that people will say is a waste of money, and whose results some will not believe (people don't like to hear about the bad things they do).

The results might show that bad habits get punished in the future as deaths or increased accidents.

So will insurance for AI driving skills be rated on the results of long-term driving tests? Or by the car type and brand?

RE: Self Driving Uber Fatality - Thread II

Quote (IRstuff)

The Dallas incident is similar to the San Jose one, where the Tesla seems to mindlessly follow the lane markings, without paying much attention to the fact that the right lane marking either disappears or diverges to the right.

The Tesla software supports the case where there is a lane marking visible on only one side; the lack of a second lane marking isn't necessarily an error.

In the case of lane markings diverging, how does the software know _which_ lane marking is the "right" one to follow? Either one could be incorrect. Tesla explained that collision avoidance is not signaled to the driver if the driver can safely steer away from it, but that does not appear to be true based on the YouTube video.

I believe in the Uber case the safety drivers do two jobs: safety, and performance monitoring of the software, noting discrepancies for later analysis. Uber used to have two persons per car; they decided that the safety driver could fill both roles. Obviously, that is cheaper.

There are several problems with partial automation, where the user still has to monitor the computer, such as automation dependence. Inattention is a prime cause of road accidents; now we are asking people to pay attention to the computer's decisions (or non-decisions), which has already been shown to be a human weakness. That doesn't add up.

In particular, the ability of the human to understand in all cases what the computer is doing is very poor. I also follow aviation accidents, and while automation probably saves a lot of accidents (there is no hard data on that), there are many accidents that occur due to the human/computer interface becoming decoupled, where if the pilots had simply turned off the autopilot and flown the plane manually, no accident would have occurred.

It's also worth noting that passenger jet autopilots do not do obstacle avoidance; there are TAWS and TCAS systems which do terrain and traffic detection, but they only provide advisory alerts to the pilots.

RE: Self Driving Uber Fatality - Thread II

If they keep running into things they shouldn't, they'll have to change the name to Auto Pile-it *RIMSHOT*

RE: Self Driving Uber Fatality - Thread II

"when if the pilots had simply turned of the autopilot and flown the plane manually, no accident would have occurred."

In many other cases, the warning systems engaged, but the pilot was so engrossed with an irrelevant input that the planes crashed. There was one where the copilot and navigator both recognized the stall warning, but were apparently afraid to prompt the pilot, and the plane crashed.

Tesla clearly screwed the pooch by naming it the way they did. Some idiot, probably Musk himself, approved that name. That was a disaster waiting to happen.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

There's also going to be the case where these types of systems have fewer and fewer incidents that become worse and worse. If they eliminate all of the fender benders but don't do well beyond that, the number of incidents will drop dramatically and the severity will rise significantly. In that way the AI systems will always look worse than human drivers at first glance.

RE: Self Driving Uber Fatality - Thread II

(OP)

Quote (microwizard)


...

There are several problems with partial automation, where the user still has to monitor the computer, such as automation dependence. Inattention is prime cause of road accidents, now we are asking people to pay attention to the computer decisions (or non-decisions), which has already shown to be a human weakness. That doesn't add up.

In my child-by-the-side-of-the-road scenario above, the car will decelerate at a half G to a safe halt in 5.66 s. If the car does not react and is headed for the kid, impact will be in half that time, 2.83 s. If you are sitting in the driver's seat, paying attention, it will take you something like 3/4 s to react. Luckily, this provides time for a successful panic stop. If you are not paying attention, it will take you several seconds just to figure out what is happening. If you are quick, you will know what it was you hit, like that lady in the Uber car. Otherwise, you will be wondering what that loud thump was. This is why texting and driving is dangerous.
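
The arithmetic behind those times:

CODE --> Octave

% Timing in the child scenario above (100 kph, 1/2 G).
g = 9.81;
v = 100 / 3.6;                 % 27.8 m/s
a = 0.5 * g;
t_stop   = v / a;              % duration of a controlled 1/2 G stop
d_stop   = v^2 / (2 * a);      % distance consumed by that stop
t_impact = d_stop / v;         % time to cover d_stop with no braking
printf("stop takes %.2f s; un-braked impact in %.2f s\n", ...
       t_stop, t_impact);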

Aircraft accidents do not happen as fast as road accidents. A safety driver is of no use if they are not holding the controls and paying 100% attention.

--
JHG

RE: Self Driving Uber Fatality - Thread II

A pedestrian can step off the sidewalk much faster than any car can stop, even from normal urban or suburban speeds.

RE: Self Driving Uber Fatality - Thread II

At what point will we humans just be in the way?

If we just have most things delivered and we work at home, there would be fewer accidents.

RE: Self Driving Uber Fatality - Thread II

(OP)

Quote (cranky108)



At what point will we humans just be in the way?

That depends on how smart the robots are.

--
JHG

RE: Self Driving Uber Fatality - Thread II

Some people seem to think I'm in the way already :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand

RE: Self Driving Uber Fatality - Thread II

Well, now that brings me back to my earlier post, of how do we know this wasn't intentional on the part of the autonomous car?

RE: Self Driving Uber Fatality - Thread II

The rise of the machines?

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz

RE: Self Driving Uber Fatality - Thread II

Quote (JStephen)

...how do we know this wasn't intentional on the part of the autonomous car?

Old rule of thumb, "Never attribute to malice that which can be adequately explained by stupidity."


RE: Self Driving Uber Fatality - Thread II

Maybe we need a special forum just for Bench Racing.

Mike Halloran
Pembroke Pines, FL, USA

RE: Self Driving Uber Fatality - Thread II

The posting was well crafted, and I assume, not fake news... too much at stake. If the NTSB is critical of Tesla... I can envision a court case to follow... also, too much at stake. Darwin takes another...

Dik

RE: Self Driving Uber Fatality - Thread II

Allowing 6 seconds of no hands on the wheel is a very long time, and a very long distance travelled, for a system that needs hands on the wheel at all times.

RE: Self Driving Uber Fatality - Thread II

...about 175 yards at 60 mph... a reasonably long distance...

Dik

RE: Self Driving Uber Fatality - Thread II

“Today, Tesla withdrew from the party agreement with the NTSB because it requires that we not release information about Autopilot to the public, a requirement which we believe fundamentally affects public safety negatively."

Perhaps it is just me looking at this incorrectly but I have no need to know anything about the Tesla Autopilot and I certainly do not stay on top of any news release from Mr. Musk about his cars and their features. However, I would like to have safe cars on the road and it is becoming rather obvious that there are issues with the Autopilot.

I think it would be appropriate to require that the CEO is on board his vehicle when it is subjected to independent testing on a testing site specifically designed to test the self-driving features of that company's auto. What is happening today is running tests of poorly designed software and hardware on public roads with ordinary people as guinea pigs. If nothing else, it is unethical.

I think there is an issue with the thought process in the companies that develop these systems. They are software companies who appear to have the philosophy that if something is not right, you issue a patch on Tuesday and then all is good. All I can say is that is not how things are done in the chemical engineering world.

RE: Self Driving Uber Fatality - Thread II

j_larsen,
I agree that all snake oil salesmen should test their products on themselves.

RE: Self Driving Uber Fatality - Thread II

"I think it would be appropriate to require that the CEO is on board his vehicle when it is subjected to independent testing on a testing site specifically designed to test the the self-driving features of that company's auto. What is happening today is running tests of poorly designed software and hardware on public road with ordinary people as guinea pigs. If nothing else, it is unethical. "

Musk and Tesla point out that their Autopilot is NOT for autonomous driving. In fact, and historically, the driver needs to be FULLY engaged and PAYING ATTENTION. In every case, the driver was NOT paying attention, and there are those that go out of their way to disable/defeat the car's mechanisms for forcing the driver to stay engaged. The fatality in Mountain View was on a pathological part of the freeway affected by roadwork, where normal drivers were having problems with the lane divergence. The Tesla did warn the driver, but it's likely that the few seconds of warning were insufficient for the driver to re-engage and figure out what they needed to do, or they were so distracted that they didn't even notice the warnings.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

"...the driver needs to be FULLY engaged and PAYING ATTENTION. In every case, the driver was NOT paying attention..."

Expecting a driver who isn't driving to "be FULLY engaged and PAYING ATTENTION" is far too much to ask. One of the main causes of wrecks now is drivers who are supposed to be driving and aren't paying attention.

RE: Self Driving Uber Fatality - Thread II

Tesla's problems are precisely why the big boys won't bother with L2 or L3 AVs in the field. Expecting human intervention /as a situation develops in seconds/ runs counter to the way real people use 'autopilot' like features. I must admit I wish Tesla had its own thread on this rather than messing up the Uber one.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread II

I have some sympathy for Tesla here. By the time due process has taken its time to do all the stuff it needs to do before it can issue a definitive statement exonerating the vehicle completely (let's, just for the sake of argument, assume that the car was blameless) and Tesla is released from their bond of silence, the company is quite likely to have collapsed under the weight of unrefuted rumour. Breaking cover, getting themselves chucked off the investigation, but being able to be part of the open discussion is probably the only choice they had left.

CEOs riding the tests? Wasn't that how the Verrückt testing was done (see other thread)?

A.

RE: Self Driving Uber Fatality - Thread II

zeusfaber,

The CEO is not to design the test. Others will determine the testing to be done to fully challenge the design.

The issue I have with Tesla is with how they say that the driver has to be fully engaged at all times. It is simply impossible for anyone to, within a second or so, figure out what the Autopilot couldn't figure out and then take corrective action.

RE: Self Driving Uber Fatality - Thread II

"The issue with Tesla I have is with how the say that the driver has to be fully engaged at all times. It is simply impossible for anyone to within a second or so figure out what the Autopilot couldn't figure out and then take corrective action.

If you are fully engaged, then you will see the car in front of you swerving to avoid hitting the barrier, which your Tesla detected, but doesn't seem to know what to do about it. The issue is that when things are going well the temptation to look at other things is too high.


I think "Autopilot" was an an engineering disaster, in that it should have never been named that, because crap like these accidents will continue to happen for a long time to come. It took the image recognition community 40 years to get to the point of detecting and identifying objects, and even then the performance is nowhere close to perfect. Likewise, people have worked on collision avoidance for nearly 60 years, and it's still not ready for the real world. We keep trying and we keep adding sensors, and some things work really well, but others, not so much.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote:

CEOs riding the tests? Wasn't that how the Verrückt testing was done (see other thread)?

Igor Sikorsky is likely the last CEO to have tested his own stuff.

Mike Halloran
Pembroke Pines, FL, USA

RE: Self Driving Uber Fatality - Thread II

Quote (Derek L Schmidt, Attorney General, State of Kansas)

Investigators obtained video recordings of HENRY and SCHOOLEY in a raft going airborne during their personal test run on the Verrückt prototype.

(here, p10)

RE: Self Driving Uber Fatality - Thread II

This Autopilot is just the tip of the 'berg. As it improves, it becomes financially viable to make drivers redundant; there will be fleets of driverless vehicles, and laws will be passed to allow this. Just a matter of time, my friends.

Dik

RE: Self Driving Uber Fatality - Thread II

Not in my time, I hope.

RE: Self Driving Uber Fatality - Thread II

I was hoping you would be around for a week or so... Because of the economics of not having to hire a driver, this will become the mode in a very short period of time. When it comes to safety over profit... you know how most western governments work.

Dik

RE: Self Driving Uber Fatality - Thread II

A lot of people assert that autonomous vehicles which have only a few mishaps are preferable to human-operated vehicles which have a lot of mishaps because of operator error. I don't subscribe to that philosophy.

RE: Self Driving Uber Fatality - Thread II

A problem with having a computer run a vehicle is clearly that it has to be programmed to deal with every foreseeable situation. I doubt there is an interest in doing that, and dik's comment above gets to the bottom of that. In chemical plants we have far fewer variables to deal with, and even there it isn't easy to get all situations covered.

An example: a couple of months ago I was about to turn right at an intersection onto a two-lane road. An 18-wheeler from the opposite direction was turning left, and I decided to give him both lanes for his turn (I know that is unusual in Houston traffic). Then, while he was turning, a large metal sawhorse-looking structure fell off the trailer and landed in the intersection. I managed to turn right next to it, catch up with the driver, get his attention, and pull in front of him and stop. I walked up to the cab and explained what had happened. He walked back to the intersection to look at it, and I drove on home.

Surely stuff can fall off trailers pulled by computer-driven trucks too, but how do you alert them to such problems?

I suspect that there are drivers who will outperform a computer when situations are "non-standard". There are also a lot of bad drivers out there. We don't do a good enough job of dealing with bad drivers.


RE: Self Driving Uber Fatality - Thread II

J, I always wondered how a muffler would find its way into the middle of the road. But can self-driving cars listen for problems? Like a dragging muffler, or squealing brakes, or the fire truck behind it? Can it ignore the sound of the dry bearings in the AC fan motor, or that hard rock station someone just turned on? Does it know what those flashing lights are right behind it? Or what about that smell that may be an engine fire?

There's more to it than just self-driving. There is being aware of what is going on around you.

As a friend's dad once said, 'that flat tire isn't going to change itself'.

RE: Self Driving Uber Fatality - Thread II

Actually, one of the mufflers is mine. I bought a car from an upperclassman after freshman-year finals in college, and on the way home for the summer, the muffler just fell off. I noticed the change in sound, but didn't know enough about what might have caused it; the car still ran fine, and I had just finished several days of all-nighters, etc. The end result is that I didn't stop until I was about 200 miles up the road, and only then noticed that the muffler was gone.

It certainly would be good if the sensors can detect such things, but that's almost a separate layer of requirements.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Deaf people drive cars, and the inability to hear such things is a problem for them. For their benefit, and that of hearing but unsophisticated drivers, I've thought that adding an audio analysis capability would do much to alert them to system failures long before they became serious problems. It's often the case when driving that I've wondered whether some new noise was one that I just hadn't noticed before, or whether it was getting worse.

Sadly, many defects that affect how cars operate don't make noise prior to failure - like on my car, where the engine dropped dead on the highway two lanes from the shoulder in rush-hour traffic; I managed to coast it over, dodging other traffic. Ironically, it was covered under warranty, so I never got any notice about what caused it, but I'm pretty sure it's the same thing that killed it again out of warranty a year later - corrosion under the ignition module, where a two-letter maker thought using dissimilar metals was a good idea.

RE: Self Driving Uber Fatality - Thread II

Speaking of which, I've read that Marlee Matlin brought her car to a mechanic because the battery kept running down.

His diagnosis was instantaneous. The horn button was stuck on.

How much would you pay to provide a supervisory listening device with AI on every car?

Uncle Sam doesn't care what you think; if those idiots in DC decide you need it, it will be mandatory, no matter the cost to you.

... As has already happened.

There's no putting the sh!t back in that horse.

Mike Halloran
Pembroke Pines, FL, USA

RE: Self Driving Uber Fatality - Thread II

Given my luck with human mechanics doing more damage than they repair, I'll take a shot with AI. At least the AI won't knowingly lie to my face.

RE: Self Driving Uber Fatality - Thread II

AI has no 'knowing,' which is the point. If I fix the AI, it won't repeat the mistake. If I point out to the mechanic where he did damage, he laughs in my face and says it was like that, fresh, bright metal scrapes and all.

The Tesla AI (this thread keeps wandering) kept demanding that the driver take control, which the driver failed to do.

RE: Self Driving Uber Fatality - Thread II

The Uber AI did not demand that the driver take over before it failed ... unless it did so in the last second before the crash.

RE: Self Driving Uber Fatality - Thread II

(OP)
3DDave,

In my scenario above, if you and the robot do not hit the brakes, you will reach the child in 2.8 seconds. If you are not paying attention and the robot signals that you must take control, I figure that you have one second to figure out what is happening and decide what to do. The reaction time of young humans, 100% driving the car and paying attention, is 0.75 seconds. Try watching 2.7 seconds pass by on your watch.

The problem may not be a child on the side of the road, it may be a loss of traction for some reason. I wonder how easily a robot can spot potential black ice.

The mechanic is not relevant. I cannot believe you will be allowed to fix an AI car.

--
JHG

RE: Self Driving Uber Fatality - Thread II

Maybe AI cars will bring back full service gas stations.

RE: Self Driving Uber Fatality - Thread II

BrianPeterson - the thread had shifted to the Tesla crash; sorry for the confusion on the Uber crash.

drawoh - the debugging capability can be added to any car. It doesn't require the ability to steer. As I originally mentioned, this is a terrible problem for deaf people as it is. The same could be extended to household appliances, which the deaf also interact with. I don't see a particular difficulty in using spectral analysis in conjunction with OBD-II data to link a mechanical output (sound) to a mechanical input (particular engine rotation position, exhaust valve opening, wheel rpm, fan rpm, et al.). I expect there's insufficient upside for makers to do this, as most consumers will ignore anything that doesn't prevent the car from moving.
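As a rough illustration of that idea (a sketch only - the function names, sample rate, and thresholds are invented, and this is not any maker's actual system), flagging a noise locked to wheel rotation might look like:

import numpy as np

def dominant_freq(samples, rate_hz):
    # strongest frequency in a windowed audio snippet, via FFT
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / rate_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

def flag_wheel_noise(samples, rate_hz, wheel_rpm, tol_hz=2.0):
    # a tone sitting on a harmonic of wheel rotation suggests a dragging
    # brake or a bad bearing rather than generic road noise
    wheel_hz = wheel_rpm / 60.0
    peak = dominant_freq(samples, rate_hz)
    return any(abs(peak - n * wheel_hz) < tol_hz for n in range(1, 5))

The same comparison could be repeated against engine rpm or fan rpm pulled from the OBD-II stream.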

Also, an autonomous car AI would not detect a collision and then alert the driver to do anything. In the case of the Tesla, the driver had exceeded the hands-off time and was being alerted to pay attention or at least touch the steering wheel. Scant attention is paid to Caltrans' failures: not replacing a one-time-use crushable barrier after it had previously been hit, not adding rumble strips to alert non-AI drivers, not adding heavy no-cross striping, and not maintaining the lane striping - all considerations due to the number of times that particular feature had been hit by non-AI drivers.

RE: Self Driving Uber Fatality - Thread II

(OP)

Quote (3DDave)

...

... In the case of the Tesla, the driver had exceeded the hands-off time and was being alerted to pay attention or at least touch the steering wheel.

This comes back to my point that there are not six levels of autonomous control of automobiles. There are two.

  1. The car has no controls other than a keyboard and/or microphone. The robot is in control.
    The robot knows how to find the passenger's destination. The robot is able to dodge other vehicles, bicycles, pedestrians, children, pets, tree branches, Bambi and Bullwinkle. Unless the robot operates in a restricted environment, there will always be uncontrollable, unpredictable agents on the road that must not be hit. If the robot causes an accident, the manufacturer is responsible, which is why I think robot cars will be a service, rather than a consumer possession.
  2. The human is in control, operating the steering wheel and controlling the accelerator and brake. If there is an accident, the human is responsible. The robot is a back seat driver, able to ring (buzz?) alarms and jiggle controls.
If there is any emergency in which the human must take control, they must be looking out at the road, gripping the steering wheel, and having access to the accelerator and brakes. The reaction time is no more than a second.

--
JHG

RE: Self Driving Uber Fatality - Thread II

Quote (drawoh)

This comes back to my point that there are not six levels of autonomous control of automobiles. There are two.
I tend to agree.

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz

RE: Self Driving Uber Fatality - Thread II

Quote (drawoh)

This comes back to my point that there are not six levels of autonomous control of automobiles. There are two.
...
...and jiggle controls.

I agree as well. It seems like everyone is very eager about the first type of vehicle (fully autonomous), but the "jiggle controls" part is very valuable - the computer is much better than a human at bringing the car back into normal driving mode after a slip.

A basic "drive-by-wire" system should not look like something that can drive car on its own... but it should provide active help in avoiding sudden obstacles (particularly make a decision about going around an obstacle with regards to cars that may be approaching at higher speeds from behind where driver does not have full attention). Plus it could compensate for much erroneous input that would otherwise cause a slip in bad conditions.

RE: Self Driving Uber Fatality - Thread II

drawoh, I think you are on the right track, but your option (i) is perhaps overegging the omelette. The AV doesn't have to know how to deal with every situation and treat it like a good driver would, it merely has to drive within its limitations. If it comes to a situation that it can't cope with then it should /gracefully/ take an alternative safe course of action, for instance park at the side of the road and phone for help.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread II

(OP)
GregLocock,

I agree. If you are responsible for your robot not causing accidents, keeping it well within its limits is one of your strategies. Shutting down on a country road way out in the middle of nowhere sounds like a disaster. Choosing not to operate at all in adverse conditions like ice storms is a better idea.

--
JHG

RE: Self Driving Uber Fatality - Thread II

While "Operating at frequencies outside the visible spectrum, LiDAR should have given the Uber vehicle a full “360 degree 3 dimensional scan of the environment,”" isn't technically incorrect, most people would use "wavelengths," instead of "frequencies" as light is more typically referred to as wavelengths.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

I have read all the posts on why the lidar and the software failed, but my problem is a simple one. Every new component fitted to a motor vehicle that is released to the public has been tested in the military or in a motor-racing series: ABS, traction control, fuel injection, safety cells. If the race teams, with their budgets and their safety protocols, will not have it, why should the public? Look, it would save Mercedes-Benz the $50m they pay Lewis Hamilton per year. Have the cars racing around the track, all wanting to be first, and see what happens then. The race track is the best way to test; look at the DARPA competition a few years back - they have not got any better since. Until the electronics understand death, they have no concept of self-preservation, and if we give them that concept they will understand what the off switch is. Computers and all electronics work on garbage in, garbage out; if the garbage is not what they are expecting, they have no contingencies programmed in. The human has over 60 million years of knowledge, from the day we are born, about keeping alive; computers do not have that. So no self-driving car should be on the road until it has had proper testing in closed environments with all and any thing that can happen. Even with autopilot, aircraft have a pilot, and they are flying in free air with nothing to hit at 35,000 feet except Mt Everest, ha ha. Just think how many times you have had to restart your phone - try restarting a car when it freezes heading towards a cliff at 100 kph. Not for me, thank you very much.

RE: Self Driving Uber Fatality - Thread II

I fail to see your point. Racing cars don't need AI, so why would they kill their aerodynamics and add weight to vehicles that are fighting for every fraction of Cd they can carve off a car body, and that are driving in a single lane that's about 35 ft wide?

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Also... Mt. Everest is only 29,029 feet high. So you're good at 35,000 feet.

RE: Self Driving Uber Fatality - Thread II

clough69, please use proper punctuation (commas are missing all over the place). I can't tell where one thought ends and the next begins... makes it very difficult to read.

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Self Driving Uber Fatality - Thread II

(OP)
IRstuff,

If the car has LiDAR and a robot, you can save the 100-250 lb of the driver. smile

The only problem I see with LiDAR in car racing is that its limited range limits your speed. Otherwise, you have a controlled environment, lacking bicycles, children, and Bambi and Bullwinkle. If all the cars are robots, there are no ethical issues about hitting stuff. Battlebots could be a whole lot of fun, at least until the robot overlords take over and accuse us of murder.

The big problem facing a robot car is traffic. Traffic is unpredictable. Robot traffic will provide all sorts of non-specular reflection of LiDAR wavelengths that your LiDAR will have to filter out.

--
JHG

RE: Self Driving Uber Fatality - Thread II

The whole point of human drivers in car racing is the humans. Otherwise, it's less entertaining than a video game movie.

"Robot traffic will provide all sorts of non-specular reflection of LiDAR wavelengths that your LiDAR will have to filter out"

Solutions already exist for things like that; that's how rolling code garage door openers came to be.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

"Robot traffic will provide all sorts of non-specular reflection of LiDAR wavelengths that your LiDAR will have to filter out"

"Solutions already exist for things like that; that's how rolling code garage door openers came to be."

Not really the same thing. LIDAR uses millions of laser beam pulses bouncing back from objects. What happens when there are several LIDAR systems operating in the same area, all bouncing laser beams off of their surroundings? How does each LIDAR sort out its laser beam reflections from the others?

RE: Self Driving Uber Fatality - Thread II

The main factor in lidar is the narrow field of view of the receivers and the narrow time window in which they expect a signal return. There is a similar problem in flash photography where a crowd is taking pictures: it is possible to have two or more flashes go off during the exposure, but it is relatively rare, and subsequent images aren't likely to have the same problem - and that is the case where the exposure is 1/60th to 1/30th of a second. The duration of exposure for lidar is far shorter.

One of the bigger problems is for microwave systems that have much larger viewing angles.

I suppose the ultimate answer will be what Ethernet did for communications over co-ax. Each Ethernet adapter would listen for any ongoing traffic and start transmitting when the wire was clear. If two adapters started at the same time, the conflict would be detected and both would stop for a random interval before retrying. That might be milliseconds of delay, so just a few inches of vehicle travel.
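A toy version of that listen-then-back-off idea, just to make the analogy concrete (purely illustrative - real lidars are not known to do this, and channel_busy/fire_pulse are hypothetical hooks):

import random, time

def transmit_with_backoff(channel_busy, fire_pulse, max_tries=8):
    slot_s = 1e-3                          # 1 ms slot; a few cm of travel at road speed
    for attempt in range(max_tries):
        if not channel_busy():             # listen before talking
            fire_pulse()
            return True
        # busy or collided: wait a random number of slots,
        # doubling the window each try, Ethernet-style
        time.sleep(random.randint(0, 2 ** attempt) * slot_s)
    return False                           # defer after repeated collisions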

RE: Self Driving Uber Fatality - Thread II

The fields of view of the receivers are sized specifically for the ranges and scan rates on something like the HDL-64, so they're on the order of milliradians. Interference requires another laser hitting essentially the exact same spot that your laser hit at exactly (within TWO microseconds) the same time. Despite claims of dozens of lidars interfering, realistically it'll be on the order of 6. In, say, bumper-to-bumper traffic, distances are reduced, and masking by adjacent cars prevents the cars behind you from hitting most things in front of you. In sparse traffic, distances are increased, but there are fewer cars emitting.

Moreover, as I stated, there are a variety of existing solutions to co-channel interference, such as the pseudo-random number codes used on GPS, or the Tri-Service codes used for laser designation systems such as the Apache Target Acquisition and Designation System (TADS). As you might imagine, if you fired your Hellfire missile at a target you designated, you wouldn't want it to get distracted by someone else's laser designation for their own Hellfire, which is why there are hundreds of different codes that Allied forces can use without interfering with each other. Additionally, the Tri-Service code that the Hellfire uses also has a rolling code, comparable to garage door opener rolling codes, called Pulse Interval Modulation (PIM) codes. There are billions of possible PIM codes. The receiver basically uses a temporal matched filter to look for the code that it's programmed for, and ignores other pulses from other emitters with different codes.
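To make the matched-filter idea concrete, here is a minimal sketch of interval-code matching in the spirit of the PIM scheme described above (the interval values and the tolerance are invented for illustration):

def matches_code(pulse_times_us, code_intervals_us, tol_us=0.1):
    # accept a pulse train only if its spacing matches our programmed code
    if len(pulse_times_us) < len(code_intervals_us) + 1:
        return False
    gaps = [b - a for a, b in zip(pulse_times_us, pulse_times_us[1:])]
    return all(abs(g - c) <= tol_us for g, c in zip(gaps, code_intervals_us))

print(matches_code([0.0, 7.3, 16.4, 22.8], [7.3, 9.1, 6.4]))   # True: spacing matches our code
print(matches_code([0.0, 7.3, 16.4, 22.8], [7.3, 9.1, 9.9]))   # False: rejected, wrong code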

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

The problem is processing time. I can process, in roughly 0.001 ms, whether I want to live or die. An ECU can do it faster, but it does not know life or death.

RE: Self Driving Uber Fatality - Thread II

My son pointed this out...

"After this training period, all of the subjects were asked to make quick decisions in several tasks designed by the researchers. In the tasks, the participants had to look at a screen, analyze what was going on, and answer a simple question about the action in as little time as possible (i.e. whether a clump of erratically moving dots was migrating right or left across the screen on average).

In order to make sure the effect wasn’t limited to just visual perception, the participants were also asked to complete an analogous task that was purely auditory.

The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.

“It’s not the case that the action game players are trigger-happy and less accurate: They are just as accurate and also faster,” says Daphne Bavelier. “Action game players make more correct decisions per unit time. If you are a surgeon or you are in the middle of a battlefield, that can make all the difference.”

The neural simulations shed light on why action gamers have augmented decision making capabilities.

People make decisions based on probabilities that they are constantly calculating and refining in their heads, Bavelier explains. The process is called probabilistic inference."

Dik

RE: Self Driving Uber Fatality - Thread II

Comparing clock rate to brains is a bit of a miss. Brains operate on massive parallel processing, pre-programming general responses. There is a maximum speed at which a response can happen based on an input, but the huge number of parallel paths can handle extreme complexity. I've come across articles similar to the gamers one; one was that baseball players hit balls more frequently than non-players. Experiments could find no special speed in the way their brains or muscles worked; what made the difference is that the players were predicting where the ball would be, based on the kinematics of the pitcher before the ball was released, there being too little time to observe the ball on its way to the plate. A similar experiment was run on chess masters, who could memorize board positions given only a glance. It turned out this only worked when the positions were from realizable game states. If the pieces were placed in locations that could not be part of a game (pawns in all four corners, perhaps), the chess masters were just as poor as anyone else. They weren't memorizing the positions; they seemed to recreate the entire match required to get to those positions, based on memories formed from the tens of thousands of games they had already played.

One of the things that brains seem to do is create massive 3D and 4D simulations. I have, from time to time, needed to get a glass I've left on a table without caring to turn on a light. Even though I've left the room, remembered I wanted the glass, and come back to pitch darkness, I can bring my hand to within 1/4 inch of the glass. And I can do that because I have a simulation of the entire path I took, of the table and the glass, and the memory of where I last put the glass down.

Getting a data structure that is suitable for that would go a long way to making robotic driving a reality.

What's amazing is that this process has to be enabled in some pretty small animals. Small birds can build up a 4D simulation that allows them to fly at fair speed through forests, avoiding trees and vines based on stereoscopic vision. Even wasps and bees have some location memory - for a couple of years I had cicada killers drilling holes in the back yard, and if I moved a leaf a few inches from the burrow they would have difficulty finding their latest hole, scanning back and forth to reacquire the position. They did find it, because leaves move and they are prepared for that. (Cicada killers are cool, and completely harmless, but look like they could kill.)

RE: Self Driving Uber Fatality - Thread II

(OP)

Quote (IRstuff)

...

Moreover, as I stated, there are a variety of existing solutions to co-channel interference, such as the pseudo-random number codes used on GPS, or the Tri-Service codes used for laser designation systems such as the Apache Target Acquisition and Designation System (TADS). As you might imagine, if you fired your Hellfire missile at a target you designated, you wouldn't want it to get distracted by someone else's laser designation for their own Hellfire, which is why there are hundreds of different codes that Allied forces can use without interfering with each other. Additionally, the Tri-Service code that the Hellfire uses also has a rolling code, comparable to garage door opener rolling codes, called Pulse Interval Modulation (PIM) codes. There are billions of possible PIM codes. The receiver basically uses a temporal matched filter to look for the code that it's programmed for, and ignores other pulses from other emitters with different codes.

If your garage door opener takes a full second to identify your controller, you won't notice. If I am using a laser to paint a target for my Hellfire missile, is there any reason to pulse the laser other than to provide a unique signature?

In my analysis above, the LiDAR is scanning at 10 Hz, and the laser is firing at 1.0 MHz. Whatever you do to ID your signal must work in less than a microsecond. A laser that pulses at gigahertz rates gives you the possibility of a signature, but the back-scatter is not that simple. For example, if the laser hits a bicycle wheel, you will get scatter from the wheel, and from whatever is behind the wheel. Scatter from an angled surface will be blurred a bit. This is okay for determining range. It could be a problem for sorting out gigahertz data.

You are looking at 100,000 spots every scan. Given that number, there is a high probability you will see some laser spots from the car next to you.

--
JHG

RE: Self Driving Uber Fatality - Thread II

The gate-time for a single receiver is 2 microseconds after pulse launch. Everything outside that time period is nonexistent and therefore irrelevant. Each receiver's "spot" is only active for 2 us a maximum of 20 times a second.

Bear in mind that this is not like RF co-channel interference; for anything to happen at all, two lidars must hit exactly the same spot within 2 us of each other. Assuming the 4E-5 duty cycle as a random probability, the probability of both occurring simultaneously is on the order of 1.6E-9, which means that two systems would have to be scanning the same area for hours to have any realistic probability of interfering.
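Reproducing that back-of-envelope estimate (same assumptions as above - a 2 us gate, at most 20 returns per second for a given receiver spot, and independence between the two lidars):

gate_s = 2e-6                  # receiver gate time after pulse launch
rate_hz = 20                   # times per second that a spot is active
duty = gate_s * rate_hz        # 4e-05: fraction of time the gate is open
p_overlap = duty ** 2          # ~1.6e-09 for two independent systems
print(duty, p_overlap)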


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Isn't it amazing that all these FUD ideas still keep cropping up on a professional forum, presented as "OMG I'm the first person to have ever thought of this".

I think that might be my new sig.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread II

Quote (IRS)

the probability of both occurring simultaneously is essentially on the order of 1.6E-9

Not disputing the claim, can you tell me how you arrived at this value? Personally curious... one of my failings since childhood...

Dik

RE: Self Driving Uber Fatality - Thread II

I brute forced it based on the duty cycle squared. The duty cycle is essentially the probability that a single receiver's IFOV is illuminated, so having it illuminated twice will result in that probability squared, roughly. As a practical matter, the probability is not random, so if we assume that all lidars on the road scan the same way, i.e., they are synchronized in scan position, then the probability is either unity or zero. As the duty cycle is quite low, the actual probability is closer to its duty cycle than it is to unity.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Thanks, IRS

Dik

RE: Self Driving Uber Fatality - Thread II

Even the most sophisticated LIDAR does not come close to what the human eye and the human brain can achieve. For example, as I was driving down the interstate a few weeks ago, I saw at a glance a car on a side road about a quarter mile away and knew instantly several things - it was a sedan, it was moving north, it was not going to intersect my path, etc. How long would it take for a LIDAR system to even see it, if it did at all? How long for the processor to figure out what it is, if it even could? Would it know the difference between a human-shaped bush blowing in the wind on the side of the road and a human who is standing still? Our eyes can scan a wide field of view and see and recognize hundreds of objects in that field, assessing all of them in a fraction of a second. We are so far beyond anything artificial, in not only our ability to scan the environment but also our ability to anticipate and extrapolate, that I don't believe a self-driving system will ever be as capable as an attentive human driver. As such, I believe a computer driver will always be a less capable driver and a greater danger to everyone. That is certainly the case at the present time.

RE: Self Driving Uber Fatality - Thread II

That's all interesting, but not relevant. The issue is reliability and consistency. Human drivers suck at being consistent: they fall asleep, they randomly decide to step on their brakes, they swerve across 4 lanes of heavy traffic because they suddenly realize that their offramp is only 1000 ft downrange. And you probably saw that sedan during daylight; at night or in fog, human vision drastically degrades, while lidar or radar keeps pumping out returns. Moreover, humans just get bored and inattentive, as was the case with the Uber backup driver.

Much of what I see as traffic problems is due entirely to plain bad driving and bad habits; traffic would move much better if all cars behaved consistently. People drive at random speeds, both relative to one another and from one second to the next.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

"I don't believe a self driving system will ever be as capable as an attentive human driver." Correct. On average an AV will probably never match the driving ability of the best drivers and certainly won't match the accident record of the safest drivers. Luckily it doesn't have to. All it has to do is what it has conspicuously failed to do so far, driver better than an average driver and have fewer accidents on average. One debate that needs to happen is how much improvement is needed. Setting silly performance targets won't help.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread II

Quote (GregLocock)

what it has conspicuously failed to do so far, driver better than an average driver and have fewer accidents on average.

If this is your metric, autonomous vehicles are safer by an order of magnitude.

The average American gets into an accident every ~175000 miles.

Uber has accumulated more than 2 million miles of autonomous testing.

At the average rate, 2 million miles would mean about 11 accidents; there have not been 11 accidents that we're aware of.

In my opinion, whether or not an accident results in a fatality is an extremely high-variance event, meaning that with regard to fatalities specifically, we won't know if autonomous vehicles are 'safer' for a long time, until billions or trillions of miles have been accumulated. And that is a LONG way off.

RE: Self Driving Uber Fatality - Thread II

"Human drivers suck at being consistent;"

Apparently, so do the computers, unless you consider them to be consistently poor. There have been a shockingly high number of wrecks among the ranks of computer drivers, considering how few total hours of driving they have.

"...they randomly decide to step on their brakes, they suddenly swerve across 4 lanes of heavy traffic"

Better than not hitting the brakes when they should and running over pedestrians and swerving into concrete barricades.

"Moreover, humans just get bored and inattentive, as was the case with the Uber backup driver."

Of course they do, especially when they're not actually driving. The "Uber driver" wasn't a driver, he was a passenger.

RE: Self Driving Uber Fatality - Thread II

(OP)
HotRod10,

Look carefully at the numbers I posted above. If a vehicle is travelling at 100 kph, a robot can make a fairly aggressive stop in under 80 m. This is well inside the range of the LiDAR, based on my calculations. The object has been scanned several times, and its velocity and acceleration vectors are known. I don't think the LiDAR is functional a mile away, but it does not need to be. The LiDAR's range will limit the speed of the car.
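That last point can be turned into a quick sketch: the fastest speed from which a 0.5 g stop still fits inside the sensor's useful range (the 80 m figure comes from the calculation above, not from any LiDAR spec):

import math
G = 9.81

def max_speed_kph(sensor_range_m, decel_g=0.5):
    # fastest speed whose stopping distance fits inside the sensor range
    return math.sqrt(2 * decel_g * G * sensor_range_m) * 3.6

print(max_speed_kph(80.0))   # ~100 km/h for an 80 m useful range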

If my robot detects an object on the side of the road that is moving out in front of the vehicle, the vehicle must take evasive action. The object can be a child, or a dog, or a garbage can being blown around by the wind. That may be a pile of leaves out there, but your robot does not know what is inside it. Until such time as the robot can reliably identify an undesirable terrorist, there is no excuse for not stopping.

--
JHG

RE: Self Driving Uber Fatality - Thread II

An automated vehicle that swerves or slams on the brakes for a piece of paper blowing in the wind WILL cause a collision sooner or later.

It might be a collision where the automated vehicle is not legally at fault ... but in the real world, the driver (automated or otherwise) who slams on the brakes unexpectedly (to other drivers) in a travel lane of a motorway is the one who actually causes the ensuing collision, even if the one behind (which might be a fully loaded 18 wheeler which cannot stop on a dime) is the one legally at fault.

RE: Self Driving Uber Fatality - Thread II

Strange that no one has spoken about what happens after a self-driving car hits something. If that something disables the car, then there is no conversation. But what if it were to hit something that does not disable it? Does it know to stop and call 911, or does it take off, creating a hit-and-run situation?

There is a difference, because hitting a person and driving off is a crime, whereas hitting a bird and driving off is not.

RE: Self Driving Uber Fatality - Thread II

"Uber has accumulated more than 2 million miles of autonomous testing."

Assuming those 2 million miles are real-world miles, the fatality rate for the autonomous Uber cars is still about 50 times the national average for human drivers.

"The average American gets into an accident every ~175000 miles."

I'd like to see where you got that number, because according to the NHTSA, in the US in 2012 there were 30,800 fatal crashes (33,561 fatalities), 1.634 million injury crashes (2.362 million injuries), and just under 4 million "property damage only" crashes. Total vehicle miles traveled (VMT) - just under 3 trillion Link. That's 1 fatality for every 88.5 million VMT, 1 injured person for every 1.25 million VMT, and 1 property-damage-only crash for every 750,000 VMT. Added together, that's 1 crash for every 467,857 VMT, not 175,000, so unless the crash rate has nearly tripled in the last 6 years, you're way off.
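For anyone checking that arithmetic, combining the per-mile rates (figures as quoted above; the PDO count is taken as 3.95 million for "just under 4 million"):

vmt = 2.969e12                 # ~3 trillion vehicle miles traveled, 2012
rates = [33_561 / vmt,         # fatalities per mile
         2.362e6 / vmt,        # injuries per mile
         3.95e6 / vmt]         # PDO crashes per mile
print(1 / sum(rates))          # roughly one event per 470,000 miles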

RE: Self Driving Uber Fatality - Thread II

"If my robot detects an object on the side of the road that is moving out in front of the vehicle, the vehicle must take evasive action. The object can be a child, or a dog, or a garbage can being blown around by the wind."

Great! So the car will take evasive action into a vehicle in the adjacent lane because a garbage can blew into the street? That'll be popular. Btw, what happens when the child is standing still on the curb until half a second before your robot drives by and then runs into the street? Does your robot anticipate this completely illogical action as most humans would?

RE: Self Driving Uber Fatality - Thread II

(OP)
HotRod10,

Hitting the brakes is evasive action.

How about approaching a building a meter away from your road at 100 kph, with someone possibly standing behind it? At some point, the robot cannot see around a corner, and it must slow down just in case.

--
JHG

RE: Self Driving Uber Fatality - Thread II

"...at night or in fog, human vision drastically degrades, while lidar or radar can keep pumping out returns."

Then perhaps, instead of taking the human's ability to assess, anticipate, and extrapolate out of the picture and trying to replace it with a far less advanced computer "brain", we should put our efforts towards supplementing and augmenting the human driver's ability to see at night or in fog.

The problem with the push for self-driving cars is that they are so far from being a solution. OTOH, if the technology were applied to solving the real problems - the limitations of human vision and inattentiveness - then real progress could be made. Some of this has already been implemented in a few cars: headlights that turn to follow the road, thermal imaging, blind spot sensors, lane departure warnings, etc. If the efforts were aimed towards helping the driver, rather than replacing him, the roads would become safer. Replacing human drivers with machines that are not up to the task makes the roads more dangerous. Maybe someday autonomous vehicles will be ready to be on the street, but until they are, they shouldn't be let loose on the unsuspecting public.

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

I'd like to see where you got that number, because according to the NHTSA, in the US in 2012 there were 30,800 fatal crashes (33,561 fatalities), 1.634 million injury crashes (2.362 million injuries), and just under 4 million "property damage only" crashes. Total vehicle miles traveled (VMT) - just under 3 trillion Link. That's 1 fatality for every 88.5 million VMT, 1 injured person for every 1.25 million VMT, and 1 property-damage-only crash for every 750,000 VMT. Added together, that's 1 crash for every 467,857 VMT, not 175,000, so unless the crash rate has nearly tripled in the last 6 years, you're way off.

Ok, my number was wrong. If we use yours, Uber's cars are still 'safer' by a factor of 4 or 5 instead of 10.

One pedestrian fatality does not a trend make.

If this woman had been injured instead of killed, suddenly the numbers for Uber's development program would be somewhere near the national average (1.25 million vehicle miles per injury) and all the hair pulling would be drastically subdued.

The difference between her being an injury and her being a fatality is a matter of a couple of feet one way or the other- by definition, a high variance event. There isn't anywhere near enough data yet to determine either way whether these vehicles are an improvement, and there won't be for a long, long time.

RE: Self Driving Uber Fatality - Thread II

It appears plausible that the Uber cars do have 2 million "semi-autonomous" miles, but when a human still has to take over to avoid crashing into something once every mile on average (at least it's improving), that doesn't inspire much confidence.

Link

RE: Self Driving Uber Fatality - Thread II

"Uber's cars are still 'safer' by a factor of 4 or 5 instead of 10."

Only if you compare the Uber cars' fatality rate to the overall crash rate, the bulk of which are property-damage-only (PDO) crashes. We only heard about this one because it resulted in a fatality. How many PDO crashes have Uber cars had? If you can find that number, you're better than I am. It seems they're being pretty tight-lipped about it. I wonder why?

RE: Self Driving Uber Fatality - Thread II

I fail to understand how pointing out that our data set is incomplete (which it is; you're 100% correct) - which is the point I've been trying to make - warrants support for any conclusion whatsoever.

This is an engineering forum, isn't it?

RE: Self Driving Uber Fatality - Thread II

"This is an engineering forum, isn't it?"

Specifically, it's the "Engineering Failures & Disasters" forum, where we discuss why and how engineering failures and disasters occurred, so as to reach conclusions about how to prevent future failures and further unnecessary loss of human life. At least that's my understanding of the purpose of this particular forum. To that end, I am pointing out that in this case the failure was putting computers behind the wheel of vehicles on public roadways when they were not up to the task. I could be wrong in my assertion that they will never be, but what is clear is that they are not ready yet.

If the object is to improve traffic safety, at this point replacing the human driver does not accomplish that goal. Maybe it will someday, but not now. Of course, the real aim is profit - getting a machine to do something instead of paying a person to do it. Don't get me wrong, I'm all for automating menial tasks, but driving is not a menial task. It can be a boring task, and one prone to distraction, but it is not simple, and the consequences of failing to do it capably can be, and have been, fatal. Experiments that put the public at risk to further technological advancement are irresponsible. Such experiments done for the sake of profit are criminal, just like opening a water slide that injures and decapitates people.

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

at this point replacing the human driver does not accomplish that goal.

This is the entire point- you're making that conclusion, which may or may not be correct, based on incomplete information.

I don't disagree with you on a few of your points but promoting a conclusion which is very possibly incorrect, because it is based on a data set which is massively incomplete, is not something I believe that we, as members of this forum, should promote or even tolerate.

We're all engineers, and we all have the same itch to look at a problem and say "I KNOW WHAT THIS MEANS". I get it. But in this case, we don't. We can't yet. Any conclusion based on what we know today isn't a conclusion, it's a knee-jerk reaction.

Quote (HotRod10)

Experiments that put the public at risk to further technological advancement are irresponsible.

I don't disagree with that sentiment taken at face value..

But in this case, it is not possible for this new technology to be released without extensive testing in and among the general public. The general public represents a considerable portion of the exact hazards this system is being designed to avoid - and those hazards cannot be simulated.

Outlawing testing of autonomous vehicles on public roads is exactly the same as outlawing their development entirely. They cannot be rigorously tested any other way, and they must be rigorously tested if they are ever to be brought to market.

RE: Self Driving Uber Fatality - Thread II

There are a lot of issues with mixing AVs and HVs (human-driven vehicles), given the unpredictability of the latter. If all vehicles were AVs, many of the collisions that have occurred wouldn't have happened, particularly if the AVs eventually get upgraded into a mesh network of communication. Intent and coordination can be transmitted between cars fast enough to avoid issues like a Tesla not seeing a semi trailer, etc.
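As a purely hypothetical sketch of what a broadcast "intent" message in such a mesh might carry (field names invented; this is not any real DSRC/V2X schema):

from dataclasses import dataclass

@dataclass
class IntentMsg:
    vehicle_id: str
    lat: float                 # current position
    lon: float
    heading_deg: float
    speed_mps: float
    planned_action: str        # e.g. "hard_brake", "lane_change_left"
    valid_for_ms: int          # how long the stated intent holds

msg = IntentMsg("AV-042", 37.39, -122.08, 91.0, 24.6, "hard_brake", 250)

A following car that receives "hard_brake" a quarter second before the brake lights come on gets a head start that no camera or lidar can provide.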

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

"...promoting a conclusion which is very possibly incorrect, because it is based on a data set which is massively incomplete, is not something I believe this forum should promote or even tolerate."

So in the absence of adequate information to make an assessment, we should just let anyone who cares to send an AV out onto the roadways and hope they don't kill too many people? Sorry, but I disagree. Public sentiment will ultimately decide where the line is drawn, but that is my view on the subject.

As far as whether my expressing my view should be "tolerated", may I remind you that public safety is one of the core principles of good engineering, so discussion of whether having AVs on the road at their current level of sophistication puts public safety at risk is very much relevant to the topic and consistent with the aims of the forum.

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

So in the absence of adequate information to make an assessment, we should just let anyone who cares to, send an AV out onto the roadways and hope they don't kill too many people? Sorry, but I disagree. Public sentiment will ultimately decide where the line is drawn, but that is my view on the subject.

That is, most certainly, not what I said.

Surely you understand that banning public testing of autonomous vehicles (which is the idea you seem to be supporting; if that's not correct, then correct me) is synonymous with banning them permanently.

There is no other way to test them for public release than to test them in public. The things the system has to interact with are too varied and too nuanced to be accurately reproduced in a laboratory, or even in a closed-road environment, to a level where they will be ready for unrestricted use on release day 1.

Quote (HotRod10)

As far as whether me expressing my view should be "tolerated", may I remind you that public safety is one of the core principles of good engineering, so discussion of whether having AVs on the road at their current level of sophistication puts public safety at risk, is very much relevant to the topic and consistent with the aims of the forum.

I couldn't agree more that safety, in the context of whatever particular things we are working on, is absolutely job 1 for all of us.

Making decisions based on incomplete data is rarely a core value of any safety scheme I've ever heard of.

Your opinion is your opinion, and you have every right to have one. You seem to be of the mind that autonomous vehicles capable of real independent operation under all or even most conditions are an impossibility; my own opinion is pretty close to that, actually. I'm not advocating that your posts be censored or anything of the sort; I'm just encouraging you to realize that the data available to you (and me, and everyone else) at this point warrants nothing more than an opinion. Conclusions are a long way off.

But I do think, that in the long term, it is a possibility that this technology can result in safer transportation for all of us. In the short term, that means public testing is a necessity. There's just no way around it.

RE: Self Driving Uber Fatality - Thread II

IRSTuff- thanks for your corrections to the article!

Roopinder Tara
Director of Content
ENGINEERING.com

RE: Self Driving Uber Fatality - Thread II

"Surely you understand that banning public testing of autonomous vehicles (which is the idea you seem to be supporting. If that's not correct, then correct me) is synonymous with banning them permanently."

I don't agree. Certainly it makes it more difficult and expensive to test them in a simulated environment. However, it is not impossible, just not as economical. Public safety demands that AVs, like any other potentially lethal product, be tested as thoroughly as possible BEFORE being released into the public arena.

"Making decisions based on incomplete data is rarely a core value of any safety scheme I've ever heard of."

There is an old saying we still use in our office - "When in doubt, make it stout". In the absence of good evidence of the adequacy of a design, good engineering always takes the conservative approach. As engineers, "making decisions based on incomplete data" is par for the course. If bedrock could be 40 feet down or 50 feet down, we're drilling 60 feet just to be sure the bridge doesn't fall down.

"I'm just encouraging you to realize that the data available to you (and me, and everyone else) at this point warrants nothing more than an opinion. Conclusions are a long way off."

I stated my opinion; my conclusion; my guess of where the current path will lead, and the dangers I foresee in following it. I was not attempting to make a definitive statement, or present it as fact. Obviously, no one knows the future, save God Himself.

Rushing an AV out onto the streets, as Uber did, I believe is irresponsible, and it sure didn't help their reputation or public sentiment towards AVs in general. My political views lean towards the libertarian side, but there is a place for regulation and oversight, and I believe this is one area that needs some fairly stringent ones.

"...public testing is a necessity. There's just no way around it."

AFTER extensive and successful testing to the greatest extent possible in a controlled environment is complete, and then only with clearly marked vehicles. Hey, if human student drivers have to be in marked cars, first time AV drivers should too.

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

"This is an engineering forum, isn't it?"

We're pretty loose on definitions... as long as it has technical merit and is interesting, we're pretty flexible.

Dik

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

AFTER extensive and successful testing to the greatest extent possible in a controlled environment is complete,

This is where we are now, basically. These systems all work great on paper.


Quote (HotRod10)

and then only with clearly marked vehicles. Hey, if human student drivers have to be in marked cars, first time AV drivers should too.

How exactly would marking the vehicles have prevented this pedestrian accident, the Tesla concrete barrier accident, the Tesla white trailer incident, or any of the other incidents which have been highly publicized?

I am relatively confident in saying they would not have.

I'll pose the same question I posed earlier in this thread, again:

This knee-jerk reaction that many people are having is due to a single pedestrian fatality.

What quantity of fatalities represents an acceptable number to you? If your answer is going to be representative of what will happen in the real world, it cannot be zero. So, how many?

RE: Self Driving Uber Fatality - Thread II

"This is where we are now, basically. These systems all work great on paper."

I'm not talking about on paper, or even lab tests giving the computer "brain" simulated inputs. I'm talking about full-scale, outdoor testing of the whole system in the actual car, driving through a mock-up of a city street with numerous moving objects.

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

I'm not talking about on paper, or even lab tests giving the computer "brain" simulated inputs. I'm talking about full-scale, outdoor testing of the whole system in the actual car, driving through a mock-up of a city street with numerous moving objects.

Dude. That's been done.

Waymo has been working on test tracks, in the exact mockup situation you're describing, for a decade. That work continues.

It isn't enough. The scenario you don't like (and that I don't really like either), cars not ready for full release being on public roads, is not possible to avoid.

RE: Self Driving Uber Fatality - Thread II

"However, it is not impossible, just not as economical."

The number of road miles required to test such a system cannot realistically be dumped onto one car; it requires hundreds of cars, and there are not hundreds of test facilities in the US. Moreover, scenarios such as the Tesla collisions with the semi and the median barrier are exactly the kinds of cases the systems would likely have passed under test conditions, which is what allowed them to think the systems were ready in the first place.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

So, if the AV avoids all the PDO (property-damage-only) crashes that a human driver would have, but has a similar level of crashes at all more severe levels, is that an improvement or not? They'll initially eliminate the PDO crashes without any impact on the more severe ones, then they'll begin to reduce the count of the minor-injury crashes without any impact on the more severe events, then ....

The AV crash profile will always tilt toward the fatalities, and the more strongly it tilts that way, the better they're doing. But the more it tilts that way, the greater the percentage of fatalities in the overall mix. When they get to 100% fatalities, is that better or worse?

At least that's how it seems to me.
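A toy calculation (a minimal Python sketch; all the crash counts are invented for illustration, not real data) shows the effect: removing only the less-severe crashes shrinks the total while raising the share of what's left that is fatal.

# Invented crash counts, for illustration only.
crashes = {"PDO": 1000, "minor injury": 300, "serious injury": 50, "fatal": 5}

def fatal_share(mix):
    # Fatal crashes as a fraction of all crashes in the mix.
    return mix["fatal"] / sum(mix.values())

print(sum(crashes.values()), "crashes,", f"{fatal_share(crashes):.2%} fatal")  # human mix

# Hypothetical AV that avoids all PDO and half of the minor-injury crashes:
av = dict(crashes, PDO=0)
av["minor injury"] //= 2
print(sum(av.values()), "crashes,", f"{fatal_share(av):.2%} fatal")  # AV mix

Fewer crashes overall (205 vs. 1355), but the fatal share rises from roughly 0.4% to roughly 2.4% - exactly the tilt described above, with total harm nonetheless lower.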

RE: Self Driving Uber Fatality - Thread II

"It isn't enough."

Obviously not, but is that a reason to put the public at risk to continue the experiment?

"...cars not ready for full release being on public roads, is not possible to avoid."

Yes it is; you just don't like what it means for this experiment you love so much.

"So, if the AV avoids all the PDO crashes that a human driver would have but has a similar level of crashes at all worse levels is that an improvement or not?"

First off, that's a big assumption. What makes you think that the AVs can avoid PDO crashes any more successfully than the fatal ones? In any case, it would still be worse. A human driver, once proved fatally incompetent (by say, running down a pedestrian), does not get another chance, regardless of how many miles he or she might have under their belt.

RE: Self Driving Uber Fatality - Thread II

"A human driver, once proved fatally incompetent (by say, running down a pedestrian), does not get another chance, regardless of how many miles he or she might have under their belt."

Not always true. I was in traffic school with someone who had previously had a fatal accident, and was driving again when he was cited for some other traffic violation.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

"I was in traffic school with someone who had previously had a fatal accident..."

If it was truly an accident, and not due to his incompetence, that is understandable. However, if he displayed the kind of poor judgement we've seen from some AVs, his license should be permanently revoked.

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

Yes it is; you just don't like what it means for this experiment you love so much.

If you think I 'love' this 'experiment' then you need to work on reading comprehension.

I am not convinced that the true autonomous network of cars that people like Elon Musk envision is possible at all, let alone possible with current technology. I think the jury is still out, but if I had to guess, my opinion shades towards true Level 5 capability being, at best, a long way off. Decades off.

The difference between us is not our level of love or hate for autonomous vehicles... it is whether or not we think the technology justifies the testing required to see if it is actually viable.

RE: Self Driving Uber Fatality - Thread II

"...whether or not we think the technology justifies the testing required to see if it is actually viable."

Agreed. That is the sticking point. I don't think it is justified at the current state of the technology, given the risks to the public, especially when most people are ignorant of those risks. The problem for those who share your view is that even if you win the intellectual argument, you may lose the PR battle. It doesn't take many fatalities to turn public sentiment, especially when the risk has been downplayed to the point where most people believe it's not supposed to happen at all.

Most people, I suspect, were like me, not realizing that AVs were even on the road. After the Tesla incident with the truck, the company stated in strong terms that the "autopilot" was not an autonomous driving system, but a driver-assist feature. I had no idea other AVs were even on the roads, and hiding the fact that they are does not help public perception, especially when the first we hear of it is a fatal crash.

RE: Self Driving Uber Fatality - Thread II

Quote (JgKRI)

What quantity of fatalities represents an acceptable number to you? If your answer is going to be representative of what will happen in the real world, it cannot be zero. So, how many?

Well, obviously that's the bigger question, but it doesn't yet matter much when the cars are driving into things that should be easily avoided....



Quote (HotRod10)

A human driver, once proved fatally incompetent (by say, running down a pedestrian), does not get another chance, regardless of how many miles he or she might have under their belt.

That's wishful thinking....

RE: Self Driving Uber Fatality - Thread II

I don't see AVs as necessarily a means of reducing deaths, although I think there's a strong possibility of that. My biggest desire is for AVs to reduce traffic congestion, by making car motions and maneuvers more coordinated and less subject to human whim. Additionally, the potential elimination of human reaction time, combined with the potential for cars to intercommunicate among themselves, may lead to breaking the speed-flow vs. car-density bottleneck.
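As a back-of-the-envelope illustration of that bottleneck (a minimal car-following sketch in Python; the headway and vehicle-length values are assumptions, not measurements): steady-state lane capacity is roughly the speed divided by the road each vehicle occupies, flow = v / (v * t_headway + L_effective), so shrinking the headway that human reaction time forces on drivers raises the ceiling sharply.

# Steady-state car-following sketch; all numbers are assumptions.
def lane_flow_veh_per_hr(speed_m_s, headway_s, eff_len_m=7.5):
    spacing_m = speed_m_s * headway_s + eff_len_m  # road taken per vehicle
    return 3600 * speed_m_s / spacing_m

v = 30.0  # ~108 km/h
for t_h in (1.5, 0.9, 0.3):  # human design value, attentive human, hypothetical V2V
    print(f"headway {t_h} s -> {lane_flow_veh_per_hr(v, t_h):.0f} veh/hr/lane")

With these assumed numbers, cutting the headway from 1.5 s to 0.3 s roughly triples lane throughput (about 2,100 to about 6,500 vehicles per hour), which is the bottleneck-breaking referred to above.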

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

"...the potential elimination of...human reaction time..."

From the movement of an object toward the path of the vehicle to the application of the brakes, is the AV reaction time better or worse than that of an attentive human driver?

RE: Self Driving Uber Fatality - Thread II

Just a question: If walking across a major street in the wrong place is a risky thing to do, if not illegal, then why is it considered an accident when something does happen?

It's not like she did not know, as would be the case with a young child.

That the AV did not see her is a problem, as this could have been something else it missed.

If the problem with humans really is being bored, then would we expect a higher ratio of rural accidents compared to cities? And would this be reversed for AVs because of too much input? Or is the input problem also a factor for humans?

RE: Self Driving Uber Fatality - Thread II

The standard human reaction time as applied to stopping distance is typically 1.5 seconds. The issue isn't necessarily a single driver's reaction time, but the chain of drivers in the same lane, and how some will jam on their brakes while others don't.
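For scale, a worked example (standard stopping-distance kinematics in Python; the 1.5 s human perception-reaction time is the usual design value, while the AV latency is purely an assumed placeholder):

# Stopping distance = reaction distance + braking distance (v^2 / 2a).
def stopping_distance_m(speed_m_s, reaction_s, decel_m_s2=4.9):  # ~0.5 g braking
    return speed_m_s * reaction_s + speed_m_s ** 2 / (2 * decel_m_s2)

v = 20.0  # ~45 mph
print(f"human (1.5 s reaction):      {stopping_distance_m(v, 1.5):.1f} m")
print(f"AV (0.5 s, assumed latency): {stopping_distance_m(v, 0.5):.1f} m")

At 45 mph, shaving a second off the reaction time removes about 20 m from a roughly 71 m total, so the question above matters a great deal - but only if the sensing and classification actually work. (Written without the long dash: it matters a great deal, provided the sensing and classification actually work.)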

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

There was a recent case where a drunk driver was involved in a fatal accident, but was judged to be guilty only of the DUI, as the accident itself was judged to have been caused by the deceased driver.

As for this pedestrian, while they were certainly not supposed to be crossing the street at that location, in California the pedestrian has the right of way, even when crossing illegally. The fact that the pedestrian was at fault for jaywalking does not relieve the driver of the responsibility to yield to the pedestrian, much less excuse hitting the pedestrian.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Perhaps it would be best to take out the word "accident" and replace it with the word "collision". There are many "collisions" but very few "accidents", since almost all of them have root causes that turn out to be preventable, frequently by both parties involved.

Legal "fault" with Uber's collision may lie with the pedestrian who jaywalked (edit: simul-post suggests that California may give the pedestrian the right of way regardless of jaywalking). But drivers, including self-driving vehicles, also carry some responsibility for avoiding collisions that are legally the other driver's fault. I've already mentioned in this thread, the inadvisability of slamming on the brakes in a full-speed travel lane of a motorway with a fully loaded dump truck filling your rear view mirror. Doesn't matter human driver or automated one. Doesn't matter that getting hit from the rear is legally the responsibility of the driver behind ... the occupants of your vehicle are still flat.

Humans vary widely in their ability to prevent situations that result in a collision. Some people have good situational awareness and take intentional actions to prevent potentially dangerous situations from developing before they happen ... others focus narrowly on the path directly in front of them and are oblivious to any other surroundings or on what's in their mirrors. Good drivers will spot the other vehicle in a lane about to merge with their own and will pro-actively speed up, slow down, or move over so that the other driver can merge without conflict. Others ... not so much. Self-driving? Who knows!

Humans get bored when there is sensory deprivation or when they don't have anything to do. In the case of road traffic, humans certainly can get bored, and it doesn't take babysitting an automated vehicle to do it. Take a long, straight road with few features along it and impose a 55 mph speed limit and strictly enforce it, and you'll find out that arbitrary reductions in speed limits don't necessarily correlate with reducing the number of collisions. Reason ... people get bored, and nod off, or start doing other things (playing on the phone, etc). An automated vehicle won't get bored in those circumstances ... but you can be pretty much assured that the babysitting "driver" will be sleeping.

Humans can most certainly get overwhelmed, too. They don't multitask very well. They may be so fixated on making sure they don't hit the pedestrian who looks like they might be stepping off the curb that they miss the fact that the traffic signal ahead of them has turned red.

RE: Self Driving Uber Fatality - Thread II

Quote (cranky108)

Just a question: If walking across a major street in the wrong place is a risky thing to do, if not illegal, then why is it considered an accident when something does happen?

It's not like she did not know, as would be the case with a young child.

Just to touch on this subject, the picture below shows the main road adjacent to my community. The speed limit is 45 (55-60 is the norm). All of the houses (+/- 500 units) are to the SE. There are none on the NW side of the road.

The red circle is the bus stop; stops are spaced evenly along the road every 500' or so. The road is 140' curb to curb. The nearest crosswalks are 1/2 mile in either direction.

Don't even get me started on the turning movements and the different ways right-of-way is perceived through those intersections. They cause more peril than the traffic moving along the main road.

RE: Self Driving Uber Fatality - Thread II

There are some things that one can successfully multitask, like watching a movie and eating popcorn. I tried to listen to a lecture and write a technical memo at the same time; that was not pretty.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (LionelHutz)

it doesn't yet matter much when the cars are driving into things that should be easily avoided....

That is exactly the point, though.

Autonomous vehicles will ALWAYS drive into things that *should* be avoided.

Reaching some point where autonomous vehicles have zero accidents of any kind in trillions of miles driven per year is a statistical impossibility. It will not happen, ever.

The question you have to answer, if you're in favor of automated vehicles, is: what accident rate are you willing to accept?

If your answer is precisely zero, we will never get there. It just isn't possible.

If your answer is "a lower statistical rate than humans in the aggregate", the technology today is such that it may be possible right now, but we don't really have enough data to know yet.

If your answer is "a lower statistical rate than some subset of humans that have the least accidents or the least damaging accidents" then the technology today might still be there already, or might not be, or might never be. We, again, don't have enough data to know, and having enough data to know with a high level of statistical certainty is many, many years in the future based on current rates of miles accumulated per year.
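To put a number on "enough data", here is a rough sketch (Python; the human baseline is the approximate US figure of about 1.1 fatalities per 100 million vehicle-miles, and the zero-fatality fleet is the most optimistic case imaginable):

# "Rule of three": observe zero events over N trials, and the 95% upper
# confidence bound on the event rate is approximately 3/N.
human_fatal_rate = 1.1e-8  # ~1.1 fatalities per 100 million miles (approx. US figure)

# Fatality-free miles needed before the 95% upper bound on the AV rate
# drops below the human baseline:
miles_needed = 3 / human_fatal_rate
print(f"{miles_needed:,.0f} miles")  # ~273 million miles, with zero deaths

Even a fleet that never kills anyone would need roughly 273 million miles just to demonstrate parity with humans at 95% confidence; demonstrating that it is meaningfully better, or doing so while actually having some fatalities, takes far more.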

RE: Self Driving Uber Fatality - Thread II

"Autonomous vehicles will ALWAYS drive into things that *should* be avoided."

You ignored the "easily" part of the previous statement. The pedestrian death should NEVER have happened, i.e., there should be ZERO probability that that specific set of conditions with that pedestrian results in any collision, short of an outright failure. Now, it's certainly possible that the Uber incident was the result of a serious failure, but we probably won't know for a while.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

It occurred to me on the way home that another reason for disengaging humans from driving is the increased level of "road rage." There's a FasTrak toll road that people use to bypass some of the traffic on the SR 91 between Anaheim and Riverside; the on-ramp is often backed up for up to a mile in the afternoons. There are invariably people "cutting" the line by zooming up to the last few hundred feet before the on-ramp and just inserting themselves, knowing that someone will yield to the aggression, while the psychopath meanwhile blocks the thru-traffic lane.

Now, if it were just people waiting in line, you'd almost never see someone cut in; so I see this as a symptom of the depersonalization of other drivers, because being rude and obnoxious to a mechanical box doesn't rank anywhere near the level of being rude to an actual, visible human. That's something AVs could eliminate. AVs would also make the transition to the toll road cleaner and reduce the congestion, since they wouldn't slow down going up the grade on that part of the 91 and the traffic would flow better.

Better flow and fewer exhibitions of psychopathic behavior --> less road rage.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (IRStuff)

You ignored the "easily" part of the previous statement. The pedestrian death should NEVER have happened, i.e., there should be ZERO probability that that specific set of conditions with that pedestrian results in any collision, short of an outright failure. Now, it's certainly possible that the Uber incident was the result of a serious failure, but we probably won't know for a while.

I understand what you're saying; I just think that "easily" has no effect on the end result. The ideal case is zero accidents. As engineers we know that's not possible; even confining accidents to hardware-failure-only incidents will not drive the total to zero.

There should be ZERO probability that a plane could crash and kill its passengers, yet this happens with relative frequency. This doesn't cause us, as a whole, to stop buying plane tickets.

The 'outlaw all testing of AVs on public roads' reaction to this accident is as if the reaction to the early crashes of the de Havilland Comet had led us to abandon airplanes.

RE: Self Driving Uber Fatality - Thread II

What happened to the word "easily"?

OK, to put it bluntly, the 3 incidents causing death (driving under a truck trailer, into the end of a road barrier, and into a person) are NOT acceptable to me. They illustrate the level of improvement still required.

As a general rule, if the capabilities of the car mean it is capable of avoiding something, then that something should be avoided. If the brakes are capable of stopping it before the impact, then it should have stopped. If the tires and suspension are capable of an avoidance maneuver then it should have maneuvered to avoid the accident.

If it is put into a situation where it can't brake or maneuver to avoid, then it should try to minimize the harm. But it should have some intelligence to recognize a situation that could become impossible to survive intact, instead of blissfully driving on until the situation presents itself. In other words, it should take steps like slowing down, or not driving beside another car in the next lane, when caution is called for.

The developers of these systems have been touting how they will avoid making the stupid mistakes a human does. So, they'd better quit making those mistakes if they are to be acceptable.

RE: Self Driving Uber Fatality - Thread II

Quote (jgKRI)

I understand what you're saying; I just think that "easily" has no effect on the end result. The ideal case is zero accidents. As engineers we know that's not possible; even confining accidents to hardware-failure-only incidents will not drive the total to zero.

There should be ZERO probability that a plane could crash and kill its passengers, yet this happens with relative frequency. This doesn't cause us, as a whole, to stop buying plane tickets.

The 'outlaw all testing of AVs on public roads' reaction to this accident is as if the reaction to the early crashes of the de Havilland Comet had led us to abandon airplanes.

Airline crashes killing the passengers happen with regular frequency? Not in the US. The last fatal crash of a US-registered, scheduled passenger flight was Colgan Air Flight 3407, on Feb. 12, 2009, which killed 50.

That is due, in large part if not entirely, to extensive regulation. The exact opposite of what AVs being tested on public roads are subjected to.

RE: Self Driving Uber Fatality - Thread II

(OP)
Spartan5,

Regulations about aircraft were written in response to accidents. Nobody anticipated any of this stuff. I would say that unless the robots are shown to be more dangerous than an average driver, or perhaps a 15th-percentile driver, they should be allowed to continue. One of the problems with American and Canadian roads is that new communities are built around cars. If you take someone's driving license away, they cannot travel or work or do anything else. The robots are an opportunity to take really bad drivers off the road.

--
JHG

RE: Self Driving Uber Fatality - Thread II

Quote (LionelHutz)

OK, to put it bluntly, the 3 incidents causing death (driving under a truck trailer, into the end of a road barrier, and into a person) are NOT acceptable to me. They illustrate the level of improvement still required.

These types of accidents will never stop happening.

They won't. Period. It's impossible.

3 accidents isn't the point. 3 accidents against the number of successful detection/avoidance events (likely, at this point, to number in the hundreds of thousands at least, across all companies testing AV tech) is the actual metric that matters.

We don't know the value of that metric.

RE: Self Driving Uber Fatality - Thread II

(OP)
IRstuff,

Robot cars will always have to deal with unpredictable (i.e., non-robot) elements along the right of way. My guess is that the very last people who will be pried out of their cars will be those psychopaths you mention. I drive differently on the highway when I notice someone four feet behind my bumper. I anticipate crazy lane changes. The robots will have to cope with this too, and the serious psychopaths probably will learn to game the robots' behaviour.

Is there a safe way to program a robot to operate in Asshole mode?

--
JHG

RE: Self Driving Uber Fatality - Thread II

drawoh,

Are you claiming there is/was no proactive regulation of aircraft/airlines? That's quite a stretch.

I think there is a place for limited, controlled testing of these vehicles. But both Uber and Tesla have shown themselves to be negligent, providing little (Tesla) to no (Uber) basic monitoring of driver attention in what are only intended to be supplemental assistance level automation systems.

And to touch on your last point, to make a generalization, the sorts of people who are having their licenses taken away are not the sort who can afford to run out and buy the latest and greatest robotic car.

RE: Self Driving Uber Fatality - Thread II

Quote (Spartan5)

Airline crashes killing the passengers happen with regular frequency? Not in the US. The last US-registered, scheduled passenger airline to crash was Colgan Air Flight 3407, which crashed on Feb. 12, 2009, killing 50.

I didn't say airline.

Per the FAA, in 2017 there were 209 incidents and 347 fatalities.

The statistics aren't the point.

The point is, if someone's only acceptable criterion for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time, that person has a wholly unrealistic point of view with little connection to reality.

RE: Self Driving Uber Fatality - Thread II

Quote (Spartan5)

in what are only intended to be supplemental assistance level automation systems.

This is inaccurate. Uber's intent is level 5 automation of all vehicle operations.

Quote (Spartan5)

the sorts of people who are having their licenses taken away are not the sort who can afford to run out and buy the latest and greatest robotic car.

This is also inaccurate, and pretty insulting too.

There's a lot of lawyer/doctor types out there with a couple DUIs, no driver's license as a result, and enough money in the bank to buy a fully automated Ferrari if they released one tomorrow.

RE: Self Driving Uber Fatality - Thread II

Quote (jgKRI)

I didn't say airline.

Per the FAA, in 2017 there were 209 incidents and 347 fatalities.

The statistics aren't the point.
You said "still buy tickets", which implies commercial airlines, for which there has not been a fatal crash in the US in the last 10 years. Hardly "relative frequency."

Quote (jgKRI)

The point is, if someone's only acceptable criterion for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time, that person has a wholly unrealistic point of view with little connection to reality.

This, my friend, is the textbook argument of a straw man. Is anyone in this thread making that claim? Then why are you arguing against it?

Quote (jgKRI)

This is inaccurate. Uber's intent is level 5 automation of all vehicle operations.
Their intention is irrelevant as it pertains to what they are actually operating on public roads. Of what use is a backup driver if there isn't even a basic system in place to ensure they are paying attention? What they are operating is Level 2 at best. They are negligent for not having simple safeguards in place.

Quote (jgKRI)

There's a lot of lawyer/doctor types out there with a couple DUIs, no driver's license as a result, and enough money in the bank to buy a fully automated Ferrari if they released one tomorrow.

Come ride the bus around the suburbs with me. What do you think the relative percentage of passengers is who are lawyers with a couple of DUIs and can't drive anymore?

I can tell you're not engaged in rational discourse anymore.

RE: Self Driving Uber Fatality - Thread II

Spartan5 - given that it took a fatal accident to get the FAA to issue a requirement to inspect fan blades more than a year after the first incident, I tend to agree that aircraft regulation is almost entirely reactive. In large part it is understandable, simply because of the nearly infinite number of things that can go wrong.

For example, it was typical on commuter twin turboprops to keep the far engine running while passengers were loading, until a little girl lost hold of a stuffed animal and it blew through the couple-foot gap between the fuselage and the pavement. The little girl ducked under to retrieve it before anyone could stop her, and now, AFAIK, it's no longer allowed. The NTSB report indicated the change in procedure was at the airline level, not the FAA level, but there may be other guidance. The report indicates it wasn't a fatality, so maybe that's why there's no rule.

RE: Self Driving Uber Fatality - Thread II

Quote (Spartan5)

You said "still buy tickets" which implies commercial airlines; for which there has not been a fatal crash in the US in the last 10 years. Hardly "relative frequency."

If you want to nitpick, you're not helping yourself. Per the data set for 2016 (the latest year available from the NTSB), there were 39 accidents that year, resulting in 27 fatalities, involving aircraft operating under 14 CFR 135. Approximately 3 crashes and 2 fatalities a month is pretty regular. Those people bought tickets.

You're missing the entire point. Being condescending isn't helping your argument, either.

If you think rich people are better drivers, I guess you're welcome to go on thinking that. It's a pretty weird thing to say but it isn't really germane to this conversation.

Quote (jgKRI)

The point is, if someone's only acceptable criterion for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time, that person has a wholly unrealistic point of view with little connection to reality.

This, my friend, is not a straw man; it's actually what's called a reductive argument, directed at positions taken by other people in this thread. Google it.

RE: Self Driving Uber Fatality - Thread II

3DDave,
I agree that accidents lead to regulation. It would be irresponsible for that not to be the case.

But that argument has no bearing on the fact that there is proactive regulation as well intended to prevent accidents from occurring in the first place.

Regardless; if accidents are to be the driver of regulation, where is the requirement that all AVs undergoing testing on public roads have active systems for aggressively monitoring backup driver attention (see the Cadillac "Super Cruise" system for how this could be implemented). That's not a high bar in these vehicles that are already extensively outfitted with sensors and computers.

RE: Self Driving Uber Fatality - Thread II


Before I google "redactive argument", please quote me the people in this thread who have taken the position that "for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time".

Maybe I missed it, friend.

RE: Self Driving Uber Fatality - Thread II

Quote (Spartan5)

Before I google "redactive argument", please quote me the people in this thread who have taken the position that "for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time".

A reductive argument takes a position that rests on a fallacy, one which may or may not appear logical, to its logical conclusion in order to highlight the underlying fallacy.

So... No one is stating that explicitly, but it's the logical conclusion of a position taken, based on the fallacy that zero accidents is an attainable goal.

RE: Self Driving Uber Fatality - Thread II

Quote (jgKRI)

So... No one is stating that explicitly, but it's the logical conclusion of a position taken, based on the fallacy that zero accidents is an attainable goal.

Ok. Then support that with quotes again. Who has taken the position that "zero accidents" is the attainable goal?

You appear to be the only person to be bandying "zero accidents" about; and then arguing against it. That's the straw man.

RE: Self Driving Uber Fatality - Thread II

*sigh*

Zero accidents is the reductive portion of the argument.

Maybe just google it and come back after some light reading.

RE: Self Driving Uber Fatality - Thread II

(OP)

Quote (Spartan5)


Are you claiming there is/was no proactive regulation of aircraft/airlines? That's quite a stretch.

...

And to touch on your last point, to make a generalization, the sorts of people who are having their licenses taken away are not the sort who can afford to run out and buy the latest and greatest robotic car.

Igor Sikorsky built an airliner in Russia just prior to WWI. One fine day about a century ago, someone offered to transport someone else from "here" to "there" for some sort of fee. The pilot almost certainly was licensed. The aircraft almost certainly was fabric-covered and had one engine. The rules came in when these things started crashing.

If people are allowed to own robot cars and are not allowed to drive, the psychopath drivers will be replaced by people who want to re-program the robots. There will be accidents. As noted above, I think robot cars will be a service, not a possession.

--
JHG

RE: Self Driving Uber Fatality - Thread II

I think that's just trollmanship; cars, while substantially more reliable in recent years, still have mechanical and software failures. My two hybrids have idiosyncratic startup behaviors, such that if you attempt to put the car into DRIVE before it's ready, it can't gracefully recover without turning off and restarting.

The notion of zero failure is ludicrous, as we, as a society, have "acceptable" risks for everything we do, including our cars, planes, buses, etc. The people that died on a bus on the way to Las Vegas took what they thought was an acceptable risk. Obviously, after the fact, the survivors and family have a different perspective. Nevertheless, probably all of them would get on a similar bus for a similar trip in the future.

Anything humans touch or build automatically incurs a certain level of risk of failure, and in some cases, such as the Colombian bridge disaster, the risk was both tangible and realized, and two engineering analyses point to a massive design failure. Toyota had a failure of their automobile ECUs that resulted in accelerations that couldn't be turned off or stopped. The electronics industry, as a whole, gave up complete testability, even at the basic "stuck at" logic levels, because there were so many hidden nodes that the test times required to access them all would result in years of testing.

Software testing is worse, in some ways, because there has not yet been a systematic way of testing, even at the module level. Intent and specification often cannot be rigorously verified.

What we do need to do is to determine what the acceptable level of risk is and move on. Certainly, those who are actually working on AV software need to study each and every one of the AV accidents to determine how to prevent them from happening again. That's been the model in the airline industry for decades, going back to the DC-10 engine failures that were traced to a less than desirable method of installing engines that American Airlines once used.

While there have not been many crashes in commercial aviation, there still have been deaths, most recently on a 737, where a fan blade broke loose, tore through the surrounding containment, bounced about 4 times along the wing and fuselage, and took out a window, resulting in the death of a passenger. That incident, which should have been an unthinkable possibility, is now a realized risk, and there's going to be a bunch of engineers trying to quantify the likelihood of it happening again. Nevertheless, 737s are still flying around at the moment, albeit subject to more detailed inspections for indications of similar and imminent fatigue failures. We've accepted this sort of thing as an acceptable risk, even with the human element in the entire maintenance and inspection process.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread II

Quote (jgKRI)

*sigh*

Zero accidents is the redactive portion of the argument.

Maybe just google it and come back after some light reading.

I accept your surrender.

I googled "redactive argument", by the way. Nothing comes up wink

RE: Self Driving Uber Fatality - Thread II

Autocorrect is at it again.

Surrender? You might be taking this a little personally.

RE: Self Driving Uber Fatality - Thread II

Spartan5: "That is due, in large part if not entirely, to extensive regulation. The exact opposite of what AVs being tested on public roads are subjected to."

Exactly. Thank you for stating the position I was attempting to make, much better than I did.

RE: Self Driving Uber Fatality - Thread II

"3 accidents against the number of successful detection/avoidance events (likely, at this point, to number in the hundreds of thousands at least across all companies testing AV tech)is the actual metric that matters.

We don't know the value of that metric."

No, we don't. There has been a conspicuous lack of transparency when it comes to how AVs have performed in real-life situations. In the case of Uber's experiment, we do know that the human "backup driver" (read "passenger") had to override the computer every mile, on average, to correct a critical incident. If you were riding with a human driver and you had to take the wheel every mile, would you ride with that person again? Uber's system in particular is obviously not ready to be on the streets yet.
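For a sense of what a once-per-mile critical-intervention rate means at scale, a trivial sketch (Python; the fleet size and daily mileage are hypothetical assumptions, and only the once-per-mile figure comes from the reporting cited above):

# Scaling up a once-per-mile critical-intervention rate.
interventions_per_mile = 1.0   # figure cited above
fleet_size = 100               # hypothetical
miles_per_car_per_day = 100    # hypothetical

daily = interventions_per_mile * fleet_size * miles_per_car_per_day
print(f"Expected critical interventions per day, fleet-wide: {daily:,.0f}")

Ten thousand near-misses a day, each one depending on a bored human catching it in time, is a very different risk picture from a system that needs correction once in thousands of miles.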

RE: Self Driving Uber Fatality - Thread II

Quote (HotRod10)

Uber's system in particular is obviously not ready to be on the streets yet.

We're back where we started.

Uber can't improve the system without it being on the streets. It's a chicken/egg problem. Or a catch-22. Or however you want to phrase it.

RE: Self Driving Uber Fatality - Thread II

"Uber can't improve the system without it being on the streets."

Well then, they should abandon the project and leave the AV development to those companies who are willing to go to the effort and expense of thorough testing before introducing a potentially lethal machine into the public arena.

RE: Self Driving Uber Fatality - Thread II

Uber can't do it, and no one else can either. It's integral to further development.

Other companies have different controls in place, but all of them are going to have accidents.

Still a catch-22.

RE: Self Driving Uber Fatality - Thread II

Time for someone to open Thread III.

Dik

RE: Self Driving Uber Fatality - Thread II

There are other ways to reduce road rage than self-driving cars. I believe the whole concept of 'traffic calming' is one of the things causing road rage. Stopping at every light leads people to conclude that running many of the lights gets them someplace quicker.
And add to that the sheer number of cars on the road at certain times.

A possible solution is for companies to open or start at different times; say 7:15 instead of 7:00, or 7:45 instead of 8:00.

But self-driving cars won't fix the number of cars on the road; they are likely to increase it.

Self-driving cars also won't fix some road rage; they may make it worse, as some people will choose to drive themselves so they can travel faster (they are always late). In fact, self-driving cars are slower than cars driven by many other drivers.

For self-driving cars to be safer, it may take a redesign of many things, including the locations of bus stops, the colors trucks are allowed to be, jumbled lane markings, etc.

And if the truth be told, the cost of mass transit is a major issue in the number of cars on the road, as are dirty conditions, rude people (lack of respect), and the number of people all trying to get someplace at the same time.
