
Self Driving Uber Fatality - Thread I

(OP)
San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different eco-system from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG

RE: Self Driving Uber Fatality - Thread I

Well, somebody had to be first :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand

RE: Self Driving Uber Fatality - Thread I

"Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents?"

Accident rates per mile driven are biased highly towards new drivers (the stats are complex), young drivers, and old drivers. Men are about 50% more likely to crash than women.




"Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us."

Perhaps a more promising path is to detect your cell phone.

"Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. "

All the main manufacturers for L4 cars have announced they are liable and will self insure, and that current legislation is adequate. I suspect your second sentence is wrong in detail (somebody somewhere will buy an L4 AV) but going in the right direction.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

Without regard to this crash, there are cases where people try to beat locomotives around crossing arms. There are cases where the driver has limited options. Unlike in most accidents, an autonomous vehicle will most likely have gathered all the critical information leading up to the crash.

RE: Self Driving Uber Fatality - Thread I

The graph that Greg posted doesn't even attempt to consider other factors. Within drivers of the same age group, there are some that are highly skilled and others who are scatterbrained and un-co-ordinated, and others that are risk-takers (some intentional, others just unaware of their surroundings). Some are courteous, others are not. There's probably still an order of magnitude between the best and worst even within the same age group.

We all know someone that we don't want to be in the same vehicle with ...

I suspect that automated drivers may be better than the worst drivers, possibly even at today's technology level, but are not even remotely close to the best drivers and may never be.

RE: Self Driving Uber Fatality - Thread I

I heard, but don't have backup evidence, that the person killed stepped onto a 75 km/h road from a median and not at a crosswalk. Self-driving or human-operated, it may not have mattered.

RE: Self Driving Uber Fatality - Thread I

That's what the initial reports are saying. It also appears that both the victim, and the person that Uber had hired to operate the vehicle, had backgrounds that were ... interesting.

RE: Self Driving Uber Fatality - Thread I

The police comment saying the video made it clear the accident would have been hard to avoid in any mode is rather false for the autonomous mode. Darkness should have little bearing on the car's ability to detect her. A self-driving car could conceivably drive in complete darkness, so what a video camera saw says little about what the car can detect and react to.

To me, it seems that these autonomous cars do fairly well when reacting to something that is going to collide with them or they are going to collide into. But, they seem to be missing some or all of the "this could become dangerous so I should proceed with caution" programming. You could call it lacking prudence maybe? From what the reports have said so far, it sounds like the car decided to proceed at full speed close enough to this woman that it could not react when she changed paths towards it. The Tesla death demonstrated this as one of the faults too, trying to shoot the space between the truck and trailer wheels at full speed.

The car should have slowed and/or moved further away from the woman, even if she was proceeding in a direction that did not indicate she was going to cross into the path of the car.

RE: Self Driving Uber Fatality - Thread I

Self-driving technology may never be better than the best drivers, but...

It won't get tired, bored, distracted, angry, aggressive, in a hurry, or any of the hundred other things ordinary drivers do. Probably already better than, say, 50% of drivers out there now.


RE: Self Driving Uber Fatality - Thread I

Yes, Brian, but posting an analysis that shows how good the good drivers are and how bad the bad drivers are doesn't really help, I think. Here's the best I could find. I don't know what the source data is, or even what is really being plotted; probably crashes of any type in the last year on the x axis, and the proportion of drivers on the y axis.



The Luck curve is if you just take the average crash rate (20.3%) and do the usual probability on it. Not a whole lot different to the underlying data.
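The "usual probability" on the average crash rate can be sketched in a few lines; the Poisson assumption here is my guess at what was done, not necessarily the source's method, and the 20.3% average is the figure quoted above.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k crashes in a year at average rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

lam = 0.203  # average crashes per driver per year, quoted above
luck_curve = [poisson_pmf(k, lam) for k in range(4)]
# Roughly 82% of drivers crash-free, ~17% with one crash, ~1.7% with two,
# from "luck" alone, i.e. before assuming any skill differences at all.
```

That most drivers have a clean year even under pure chance is exactly why a clean record by itself says little about skill.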

Obviously many meat drivers never have an (injury/fatality) accident in their half a million miles of driving in their lifetime. So accident free meat drivers are a hard target to beat, since they have a perfect record.

So I'd rather look at averages.

Cheers

Greg Locock



RE: Self Driving Uber Fatality - Thread I

https://search.proquest.com/openview/5809c2c3cb007...

also has some interesting stuff



I like this graph. What it is saying is that if you have had 0 or 1 crashes in the previous 6 years, there is a 5% chance of having a crash in the next year. If you've had 5 crashes in the previous 6 years, there's a 25% chance you'll have another crash in the next year.

Cheers

Greg Locock



RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

The police comment saying the video made it clear the accident would have been hard to avoid in any kind of mode is rather false for the autonomous mode. Darkness should have little bearing on the car's ability to have detected her. A self driving car could conceivably drive in complete darkness so what a video camera saw really means little on what the car can detect and be capable of reacting to.

I know the area in question and it's quite possible that she emerged from the median from behind brush or trees leaving little or no time for a driver (human or not) to react.

Link

RE: Self Driving Uber Fatality - Thread I

I'd ride a bicycle in a hailstorm before I'd ever get in one of those things. Can you imagine? Haven't paid your property taxes? The doors would lock and straight to City Hall you'd go. Say something politically incorrect inside the cab? It'd Mary Jo Kopechne you off the nearest bridge. No thanks.

RE: Self Driving Uber Fatality - Thread I

Further to Lionel's statement, I think the autonomous systems are lacking in situational awareness in general.

Hmmm. The lane to my right is ending (or is obstructed up ahead). I should allow for vehicles in that lane to merge with the lane that I'm in, by matching speed with them and aligning myself with a space between those vehicles so that the merge can be done without conflict.

Hmmm. I'm approaching a traffic signal. It just turned yellow. My rear view mirror is filled with dump truck. I'm going through this intersection even if it's a wee smidge into the red by the time I get there.

RE: Self Driving Uber Fatality - Thread I

Quote:

a median and not at a crosswalk

I've heard this defense several times and I really really don't like it. Yes if the person jumped out in front of the car from behind a bush that's one thing, but this suggestion that autonomous cars can't be expected to stop for pedestrians who aren't at cross-walks does the case for autonomous vehicles no favours.

RE: Self Driving Uber Fatality - Thread I

(OP)
Tomfh,

We do not understand the algorithms used with these autonomous cars. We anticipate that people will cross at crosswalks. We anticipate they will be less likely to cross elsewhere.

A big advantage of LiDAR is that it provides its own light source and ought to detect stuff even in complete darkness. I wonder just how rapidly LiDAR detects everything. I have over thirty years experience with LiDAR. It does a finite number of scans per second. Will it detect and make sense of an object moving somewhere other than along the anticipated track?
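As a rough sketch of the scan-rate question, with assumed round numbers (none of these are Uber's actual specs):

```python
# All inputs are assumptions for illustration, not measured data.
detection_range_m = 60.0   # assumed effective pedestrian-detection range
car_speed_ms = 17.0        # ~38 mph, roughly the speed reported in the press
frame_rate_hz = 10.0       # typical revisit rate for a spinning lidar
walk_speed_ms = 1.4        # ordinary walking pace

time_in_view_s = detection_range_m / car_speed_ms      # ~3.5 s to impact
scans_available = time_in_view_s * frame_rate_hz       # ~35 full scans
step_between_scans_m = walk_speed_ms / frame_rate_hz   # pedestrian moves 0.14 m/scan
```

Under these assumptions the finite scan rate is not the bottleneck: a walking pedestrian barely moves between scans, and there are tens of scans available in which to build a track.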

Dodging a LiDAR equipped vehicle is not the same thing as dodging a vehicle driven by a human.

I slow right down when I pass groups of pedestrians, especially children, when I drive down narrow lanes and in parking lots. How will a robot anticipate threats like this, and how well behaved will humans be when they are following this robot as it tries to interpret threats?

--
JHG

RE: Self Driving Uber Fatality - Thread I

Quote (drawoh)

I slow right down when I pass groups of pedestrians, especially children, when I drive down narrow lanes and in parking lots.

Yeah, you do, I do, but plenty of people don't. Don't get me wrong, the tech has got a ways to go. It'll get better, human drivers will not. My experience is, the cars get better and better, the drivers get worse and worse.


RE: Self Driving Uber Fatality - Thread I

For those who are statistically minded-
There are a certain number of pedestrians killed each year.
And various autonomous cars get in a certain number of miles each year.
Just based on the miles, how many pedestrians would expect to have been killed by autonomous cars?
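A hedged back-of-envelope on that question; all three inputs are rough orders of magnitude, not exact data:

```python
# Rough, publicly quoted US orders of magnitude; treat all three as guesses.
us_ped_deaths_per_year = 6000        # approximate recent US pedestrian deaths
us_vehicle_miles_per_year = 3.2e12   # approximate total US vehicle-miles
av_fleet_miles = 3e6                 # order-of-magnitude guess for AV test miles

deaths_per_mile = us_ped_deaths_per_year / us_vehicle_miles_per_year
expected_av_ped_deaths = deaths_per_mile * av_fleet_miles
# ~0.006 expected deaths at the average human rate over the AV fleet's miles.
```

On these guesses one actual death sits well above the average human rate, though a single event proves little statistically in either direction.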

By the way, how do we know this was an accident, anyway? Isn't that car "autonomous"? It could very well have been intentional.

RE: Self Driving Uber Fatality - Thread I

Given that link to the location, comparing video from the scene and the Google street view shows the car was on Mill Ave travelling north.

The news reports say she was walking and pushing the bike, and if that is true then she wasn't travelling particularly fast.

The right side of the SUV has the damage so the woman most likely came from the right side into the path of the SUV. If she came from the median on the left then she crossed almost 4 traffic lanes before the SUV hit her which certainly goes against any claims that she suddenly stepped in front of the SUV.

It also appears the accident occurred across from the second leg of the X in the median, just past the road drains in the right-side curb. Since the ground is rough and hilly past the sidewalk in that area, I find it hard to believe the woman was travelling perpendicular to the roadway or outside the sidewalk before entering the roadway. That location also has a sidewalk and a bike lane between any vegetation and the road, so any excuse saying she jumped out from behind a bush doesn't hold much water.

My expectation is that she was travelling southbound on the sidewalk or in the bike lane and then turned to her right to cross to that X path going across the median. Crossing Mill Ave completely and heading into the parking lots or below the overpass on the west side would be as likely a general direction of travel as any.

An alert and aware driver would know the path of someone coming southbound on that sidewalk might turn towards that path through the median.

How much foot and bike traffic is in that area? Would regular drivers of that section of roadway specifically watch for bikes or people on foot crossing at the ends of those X paths?

On another note, that X path "to nowhere" through the median appears to be one dumb road feature. It has no useful function except to dangerously tempt people to shortcut across Mill Ave.

RE: Self Driving Uber Fatality - Thread I

The statistical angle is the entire point of the graphs I found. The best human drivers have perfect records. The worst human drivers smash into things quite often. Whether AVs are worth having very much boils down to where, between those extremes, AVs fall on average. My guess is that, at say 1 pedestrian death in (WAG) 10 million miles, they are doing worse than the average driver. But according to the first graph I posted, even the best cohort of drivers would expect 4 crashes per million miles, or 40 crashes in 10 million miles.

I don't know what proportion of crashes result in pedestrian deaths, I don't know how much that 4 per million has changed since 1990 (quite a lot actually), but what I do see is that the numbers aren't immediately screaming that prototype AVs are just randomly mowing down people right, left and centre.

Cheers

Greg Locock



RE: Self Driving Uber Fatality - Thread I

I have read some interesting commentaries that basically stated the "AI" in these cars is doing a lot of learning by example and a huge amount of data processing. Together, this makes it impossible to log the details of the object processing and subsequent step by step decision making. In other words, it's difficult to figure out exactly why the car did something.


Greg - No, the cars are not displaying any signs of being particularly dangerous. But they should do much better than a human, especially in conditions that make driving more difficult. We'll likely never see the detailed report on the accident, but from what I have seen so far it appears this accident could have been avoided if the right precautions were taken as the car approached the woman.


From the video in the news reports, belongings were on the ground approximately above the "N Mill Ave" in this street view

RE: Self Driving Uber Fatality - Thread I

Wouldn't the more sophisticated control system be expected to be more responsible for avoiding the collision? For certain we don't have a handle on how the non-AI works.

RE: Self Driving Uber Fatality - Thread I

News: "...10PM... Pushing a bicycle laden with plastic shopping bags, a woman abruptly walked from a center median into a lane of traffic..."

Assume for a moment that the pedestrian in this case was visible, while still walking on the median.

A human driver might have been able to make some assumptions about those circumstances - seeing what is probably a homeless person, in the median, at 10PM, and therefore assume that they're perhaps somewhat less predictable than normal. So an attentive and cautious human driver might either slow down or perhaps even change lanes if possible. Mental alarm bells should be ringing, because of the context.

To help make this point crystal clear: an attentive human driver would certainly take extreme caution if they saw a toddler or small child wandering around in the center median, within a few steps of the lane, not firmly holding hands with a parent. In such an extreme example, one would probably even turn on the 4-way flashers, stop, rescue the child, call the police, etc. Would an autonomous vehicle have any such inkling of the increased risks? Do autonomous vehicles understand 'children', 'holding hands with a parent' and 'not trying to wriggle away', or 'the homeless' yet? It seems not.

It's going to be a very long time before autonomous vehicles have any common sense about the real world. And without such common sense, they'll inevitably get themselves into accidents of a different sort than would a cautious human driver.

At this point the proponents are forced to abandon any overhyped claims about an AI-driven "Accident-Free" Utopia. (Yes, such ridiculous claims have been made; perhaps from the sidelines and/or marketing departments.)

The more rational proponents can retreat to statistical comparisons, and claim autonomous superiority once the lines cross. While such a comparison is reasonable, it still leaves a vast legal and regulatory quagmire to be sorted out.

RE: Self Driving Uber Fatality - Thread I

(OP)
3DDave,

Your link explicitly describes the problem I was noting. The LiDAR's range and resolution are not sufficient for cars travelling at highway speeds. Think through what the LiDAR or video camera has to do. It has to recognize the object. It has to recognize that this is the same object it saw 100ms ago and that it has moved. If the object is moving steadily, the robot can conclude it will continue to move steadily. Can the robot detect erratic movement or even someone's head turning to indicate a sudden change of direction, possibly in front of the robot?
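The frame-to-frame step described above can be sketched as a toy nearest-neighbour association; this is an illustration of the idea only, not any vendor's actual pipeline, and the 2 m gate and 100 ms frame gap are assumptions.

```python
from math import hypot

def associate(prev, curr, gate=2.0):
    """Greedily match (x, y) detections between consecutive frames,
    rejecting any pairing farther apart than `gate` metres."""
    pairs, used = [], set()
    for p in prev:
        best, best_d = None, gate
        for j, c in enumerate(curr):
            d = hypot(c[0] - p[0], c[1] - p[1])
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((p, curr[best]))
    return pairs

def velocity(pair, dt=0.1):
    """Velocity vector implied by one matched pair across a 100 ms frame gap."""
    (x0, y0), (x1, y1) = pair
    return ((x1 - x0) / dt, (y1 - y0) / dt)
```

A detection at (10, 0) m that reappears at (10.14, 0.05) m one frame later implies about 1.4 m/s, walking pace; erratic movement would show up as frame-to-frame swings in this vector, which is exactly the signal a steady-motion assumption throws away.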

--
JHG

RE: Self Driving Uber Fatality - Thread I

Quote (JHG)

Personal automobiles spend about 95% (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

Autonomous Cars may indeed lead to much more Car Sharing, so these quite separate and distinct topics are related.

[To be clear, referring here to Car Sharing (alone, one by one) not Ride Sharing (i.e. car pool).]

It's worth noting that Car Sharing inherently increases total distance driven, road usage, traffic, energy (fuel) expended, and wear and tear. Applies to Taxicabs, Uber services, or any future autonomous fleets wandering around.

Because (A-to-B) + (C-to-D) < (A-to-B) + (B-to-C) + (C-to-D), where (B-to-C) is the 'extra' movement.

Nobody ever seems to think about that. Which is annoying considering how obvious it is. At best, there's some hand waving about future efficiencies somehow compensating for the extra distance.

The distance ratio could probably be determined by asking Taxi or Uber drivers about their total working mileage per year versus how much of that is 'paid' mileage. It would presumably vary by location. Hopefully it's more efficient than 50%, and it can't be 100%; so I'd guess it's about 75%.
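The ratio fits in one line; the 75% figure is the guess above, not data.

```python
def miles_driven_per_paid_mile(paid_fraction):
    """Total fleet miles per revenue mile, given the fraction of mileage
    that carries a passenger (the rest is empty repositioning)."""
    return 1.0 / paid_fraction

# At the guessed 75% paid mileage the fleet drives a third more miles than
# the trips it serves; at 50% it drives twice as many.
ratios = {f: miles_driven_per_paid_mile(f) for f in (0.5, 0.75, 1.0)}
```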

Yes, there are many obvious upsides of Car Sharing; but they're well known.

RE: Self Driving Uber Fatality - Thread I

Most drivers currently cannot accurately detect that. At least not so as to take evasive action. Instead they usually just hit the horn and expect the other person to cope. The benefit of AI cars will be that they behave uniformly.

RE: Self Driving Uber Fatality - Thread I

(OP)
VE1BLL,

I also did not account for heavy usage of automobiles at rush hour, when maybe half of them are on the road.

--
JHG

RE: Self Driving Uber Fatality - Thread I

As in the case of the Tesla accident, it's likely that the AI is simply not maintaining context information. In the Tesla case, the truck that was hit had to have been detectable prior to it turning across the road, but it's likely the Tesla had essentially "forgotten" that there was even a truck in the vicinity.

Likewise, it's likely the LIDAR on the Uber detected the victim well before the impact, but essentially forgot the detection once a new set of detections were acquired.

We used to have a guy that worked on tracking algorithms, and when queried about why the tracker had clearly ignored a previous detected target, he stated, "Oh, I only use data from the current frame for detections."

A human driver, having detected a pedestrian, might pay more attention than normal to ensure that they can avoid the consequences of the pedestrian doing something silly, which the victim did. If I see a pedestrian too close to the road edge, I will sometimes change lanes to be further away, just in case.
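The current-frame-only failure described above, and the fix of carrying tracks across frames, can be sketched in a toy form; gate distances and the coast limit are assumptions for illustration, not any real tracker's parameters.

```python
from math import hypot

class Track:
    """One persistent object hypothesis carried across sensor frames."""
    def __init__(self, pos):
        self.pos = pos
        self.misses = 0  # consecutive frames without a matching detection

def update_tracks(tracks, detections, gate=2.0, max_misses=3):
    """Match detections to existing tracks; coast unmatched tracks for up
    to max_misses frames instead of forgetting them immediately."""
    unmatched = list(detections)
    for t in tracks:
        best = None
        for d in unmatched:
            if hypot(d[0] - t.pos[0], d[1] - t.pos[1]) < gate:
                best = d
                break
        if best is not None:
            t.pos, t.misses = best, 0
            unmatched.remove(best)
        else:
            t.misses += 1
    survivors = [t for t in tracks if t.misses <= max_misses]
    return survivors + [Track(d) for d in unmatched]
```

The "only use data from the current frame" approach is the degenerate case max_misses = 0: any object that drops out of one frame, behind glare or another vehicle, ceases to exist for the system.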

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote (GregLocock)

Accident rates per mile driven are biased highly towards new drivers (the stats are complex), young drivers, and old drivers
.

Does this account for young drivers doing most of the driving?

Dik

RE: Self Driving Uber Fatality - Thread I

(OP)
IRstuff,

If the robot cannot maintain context information, then it cannot determine the direction and velocity of whatever object it sees. I noticed this this morning with my car GPS. It does not know what direction I am pointed in until I start moving.

--
JHG

RE: Self Driving Uber Fatality - Thread I

"...the Tesla had essentially 'forgotten' [about the] ...truck..."

Based on the reports, I thought that the Tesla had failed to even see the truck due to a lack of contrast against the sky. It had been mentioned that the Tesla had utterly failed to brake before, during, and even after the crash. It was a fairly comprehensive failure, seeming to do precisely nothing correctly.

Perhaps the findings have changed since I last saw it.

Old curse: "May you live during interesting times."

RE: Self Driving Uber Fatality - Thread I

I have a rental car for the week. It has lane-departure warning that's supposed to beep at you when you go out of your lane. The display also indicates whether it is currently recognizing the lane.

It only works with painted lane markings that are clearly visible.

It doesn't recognize guardrails, unmarked roadways, painted lines that are worn down or obscured by dust or dirt or damaged/repaired pavement. If there is a visible transition in the pavement that is separate from the painted lane marking (e.g. in construction zones where the temporary lane position doesn't correspond with what it's meant to be by design) it sometimes sees the wrong one and false-triggers. It gets confused on curves. It gets confused in roundabouts. If I intentionally shift to the side of a clearly marked lane for a rational purpose - e.g. to be further away from a vehicle that appears errant or is throwing off wind-buffeting, or to smooth out an errant lane marking - it beeps because it doesn't realize that what I'm doing has a purpose. I haven't tried it in rain at night when the shine from the rain makes lane markings hard to distinguish.

In other words, it works only in situations where it isn't needed, and it hardly ever works in situations where it might serve some purpose.

I've had other rental cars that have blind-spot warning systems (this one doesn't have that) and they don't detect something coming up from behind in the adjacent lane at a significant speed difference. They don't look far enough behind. IIRC the Germans criticised Tesla's autopilot for that ... and in a situation where you're doing 130 km/h and the car coming up from behind is legally doing 230 km/h, that's pretty important.

RE: Self Driving Uber Fatality - Thread I

"Based on the reports, I thought that the Tesla had failed to even see the truck due to a lack of contrast against the sky. It had been mentioned that the Tesla had utterly failed to brake before, during, and even after the crash. It was a fairly comprehensive failure, seeming to do precisely nothing correctly."

The claim was that the sides of the trailer were white, and confused the image processor. But, in order for the side of the trailer to get that far into the field of view of the camera, the tractor had to have passed through the field of view, so the image processor must have "seen" the tractor, but didn't worry that it couldn't see the trailer.


RE: Self Driving Uber Fatality - Thread I

The Tesla could 'see' that the road was continuous under the trailer, just the same as it can 'see' the road is continuous under overpasses and overhead signs. It did not matter that it could see the tractor leave the path; not all tractors have trailers.

Look at 11foot8.com. Which just nailed a 'secret military' truck. Even humans stink at this and that's with warning signs, flashing lights, and every other option available to deflect the stupid from hitting a bridge that is older than most people. And people wonder why military equipment costs so much.

RE: Self Driving Uber Fatality - Thread I

The object of issue, the tractor, had passed out of the collision zone. If the car doesn't recognize the invisible trailer that no-longer-dangerous tractor is hauling, we're still back to the fact that the software did not recognize a secondary danger. The tractor is no different than another car... so do we write the algorithm to more fully recognize a tractor and expect a trailer might be attached, no matter the color?

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Self Driving Uber Fatality - Thread I

"...must have 'seen' the tractor, but didn't worry that it couldn't see the trailer."

A.I. sometimes means Artificial Imbecile.

We'd all be safer if the decision makers kept this in mind.

RE: Self Driving Uber Fatality - Thread I

For the Tesla accident, forget for a moment that it failed to identify the trailer. What I consider the second and worse failure of the Tesla goes like this. It should have detected the wheels of the trailer moving towards its path. So, a truck (or big car) had just crossed its path and another object was moving towards its path. I have no idea what it thought the trailer wheels were, but if a vehicle had just crossed your path and there was another "thing" moving in that same direction, then wouldn't you proceed with caution instead of trying to blast through the gap at full speed?

RE: Self Driving Uber Fatality - Thread I

The Google street view of the Tesla accident location shows it was a quite flat stretch of road. The wheels would not have been hidden as the car was approaching.

RE: Self Driving Uber Fatality - Thread I

The police released the video. Two things stand out. One is that all reflectors look to have been removed from the bicycle, including the critical ones on the wheels. The other is that the 'projector' head lamps produce such a sharp cut-off that no light is above the cut-off, meaning the contrast is extreme. Frankly, I would place a lot of blame on the head lamp design, which puts far too much light close up and produces so much contrast that anything outside the beam is practically invisible.

I think that the woman would have been more visible if the headlamps were off and the amplification of the image higher.

Not helping is what looks like a black jacket and low levels of lighting from local streetlamps. Also it's not helping that neither the driver nor the pedestrian seems engaged with the situation.

I don't know if an alert driver would have done much better. Even though I know where the victim will be, until less than 1/2 second from impact I can't make out any evidence of her. There was no lighting behind her that was being eclipsed as she went across the road; not even from retro-reflective striping paint on the road.

I expect the NTSB will be interested in this and I look forward to a report as to why the Lidar and radar sensors specifically failed to detect her. There should have been plenty of time for several cycles. It wouldn't even require target path prediction.

RE: Self Driving Uber Fatality - Thread I

(OP)
Slate magazine has linked the video (warning, warning, etc.). In the video, she was not visible until the last split second. I assume the headlights were dimmed, but I would think there should have been more forward visibility than what we see.

It looks like she crossed between areas lit by streetlights. I wonder how well cameras respond to changes in light level. Our (human) eyes cope far better with changing light levels than digital cameras do. This may be part of learning to walk in the vicinity of robot vehicles.

--
JHG

RE: Self Driving Uber Fatality - Thread I

Quote (SnTMan)

My experience is, the cars get better and better, the drivers get worse and worse.
A Truism if there ever was one!!

"Whatever can go wrong, will go wrong" - Murphy's Law

RE: Self Driving Uber Fatality - Thread I

That's a very bad video for UBER. When driving at night you can see objects in adjacent lanes ahead. Combined with the LIDAR there is no excuse for the car's object avoidance to have failed so badly. It didn't even attempt to slow down.

RE: Self Driving Uber Fatality - Thread I

I agree it's bad. They're not showing the LIDAR data, which should have detected the person well before they show up in the video.

The LIDAR should be able to detect obstacles out to well past 200 ft, so it had to have detected the person at least 2.5 seconds before the person was visible in the video. This is a major fubar in the systems engineering.

The Uber should also have a radar, which should also have detected the person.

This is again a context issue, since even a non-moving obstacle in what should have been an unoccupied lane is a major deviation from normality. Moreover, given the range and the obvious motion of the person, the sensors should have been able to easily determine that there was going to be an intersection in trajectories.
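The range and trajectory claims above check out on a back-of-envelope; the speeds and lane offset below are assumptions for illustration, not accident data.

```python
# Back-of-envelope check of the 200 ft / 2.5 s claim; inputs are assumed.
lidar_range_ft = 200.0
car_speed_mph = 38.0                          # speed reported in the press
car_speed_fps = car_speed_mph * 5280 / 3600   # ~55.7 ft/s

time_in_lidar_view_s = lidar_range_ft / car_speed_fps  # ~3.6 s of warning

# A pedestrian crossing at ~4.6 ft/s from one lane (~12 ft) away reaches the
# car's lane in ~2.6 s, inside the lidar's viewing window, so the trajectory
# intersection was computable well before impact.
ped_lateral_speed_fps = 4.6
lane_offset_ft = 12.0
time_to_enter_lane_s = lane_offset_ft / ped_lateral_speed_fps
```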

The Uber supposedly has a "camera array," and at least some of them should have been configured for low-light.


RE: Self Driving Uber Fatality - Thread I

(OP)
IRstuff,

The LiDAR does not require ambient light. Then again, there is a huge conflict between resolution and scan rate. I think Elon Musk may be right about LiDAR.

--
JHG

RE: Self Driving Uber Fatality - Thread I

Didn't say it did. The Uber has at least one camera as evidenced by the video, but there are also other cameras angled to the side. One would think there would also be low-light cameras as well.

Below is something like what the lidar should have seen. I think Musk is crazy. Humans get into accidents precisely because they don't have enough bandwidth and detection capability; lidar or radar could trivially provide additional sensing and processing. Hypothetically, the video is what a human driver might have been able to see, but lidar would have, and should have, detected the pedestrian and warned that something was in the adjacent lane coming up. Had it been working correctly, it should also have determined that the anomaly was moving toward the car's own lane.




TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

We were flying a helicopter obstacle avoidance lidar in 1994 with only about 100 kHz pulse rate. We weren't trying to detect collisions against movers, though, but today's processors are more than capable of doing so. And, today, at least 10x higher pulse rate should be possible, particularly for only 1/4 of the range that we were achieving.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

She crossed from the path in the center median, meaning she covered about 3.5 lane widths of open road before getting hit. There was nothing abrupt about her movement, despite what the news reports claim.

That video makes the incident very damning for Uber. The car totally failed to detect an object that was clearly visible to its sensors and was crossing into its path for some time before the impact. I'd guess she would have been on the road and easily detectable for 4-5 seconds before the impact, which is lots of time for the car to react.

Feet appear in the video maybe 1.5 seconds before the collision. At 38 mph, that is about 85 feet. I've never been in a Volvo with HID projector lights, but I've been in other cars with them, and the low beams project enough light to make objects in front of the car visible to a distance much greater than 85'. So I would say the camera taking that video suffers from a contrast limitation that shortens the visible distance compared to what a human could see.
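A quick check of that arithmetic, with a rough stopping-distance comparison thrown in (the 0.7 g deceleration is an assumed dry-pavement figure, not anything from the accident report):

```python
MPH_TO_FTPS = 5280 / 3600          # 1 mph is about 1.467 ft/s
G_FTPS2 = 32.2                     # standard gravity in ft/s^2

def distance_ft(speed_mph: float, time_s: float) -> float:
    """Distance covered at constant speed."""
    return speed_mph * MPH_TO_FTPS * time_s

def braking_distance_ft(speed_mph: float, decel_g: float = 0.7) -> float:
    """Idealized stopping distance v^2 / (2a), ignoring reaction time."""
    v = speed_mph * MPH_TO_FTPS
    return v ** 2 / (2 * decel_g * G_FTPS2)

covered = distance_ft(38, 1.5)     # ~84 ft, matching the 85 ft estimate
stop = braking_distance_ft(38)     # ~69 ft with immediate hard braking
```

Under these idealized assumptions, even from the 85 ft where the camera first shows her, an immediate hard brake could have shed essentially all of the speed.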

I'd bet a driver who was paying attention and is capable of steering avoidance instead of just freaking out and slamming on the brakes could have avoided her. The autonomous driving system should have easily avoided her too.

Blaming the sensors is a non-starter. If that accident was caused by sensor limitations, then better sensing must be developed for these cars.

Musk's argument hinges on the fact that it's an expensive sensor, so the system will be much cheaper without it.

RE: Self Driving Uber Fatality - Thread I

Not blaming the sensors; blaming the processing of the data.

Just because it could be cheaper doesn't mean that it's the right answer, particularly if it winds up being no better than a human with terrible night vision.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

@dik, "Does this account for young drivers doing most of the driving?"

The first graph is per million miles so yes I think that it does account for the higher annual mileage of younger drivers.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

One of the factors in driving statistics is that they become heavily skewed by highway miles. There are fewer chances for interactions between pedestrians and vehicles on interstates and rural highways. Even with the following breakdown, a critical piece of missing information is the split between primary and secondary opportunities for collision. For example, this collision was a primary opportunity, where a pedestrian has a clear view, probably can hear the oncoming vehicle, and positions themselves in the lane. Secondary opportunities are when pedestrians are on the edge of the lane, or on the sidewalk in a location that would require a vehicle to leave its position in the lane to strike them.

My supposition is that most miles are driven with either no pedestrians present at all or only secondary opportunities. One might argue about places like downtown Manhattan, where at some times and places pedestrians would be trapped if they never stepped into traffic lanes, and they are cognizant that drivers are less likely to yield, but that is more of a parking-lot situation than a driving one.

In the following, it seems to me that the largest factor is pedestrian behavior. Buses are probably very high because they both attract and disperse pedestrians: there are lots of people on foot nearby, and buses operate in the lane alongside sidewalks. Heavy trucks are probably low because they don't operate near pedestrians (fewer people near warehouses, for example) and pedestrians can easily identify them.

(reformatted from http://injuryprevention.bmj.com/content/11/4/232 , says is based on 2002 US DOT statistics)
(Edit: RR = Relative Rate)

Passenger cars and light trucks (vans, pickups, and sport utility vehicles) accounted for 46.1% and 39.1%, respectively, of the 4875 deaths, with the remainder split among motorcycles, buses, and heavy trucks.

Compared with cars, the RR of killing a pedestrian per vehicle mile was
7.97 (95% CI 6.33 to 10.04) for buses;
1.93 (95% CI 1.30 to 2.86) for motorcycles;
1.45 (95% CI 1.37 to 1.55) for light trucks, and
0.96 (95% CI 0.79 to 1.18) for heavy trucks.

Compared with cars,
buses were 11.85 times (95% CI 6.07 to 23.12) and
motorcycles were 3.77 times (95% CI 1.40 to 10.20)
more likely per mile to kill children 0–14 years old.

Buses were 16.70 times (95% CI 7.30 to 38.19) more likely to kill adults age 85 or older than were cars.

The risk of killing a pedestrian per vehicle mile traveled in an urban area was 1.57 times (95% CI 1.47 to 1.67) the risk in a rural area.

RE: Self Driving Uber Fatality - Thread I

It's taken a long time to get to something that even remotely resembles a true AI, and it's still got a long way to go. AIs aren't going to get drunk, and aren't going to fall asleep; that latter feature would have been a godsend back when I was driving home from college after pulling a week of all-nighters. Micronaps at 90 mph were scary. But, clearly, the Tesla and Uber incidents show that the AIs still have a long way to go before they're as robust as I think they should be. Neither of those two accidents seems to be a fault of the sensor technology; they seem to be faults of the systems engineering or the programming, as both look to be well within the possible use cases.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

(OP)
IRstuff,

Take your digital camera out and try shooting in limited light. Film emulsion, CCDs, and CMOSs do not have the bandwidth of a human eye. That woman was invisible to the camera until the last split second. She would not have been invisible to the driver if he had kept his head up.

--
JHG

RE: Self Driving Uber Fatality - Thread I

IRstuff, and they don't take their eyes off the road to text either.
Just this morning, I was on the same road where this accident happened- the car in front of me drifted over the lane divider line and stayed there for close to a quarter mile. Too busy texting to even realize that they were taking up 2 lanes.

I have been driving in the area they have been testing these vehicles for many months. I was skeptical when they started doing this, but I have never seen one make what I would consider a dangerous maneuver.

The more we hear about the Tempe accident, the more it sounds like it was probably the pedestrian's fault. There are many large bushes along this stretch of road, so it seems likely that the sensors didn't even know the pedestrian was there until it was too late. One of the scariest moments I had while driving (only a couple miles from this site) was when a mountain biker darted out from behind some bushes while I was driving the speed limit. He came to a quick stop and almost went over his handlebars just a few feet in front of me. There was no warning that he was approaching (he was not on a trail), and I would have had no chance of stopping if he had continued into traffic.

RE: Self Driving Uber Fatality - Thread I

"Take your digital camera out and try shooting in limited light. Film emulsion, CCDs, and CMOSs do not have the bandwidth of a human eye. That woman was invisible to the camera until the last split second. She would not have been invisible to the driver if he had kept his head up."

btw, the safety driver was a woman. But this is not your, or my, digital camera; any intensified camera with the IR cut filter removed can see in starlight alone. Moreover, even the tiny bit of light from the headlights would be more than enough for even a moderately intensified camera, or even an HDR camera. It's unimaginable that the engineers wouldn't have at least run HDR, which is available even on cell phones, specifically for this type of use case. HDR, when properly implemented, substantially outperforms the instantaneous dynamic range of the human eyeball. The headlights could clearly illuminate adjacent lanes out to at least 100 ft, so HDR should have picked up the pedestrian in the video imagery.

And, since this is NOT a Tesla, the lidar, as was pointed out earlier, doesn't need any ambient light. If Uber had depended on using just that video for collision avoidance, they should have never gotten authorization for full autonomous driving, and they should be rightly sued for every penny a good lawyer can get from them.
I'm not even sure what you mean by bandwidth; the human eyeball has about a 150-millisecond averaging time, which is why it's typically happy with 24-fps imagery, while even a cheap Epson camera can do 200 frames a second. The pedestrian was WALKING a bicycle across the road, not running or riding, so bandwidth isn't even that relevant.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

"The more we hear about the Tempe accident, the more it sounds like it was probably the pedestrian's fault. There are many large bushes along this stretch of road, it seems likely that the sensors didn't even know the pedestrian was there until it was too late. "

The pedestrian is clearly not looking, so to that extent they could have avoided the incident, but, again, the pedestrian wasn't moving fast, and the only issue, aside from not paying attention, is that they crossed at the worst possible spot in that section of the road, right past where the street light actually lights the pavement. Moreover, they were in the left-hand lane, not hiding behind bushes. The car failed miserably in a number of ways in a foreseeable use case. The pedestrian's feet are clearly visible within the car's lane at about 50 ft from the point of impact. The car should have been braking or swerving well before the impact. Had the car reacted at all in the half-second before the impact, there might be a reasonable argument, but it didn't react, even when the pedestrian was fully illuminated by the headlights.

btw, video such as what is posted on the web doesn't come close to displaying the true dynamic range of even the cheapest camera. There's almost nothing on the market that doesn't digitize at least 12 bits per color, while most video formats are 8 bits per color. Not to mention that display video uses AGC, which suppresses detail that might otherwise be clearly visible. It's certainly in Uber's financial interest NOT to show what the cameras probably did see.
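A toy numpy illustration of the 12-bit-capture vs. 8-bit-display point (synthetic data, not the actual footage): a dim subject only a couple of stops above a dark background survives a contrast stretch over the occupied range, but vanishes in a naive bit-depth reduction.

```python
import numpy as np

# Synthetic 12-bit frame: dark road with a dim "pedestrian" patch that is
# about twice the background brightness but still far below full scale.
rng = np.random.default_rng(0)
frame12 = rng.integers(8, 16, size=(64, 64)).astype(np.uint16)  # dark road
frame12[20:40, 28:36] += 20   # pedestrian patch, values ~28-35 of 4095

# Naive 8-bit conversion (drop the 4 low bits) crushes everything to black.
naive8 = (frame12 >> 4).astype(np.uint8)

# A simple contrast stretch over the occupied range recovers the subject.
lo, hi = frame12.min(), frame12.max()
stretched8 = ((frame12 - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```

In the naive conversion the whole frame lands at 0-2 counts (effectively black), while the stretched version puts the subject near full brightness; the same data, two very different displays.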

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Clearly a pedestrian bridge over Mill at this location is needed.

RE: Self Driving Uber Fatality - Thread I

Quote (drawoh)

IRstuff,

Take your digital camera out and try shooting in limited light. Film emulsion, CCDs, and CMOSs do not have the bandwidth of a human eye. That woman was invisible to the camera until the last split second. She would not have been invisible to the driver if he had kept his head up.

That vehicle had LIDAR. It should have seen the bike and person regardless. But anything reflective like those shoes or bicycle reflectors would have been screaming at that sensor.

It most likely is a breakdown in processing and programming. Even with our limited vision relative to LIDAR, we can differentiate between a couple of mylar potato chip bags blowing across the road and a pair of tennis shoes. Good drivers even have muscle memory that automatically takes over to avoid those collisions. We can't see animals for anything at night. But those who drive amongst them know that two beady little specks of light low to the shoulder mean to focus our attention if we don't want to kill someone's pet, and that two beady specks of light at chest height mean to slow down immediately if we don't want to wind up in the body shop.

RE: Self Driving Uber Fatality - Thread I

"Clearly a pedestrian bridge over Mill at this location is needed."

So long as it's not the one in Florida.

As an example of what HDR can do, right now, the first image is comparable to what's in the video, but the camera probably saw something like the second image, and this is without headlights. Note that the second image specifically remaps the dynamic range of the HDR into a standard display dynamic range. I think the Uber's cameras should have seen what this second image looks like, and not what's in the video on YouTube.



TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

I just saw the dash cam video from the Uber car- I retract my previous statement. The car was in the right lane, and the pedestrian was traveling left to right. She did not come out from behind bushes.
I now agree that the lidar should have seen her.

RE: Self Driving Uber Fatality - Thread I

Pedestrian behavior is decidedly irrational. I have to pass a church every day on my way home. It's a large church, and they must be bad sinners because they have to attend multiple nights of the week. Their parking is on the other side of the street from the church. There are two crosswalks at a traffic light, complete with push-to-walk buttons and complete stoppage of traffic for crossing, that are almost completely ignored. There are two more crosswalks in the middle of the block with signs in the middle of the road and a cop directing traffic. Nevertheless, the vast majority choose to jaywalk. They step out of clusters of pedestrians on the sidewalk or from between parked cars and just head into the middle of the lanes, expecting divine protection from the almighty to keep them safe.

----------------------------------------

The Help for this program was created in Windows Help format, which depends on a feature that isn't included in this version of Windows.

RE: Self Driving Uber Fatality - Thread I

I'm actually surprised there isn't an infrared camera incorporated into their sensor suite. It would seem that it could easily augment the visible-light sensors and associated programming, and would have the benefit of cutting through fog and darkness. Finally, it would highlight cars, deer, and people and allow easy recognition by the computer.

Professional Engineer (ME, NH, MA) Structural Engineer (IL)
American Concrete Industries
https://www.facebook.com/AmericanConcrete/

RE: Self Driving Uber Fatality - Thread I

Uber uses LIDAR and radar sensors that don't depend on visible light. It's most likely that the suite of sensors did detect the pedestrian and bicycle (they should have been able to), or at least detected "something there", but chose to ignore them.

RE: Self Driving Uber Fatality - Thread I

2
JHG mentioned, "Film emulsion, CCDs, and CMOSs do not have the bandwidth of a human eye."

The issue here might be the maximum effective contrast ratio.

The human eye is typically much better than cameras to start, *and* can also dart about and quickly peek into the shadows. In the real world, I sometimes hold my hand up to block an overly-bright street lamp, so I can better see into a dark area.

The regulators may have to impose some basic Vision Tests on new self-driving vehicles.

Investigators should consider this when reviewing the video. They might need to ask the next questions:
  • How come your cameras couldn't see the pedestrian?
  • Who specified the inappropriate cameras?
  • Who is your System Safety Engineer?
edit: But if "Uber uses LIDAR and radar sensors...", then it may not be a primary issue.

RE: Self Driving Uber Fatality - Thread I

I think a vision test is a good idea for a self driving car, as they are required of humans.

I have to ask, would this type of car drive at full speed in fog? Most sane humans would not drive full speed if their vision was impaired.

Also are there any tests of these cars in areas where the roads may not be in the best conditions?

It appears the driving might only be in good conditions, so as to improve reliability numbers.

RE: Self Driving Uber Fatality - Thread I

The vision test for drivers is more about being able to read signs, as people are often distracted trying to figure out signs. The lighting is a slightly dim office ambient, which in no way resembles the nighttime ambient of the Uber accident. There are lots of people with odd vision artifacts at night, which aren't tested by the DMV. I don't recall if they even still do the depth perception test.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

(OP)
IRstuff,

Another problem with LiDAR is that when there are a lot of them, they will be seeing each other's signals. This would not have been a problem with this accident on a fairly lonely road, but imagine moving through a downtown intersection.

--
JHG

RE: Self Driving Uber Fatality - Thread I

Re drawoh's video link: Interesting footage of the safety driver, or whatever they're called...

The problem with sloppy work is that the supply FAR EXCEEDS the demand

RE: Self Driving Uber Fatality - Thread I

I'll never understand the appeal of those things. I don't even like to have my gears shifted for me.

RE: Self Driving Uber Fatality - Thread I

Archie264, I kind of look at them the way I do busses. A great thing for other people :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand

RE: Self Driving Uber Fatality - Thread I

With my deteriorating vision and limbs I'm actually hoping they get these things working by the time I'm no longer able to drive myself. They've got about a dozen years (I hope).

----------------------------------------

The Help for this program was created in Windows Help format, which depends on a feature that isn't included in this version of Windows.

RE: Self Driving Uber Fatality - Thread I

"Basic vision tests" will result in the programmers gaming the test. Designs are tweaked to pass specific tests all the time ... sometimes legally/legitimately, sometimes not.

I have a feeling, but nothing more, that the issue here wasn't whether the pedestrian and bicycle were detected, but that the underlying logic chose to ignore it.

If there is a pothole, or a bump, or a painted road marking, or a small piece of debris, lying on the road directly in front of the car, you don't want the self-driving logic to slam on the brakes or take drastic evasive action.

Somehow the system has to distinguish between something it needs to avoid, and something it can safely ignore and drive over or through.

Select wrongly ... and a situation like this one happens.
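The avoid-vs-ignore split might be sketched like this (class names and rules are purely illustrative guesses at the kind of logic described above, not any vendor's actual code):

```python
# Hypothetical threat filter. Everything here is an assumption made for
# illustration: the ignorable classes, the field names, and the rule.
IGNORABLE = {"road paint", "plastic bag", "small debris", "overhead sign"}

def is_threat(track: dict) -> bool:
    """Avoid the object only if it isn't ignorable and its predicted
    path enters the ego lane before the car passes it."""
    if track["class"] in IGNORABLE:
        return False   # a misclassification here is exactly the failure mode
    return track["time_to_lane_s"] < track["time_to_pass_s"]

pedestrian = {"class": "pedestrian", "time_to_lane_s": 1.0, "time_to_pass_s": 4.0}
mislabeled = {"class": "overhead sign", "time_to_lane_s": 1.0, "time_to_pass_s": 4.0}
```

With these toy numbers the pedestrian track is flagged as a threat, while the identical track mislabeled as an overhead sign is silently ignored, which is the "select wrongly" case.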

RE: Self Driving Uber Fatality - Thread I

(OP)
VE1BLL,

Thanks. That was the term I was looking for.

--
JHG

RE: Self Driving Uber Fatality - Thread I

She was probably classified as an inanimate fixed object, like a bush. Inanimate fixed objects stay on the side of the road and don't end up in the vehicle lane so they're not a threat. Sure, it was an oddly moving bush, but still a bush is nothing to be concerned about.

IRstuff - I didn't say you were blaming the sensors. I was just making a general observation that any excuse about her not being detectable is complete BS. You and I are both on the same page believing this was a complete and utter fail for the AI system.

As I already pointed out, the distance at which she becomes visible is much closer than the distance I can see when behind the wheel. So the camera that filmed that video definitely had a contrast issue and did not show what a human could see. That very much works in Uber's favor, at least for people who are clueless about the capabilities of the camera used to film that video and/or the capabilities of the sensor package being used by the AI driving the car.

RE: Self Driving Uber Fatality - Thread I

As I said before, I don't believe that the video that was posted is a reasonable rendering of the actual video data that resides in the car. The actual navigation video is probably as damning as the lidar and radar data.

The issue with the bush theory is that it's in the car's lane for at least one second (from the time the feet are visible to the time of impact), and no warnings, no detections, and no braking occurs.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Regarding the bush theory: not only was she in the travel lane in which she was hit for nearly 2 seconds (4 steps), she had crossed some 35+' of paved travel lanes to get there. Kind of ironic that there is a "BEGIN RIGHT TURN LANE - YIELD TO BIKES" sign right there.

Here is the link to where she was hit. https://goo.gl/maps/9qhLm8pJhcE2

She had the potential to be seen for over 300' from where the Volvo came out from under the overpass. The second picture is the streetview from that vantage point.


RE: Self Driving Uber Fatality - Thread I

Quote (drawoh)

Another problem with LiDAR is that when there are a lot of them, they will be seeing each other's signals.

Oh, will they end up scrambling each other's vision? Like everyone shining torches in each other's faces?

RE: Self Driving Uber Fatality - Thread I

There are potential ways to avoid interference, such as the protocols GPS uses to separate satellite signals. One of the reasons a cold boot takes a long time is that the channel receivers have to decode the sequences and then verify that all the signals being received are consistent with the receiver's decoding of the signals.

Alternately, one could imagine using something like programmable quantum cascade lasers with unique wavelengths.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

I presume these cars must have some kind of "decision log" .. "I have detected XYZ therefore I shall do ABC in response".

I wonder what this log would show in this case? I would be willing to bet that the woman + bike was detected by the sensors, but for whatever reason was not judged to be a threat to be avoided. Perhaps she looked like a motorbike merging from the left into the lane in front of the car, and thus did not need to be avoided?

I wonder what decision is made when it is 'too late' to avoid a threat in a safe manner? Do you prioritise occupant safety (e.g. avoid sudden / dangerous braking which might result in a pile up) or do you prioritise pedestrian safety (and do absolutely anything possible to slow down before hitting them, even if this might escalate into a pileup)?

The video of the 'person behind the wheel' is shocking. Her level of inattention is criminal.

RE: Self Driving Uber Fatality - Thread I

Don't like bush. Tumbleweed???

The Tesla decided the truck trailer in the path of its windshield was not a concern, since it was classified as an overhead sign, and signs don't move and cars are supposed to be able to drive under them....

The most likely cause is that a misclassification led the "AI" to decide she wasn't something the car could hit. Classifying her as a motorcycle travelling in the same direction and changing into the same lane doesn't make much sense, because the car was rapidly approaching it, and it should brake or otherwise try to avoid any other vehicle it is closing on. A misclassification as something the car supposedly cannot hit makes more sense than a misclassification as another mover that it could hit.

The weather report for Tempe last Sunday says the winds were gusting to 26 mph. Was the foliage on the sides of the road blowing around enough to confuse the "AI"?

We can all speculate, but we'll only find out what really went wrong if/when Uber releases any findings on the accident.

Spartan5 - yes, I already posted a link to the street view pretty much pointing out the spot where it happened, even before the video was released. So I'm quite aware of the location and street configuration. The evidence so far makes it quite clear there was lots of time for detection, and she didn't abruptly dart into the car's path, so something else went wrong.

RE: Self Driving Uber Fatality - Thread I

Is it possible to incorporate all LiDAR data to give a better 3D layout of the area?

Dik

RE: Self Driving Uber Fatality - Thread I

(OP)
Tomfh,

LiDAR fires a laser. A few nanoseconds after the laser fires, the receiver sees what is called the t0 blast. The LiDAR electronics start counting, waiting for the signal to bounce off something and reflect back into the receiver lens. The receiver probably will have a narrow-band interference filter that excludes all light that is not within, say, 2nm of the laser wavelength. LiDARs are fairly rare right now, so that 905nm signal you are detecting almost certainly is yours. If fifty cars all have LiDAR, that signal almost certainly is not yours, and you have no way to make sense of the other signals. There is not enough bandwidth to give each vehicle its own laser wavelength, even if that were practical in cost-sensitive production.

The company I worked for was developing airborne LiDARs that flew high enough, and ran at high enough laser pulse rates, that the lasers were firing before the previous pulse came back from the ground. There were all sorts of tricky electronics for dealing with that. Of course, this would not be a problem for a car approaching a woman pushing a bicycle across the highway. LiDAR scanners are a whole lot of fun.

--
JHG

RE: Self Driving Uber Fatality - Thread I

drawoh: Thanks... didn't realise that the pulse it sends out is for timing and distance.

Dik

RE: Self Driving Uber Fatality - Thread I

Lidars measure distance using time of flight (TOF): distance = c × TOF / 2. Typical pulse widths are on the order of nanoseconds, as are the times of flight; for 100-m radius coverage, the round-trip TOF is 667 ns.
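A quick numeric check of the time-of-flight relation, range = c × TOF / 2:

```python
C_M_PER_S = 299_792_458.0  # speed of light, m/s

def tof_to_range_m(tof_s: float) -> float:
    """One-way range from a round-trip time of flight: range = c * t / 2."""
    return C_M_PER_S * tof_s / 2.0

def range_to_tof_s(range_m: float) -> float:
    """Round-trip time of flight for a target at the given range."""
    return 2.0 * range_m / C_M_PER_S

# A 100 m target gives a ~667 ns round trip.
tof_100m = range_to_tof_s(100.0)
```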

The design beamwidth of a lidar might be on the order of 1.5 mrad, which is less than 0.1 deg. Lidars need to be scanned to cover the 50 deg or so of frontage, so the receivers are aligned with the beams and have fields of view (5 mrad-ish) comparable to the beamwidths. The receiver FOV needs to be somewhat larger than the beamwidth to allow for physical misalignment and the TOF during the scan. An opposing lidar's beam landing directly on one's own receiver will be very rare, given the small beamwidths and FOVs, and masking by other cars. And such events are essentially non-events, in the sense that the strength of the signals is likely to saturate the receivers.

Returns from cars going in the same direction are likewise relatively rare, as there is also masking by other cars, and only a limited time and range of angles over which a Lambertian return can actually get into the FOV of a receiver. Nevertheless, the interference can be mitigated by a pulse-coding scheme with a matched-filter receiver. An additional mitigator could be varying the pulse energy as a function of traffic congestion, since having a car 20 ft in front of you means that firing the lidar to find a 100-m distant target is not realistic.

Additionally, the collision avoidance processor needs to maintain a 3D database of detected objects, generate trajectories as required, and apply a fading memory to kill off older, no-longer-relevant objects.
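A minimal sketch of what such a fading-memory object store might look like (an assumed design for illustration, not Uber's actual implementation):

```python
# Fading-memory object store: tracks not re-observed within a time-to-live
# window are forgotten. The TTL value is an illustrative assumption.
class TrackStore:
    def __init__(self, ttl_s: float = 0.5):
        self.ttl_s = ttl_s
        self.tracks = {}  # object id -> (position, last_seen_s)

    def observe(self, obj_id, position, now_s):
        """Record (or refresh) a detection at the current time."""
        self.tracks[obj_id] = (position, now_s)

    def prune(self, now_s):
        """Drop anything older than the TTL (the 'fading memory')."""
        self.tracks = {k: v for k, v in self.tracks.items()
                       if now_s - v[1] <= self.ttl_s}

store = TrackStore(ttl_s=0.5)
store.observe("pedestrian", (30.0, -3.5, 0.0), now_s=0.0)
store.observe("sign", (80.0, 2.0, 5.0), now_s=0.4)
store.prune(now_s=0.6)   # pedestrian last seen 0.6 s ago -> dropped
```

A real tracker would refresh tracks every sensor frame and predict trajectories between observations; the point here is only the aging-out behavior.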



TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Haven't dealt with LiDAR, but I have to wonder what techniques they use that are similar to spread spectrum stuff... Gold Codes and the like. Something along those lines would improve the cross-correlation of "your" signal versus those of the other 500 cars in visible range, but would require a bit more computation.
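A toy illustration of the code-separation idea, using orthogonal Walsh codes as a simple stand-in (real asynchronous systems would use Gold codes or similar, which keep cross-correlation low without requiring synchronization):

```python
import numpy as np

def walsh_matrix(n: int) -> np.ndarray:
    """Build an n x n Hadamard/Walsh matrix (n a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

H = walsh_matrix(8)
own_code, other_code = H[1], H[2]      # distinct rows are orthogonal

# Two overlapping lidar pulse trains: ours plus a weaker interferer.
received = own_code + 0.8 * other_code

own_corr = np.dot(received, own_code)    # strong: own return stands out
leak = np.dot(own_code, other_code)      # zero cross-talk between codes
```

Correlating against its own code, the receiver pulls its return cleanly out of the mixture while the interferer contributes nothing; Gold codes trade the exact-zero cross-correlation for robustness to timing offsets.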

Also found this interesting snippet:
https://spectrum.ieee.org/cars-that-think/transpor...

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Self Driving Uber Fatality - Thread I

The victim was walking her bicycle across the road. She had already been in the left lane for an extended period. She did not dart out of the shadows; the Uber's headlights eventually reached her. The Uber outdrove its headlight range, and its lidar obviously failed to help.

It's a comprehensive failure.

IEEE Spectrum

Screen capture cropped:

RE: Self Driving Uber Fatality - Thread I

IEEE Spectrum, "Although the video shows the pedestrian appearing almost out of thin air, she would have, in fact, crossed two turn lanes, a through lane, and half of the Uber car’s lane before being struck—that’s roughly 42 feet. Walking at a speed of 3.5 feet per second, the design walking speed for traffic light 'walk' signals, she would have been on the road for more than 10 seconds before impact."

RE: Self Driving Uber Fatality - Thread I

Maybe a dumb question, but who called this in?
Would the car automatically do this? Or would it have kept on going?
Can an AI be ticketed for hit and run?

RE: Self Driving Uber Fatality - Thread I

Daytime view of that location attached. The accident would have been approximately where the car in the right lane is located in the picture.
Note the pavers in the median on the left side. Just off the picture is a no walking sign the city posted in that area. I gather this must have been a problem even before this incident.

RE: Self Driving Uber Fatality - Thread I

Regarding “coming out of the shadows”: locals are drawing attention to just how poorly the released video represents what it is actually like there at night.

Another dash cam still from that exact spot:

RE: Self Driving Uber Fatality - Thread I

The police essentially aided and abetted Uber in potentially steering public opinion about their culpability in the accident. This is a huge fail, particularly given the example dashcam image with decent histogram equalization.

Uber shot themselves in the foot with their video, because the better video shows that TWO sensor systems failed to operate correctly: the video cameras should have been capable of seeing the pedestrian, and simple change detection would have detected the lateral motion into the car's lane; the lidar likewise should have detected the pedestrian from double or triple that distance.
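As a rough illustration of how little machinery "simple change detection" requires, here is a toy frame-differencing sketch. The frames, threshold, and object sizes are invented for the example; a real pipeline would difference registered camera frames:

```python
def changed_pixels(prev, curr, threshold=30):
    """Count pixels whose intensity changed by more than `threshold`
    between two same-sized grayscale frames (lists of rows)."""
    return sum(1
               for row_a, row_b in zip(prev, curr)
               for a, b in zip(row_a, row_b)
               if abs(a - b) > threshold)

# Two toy 4x6 frames: a bright two-pixel "object" shifts one column right.
frame1 = [[10] * 6 for _ in range(4)]
frame2 = [[10] * 6 for _ in range(4)]
frame1[2][1] = frame1[2][2] = 200    # object at columns 1-2
frame2[2][2] = frame2[2][3] = 200    # object at columns 2-3

n_changed = changed_pixels(frame1, frame2)
print(n_changed)   # 2: one pixel went dark, one lit up
```

Lateral motion shows up as paired appear/disappear pixels frame after frame, which is what makes a crossing pedestrian such an easy change-detection target.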

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Incidentally here's a video of a LiDAR system in action https://youtu.be/_EMAoiqLq9Y

Eyeballing the resolution and range at which the LiDAR was functioning, it seems hard to believe that it would not have picked up a pedestrian several seconds before impact.

Here however is an old blog on the subject https://recast.ai/blog/the-era-of-smart-cars-focus...

"The most common errors for detectors are:

detecting tree leaves or traffic lights in background as pedestrian
detecting the same person twice
not detecting small persons
not detecting cyclists"

oo er.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

Seems like all those issues are not the "detector's" issues per se, assuming we define the detector as the transmitter and receiver. The issue is what to do with the detections that must obviously have occurred, which is a processing problem. The target trackers for ballistic missile defense were capable of tracking hundreds of targets simultaneously; the trick is to figure out the targets' trajectories and whether they will cross into the car's lane.

US football receivers running at their fastest would seem to be a plausible upper bound for "pedestrians," at about 28 ft/s. This would suggest that the lidar needs a frame rate on the order of 5 to 10 Hz to be able to correlate runners as single targets moving at a high rate. Slow targets might be the senior citizen in front of me in the supermarket, moving at about 0.5 ft/s. Usually, the big challenge isn't the targets; it's the obscurations, such as when a slow-moving target walks behind a wall or billboard. A conventional tracker might get fooled into thinking the target came to a stop at the leading edge of the obscuration and decide not to look for it to re-emerge on the far side. Faster targets are less problematic with obscurations, but they aren't problem-free.
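The frame-rate argument is easy to check with a back-of-envelope calculation using the 28 ft/s figure above:

```python
def displacement_per_frame(speed_ft_s, frame_rate_hz):
    """Distance (ft) a target moves between successive lidar frames."""
    return speed_ft_s / frame_rate_hz

runner = 28.0   # ft/s, roughly a sprinting football receiver (from the post)
jumps = {hz: displacement_per_frame(runner, hz) for hz in (1, 5, 10)}
print(jumps)   # {1: 28.0, 5: 5.6, 10: 2.8}
# At 1 Hz the runner jumps 28 ft between frames, which is hard to
# associate into a single track; at 5-10 Hz the jump is 2.8-5.6 ft,
# small enough for a simple gating test to correlate frame to frame.
```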

Lidars have one serious limitation that makes the processing so difficult, and that's shadows, i.e., the areas behind objects that block further transmission of the laser and where pedestrians tend to suddenly emerge into traffic.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

I've seen suggestions that target /recognition/ might have been more difficult because she was behind the bike. Does this mean that Uber cars will intentionally run into things 1.5 m tall and 2 m wide simply because they can't recognise them? I'd have thought not hitting large objects was pretty much AV 101.

This is different to the Tesla invisible truck problem because the Tesla system lacks a LiDAR, so it has to have good visual analysis.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

The bike was behind her. The only thing that would explain the behavior is if the AI classified the inputs as a vapor cloud. I would say that adding a thermal camera would be the best discriminator for such an event.

RE: Self Driving Uber Fatality - Thread I

"This is different to the Tesla invisible truck problem because the Tesla system lacks a LiDAR, so it has to have good visual analysis."

And it failed miserably at that. Change detection should have detected the sudden presence of an "overhead sign," and that alone should have been an issue. The fact that the "overhead sign" went below the clearance level of the car and the system didn't conclude that was a problem, is a problem. The fact that it failed to detect the wheels and undercarriage of the truck as anomalies is also a problem.


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote (IRstuff)

The police essentially aided and abetted Uber in potentially steering public opinion...

At least we can be thankful that Uber is such a model corporate citizen with no history of ethical misfires. They've always acted in accordance with only the highest moral principles. So we can rest assured that they'll fully cooperate, honestly and openly.

--

One of the mistakes that newbie or bad drivers can make is looking for obstacles ahead. The correct logic is to look for empty road ahead. (The wording here is a simplification, but I trust that the point is clear.)

Perhaps autonomous vehicles should be subjected to a blinding Sudden Fog Bank Test. Or a Blind Curve (with too generous speed limit) Test. Such testing should be complete with a brick wall final exam.

I'm not sure that this 'safe driving logic' point is related to what happened here. Although it seems to have driven straight into a non-empty road.

So far this accident seems inexplicable. Explanations offered so far are not merely 'lessons learned', but massive failures.

RE: Self Driving Uber Fatality - Thread I

I don't disagree that some sort of testing to verify a minimum threshold of capability needs to be performed, particularly after this.

However, I'm struggling with how trivial this scenario ought to have been. This is like worrying about a kindergartener running well, when they seem to have failed to tie their shoelaces.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Capability tests will be "gamed". The systems will be programmed to pass them. The problem is for the systems to react properly to situations that they were not explicitly programmed for, and those will be the situations that for whatever reason (sometimes seemingly inexplicable to humans) fell through the cracks. Situations like, oh, driving underneath an overhead sign board through the gap between the back of a truck and some other unknown moving object about 40 feet behind it, or failing to recognize a bicycle that is being pushed rather than ridden as something that perhaps shouldn't be hit.

The above statement that drivers (whether human or otherwise) should be aiming for empty road is an extremely important one. It's still not without its share of headaches.

Does a pothole disqualify empty road? A little pothole? A big one? A sinkhole? Where's the threshold between stopping/swerving and driving over or through it?

Does a piece of paper ahead disqualify empty road? A small piece of debris? A truck tire tread? A squirrel? A cat? A dog? A small deer? A moose? A small human? A big one? Where's the threshold? You do not want self driving cars dodging a plastic bag or stopping in a traffic lane of a motorway.

RE: Self Driving Uber Fatality - Thread I

"Capability tests will be "gamed". "

But, now that you know that they're going to want to game the system, there are other approaches to the problem, such as demanding source code and program-memory inspection, or randomly selected scenarios. Even now, we demand that executables can be built arbitrarily in a traceable, repeatable fashion, simply to avoid other silly problems like non-repeatable builds.


The smog tests are absurdly simple compared to the tests required of a target detection and tracking system.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

With the disclaimer "I Am Not A Software Guy" ... I cannot begin to imagine how complex and abstract the relationship is between the executable machine code and what the user sees. To debug that by looking at source code would be a herculean task at minimum.

It's one thing to look for a logic fault when a programmed system has a repeatable flaw and you have a clue where to look. "Oh crap, we have an OR rather than an AND between these two logic rungs." (been there!) It's quite another to have millions, possibly billions, of lines of code laid out with the task "Find all the problems with this."

How many times do you get "Windows Update" ...

Random test scenarios would have to be part of the picture, but it is the nature of statistics that they will not find every flaw.

Self-driving cars are essentially going through random test scenarios right now. This random test scenario found a bug.

And for those saying "this shouldn't be happening in public", I don't disagree, but at the same time, in controlled test scenarios, probably that pedestrian wouldn't have pushed that bicycle across the road in that manner under those lighting conditions.

RE: Self Driving Uber Fatality - Thread I

It's essentially an AI problem; at least the tricky bits are. Famously, "AI is hard", where 'hard' is a computer science keyword that isn't very distant from 'impossible'. This conclusion goes back decades.

Historically, AI has been 'an indoor cat', assigned to finite problems within defined problem spaces. Now it's being taken outdoors, where the problem space is unbounded. I expect that it will soon be realized that "AI Outdoors is VERY hard."

There's also the issue of sensors. It's hard to appear intelligent if you're oblivious to what's going on around you. Autonomous vehicles should have microphones to hear the sirens of emergency vehicles, but nobody seems to have thought of even that obvious example. Smoke, vibrations, sudden banging noises, screams of terror from the passengers, etc.; all should be inputs. Successful AI Outdoors will need a large range of sensors.

Given the wildly optimistic naivety, these sorts of accidents are not surprising. They'll continue, and lives and billions will be lost.

I expect that it'll be a bit like Fermat's Last Theorem. Yes, Wiles' [edited] 129-page solution certainly would not fit in the margin. When Autonomous Vehicles are finally fully sorted out (10+ years from now), they'll look back and then realize how the problem was so much bigger than they expected.

RE: Self Driving Uber Fatality - Thread I

I just saw that the LiDAR manufacturer introduced a new model late last year with 0.1 degree resolution and 300 m range. It seems likely that the Uber would have had the previous-generation model, at 0.4 degrees and 120 m range. That means that at 6 seconds from impact the woman with the bike would come into range and be a blob about 2 pixels by 2 pixels in a picture 900 pixels wide. Braking time from 40 mph is about 2 seconds. The blob would be persistent and therefore easy to track. How the software copes with blobs that move fast enough to have distinctly separate images in each frame I don't know. Obviously there's no hope of doing image recognition on a 2-pixel-by-2-pixel blob (though you could get fancy and use many frames of data to build up better resolution).
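The blob-size estimate can be sanity-checked with the small-angle formula, assuming a person-plus-bike roughly 1.5 m across (my assumption) and the 0.4-degree/120-m figures above:

```python
import math

def pixels_on_target(width_m, range_m, resolution_deg):
    """How many lidar beam steps a target of width_m spans at range_m,
    given resolution_deg of angular resolution per step (small-angle)."""
    return math.degrees(width_m / range_m) / resolution_deg

speed = 40 * 0.44704            # 40 mph in m/s (~17.9 m/s)
range_at_6s = 6 * speed         # ~107 m, inside the older unit's 120 m range
px = pixels_on_target(1.5, range_at_6s, 0.4)   # person + bike, ~1.5 m wide
print(round(range_at_6s, 1), round(px, 1))     # ~107.3 m, ~2 pixels across
```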






Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

"...consecutive frames..."

I wonder what the frame rate of Lidar is? Since it's mechanically scanned (laser and spinning mirrors), I assume it's slow.

RE: Self Driving Uber Fatality - Thread I

The Velodyne 64E has a complex relationship between the upper and lower banks of lasers and the rotation rate. The elevation increment of approx. 0.4 degrees is constant; nothing else is, except the sample rate. Higher rotation rate = lower resolution. It also seems to depend on which data is being accessed. I've only given it a cursory reading, but one thing that stands out is the sensitivity to reflectance; the spec indicates the limit for pavement might be only 50 m, based on a reflectance of 0.1.

See www.velodynelidar.com/lidar/products/manual/HDL-64...

One characteristic that I expected but did not find is the beam divergence.

RE: Self Driving Uber Fatality - Thread I

The angular resolution depends on the LiDAR's rotation rate; it can scan at 5, 10, or 15 rps. Long-range resolution is then set out in Appendix B of that manual, and it is substantially better than 0.4 degrees: at the lowest speed it is 0.1152 degrees, and it scales with the rps. So you can have 15 frames per second at 0.34 degrees resolution, or 5 frames per second at 0.12.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

The apparent movement is tangential from any viewpoint in the vehicle except the point(s) of collision. It will have a tangential component if either member of the collision is on a non-linear path or has a non-constant velocity, regardless of viewpoint.

RE: Self Driving Uber Fatality - Thread I

IRstuff, I think your IRstuff (Aerospace)22 Mar 18 02:16 image is misleading, I don't see any sign of vertical scanning as such in that manual, just an upper and lower bank with different fields of view. So the LiDAR map would just be a set of distances at the two different heights at the angular resolution? Am I missing something?

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

There are 64 individual lasers in the scanner; 32 upper and 32 lower, spread to cover a range of angles.

RE: Self Driving Uber Fatality - Thread I

3DDave beat me to it. Most commercial lidars are scanning in azimuth only, and use either an array of transmitters and receivers or a single fan-shaped transmitter beam and an array of receivers. An array of transmitters AND array of receivers seems way more complicated than I would hope for, but that does help out on the pulse repetition rate, which is the limit of the frame rate vs. resolution problem.

"To debug that by looking at source code would be a herculean task at minimum."
The first thing to do is to start with the recorded data and processor logs. Since they are in the testing phase, there should be copious amounts of both. If the data log is empty, heads will roll.

Note that we were referring to acceptance tests, not engineering tests. The engineering tests are performed by the supplier and should involve a progression of tests starting at the smallest software module and progressing to ensembles of modules. Acceptance tests are not intended to exhaustively test functionality, just as IIHS or DOT tests only test specific things, which were gamed by VW and others. But one can justifiably demand that a testing authority have access to the code, witness the programming of that code, and test it with a series of random scenarios.

The HDL-64 has 0.4-deg vertical resolution and almost exactly 2-mrad horizontal resolution at a 5-Hz frame rate, so at 20-m range it would have 0.04-m horizontal resolution, which means there were something like 205 lidar returns from the pedestrian every 0.2 seconds at the instant her feet were visibly illuminated by the headlights. At that frame rate, even if she were moving at 4 mph, there would have been minimal horizontal separation between successive lidar return clusters, so it should have been trivial for the object processor to determine that there was a moving object about to get hit by the car. Considering that in the 1.2 seconds from that point there should have been at least 6 complete frames and more than 1,230 lidar returns from the pedestrian (actually far more, since the range was decreasing), it should have been impossible for the object processor to ignore her.
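Those return counts can be roughly reproduced from the quoted resolutions. The pedestrian-with-bike dimensions here (~0.7 m wide by 1.7 m tall) are my assumptions; everything else comes from the paragraph above:

```python
import math

def returns_per_frame(width_m, height_m, range_m, az_res_rad, el_res_rad):
    """Approximate lidar returns per frame from a flat target of the
    given size, using the beam spacing at that range."""
    az_spacing = range_m * az_res_rad    # metres between beams, horizontal
    el_spacing = range_m * el_res_rad    # metres between beams, vertical
    return int(width_m / az_spacing) * int(height_m / el_spacing)

# From the post: 2 mrad azimuth, 0.4 deg elevation, 20 m range.
n = returns_per_frame(0.7, 1.7, 20.0, 0.002, math.radians(0.4))
print(n)   # on the order of 200 returns in a single 0.2 s frame
```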

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote (3DDave)

It will have a tangential component if either member to the collision is on a non-linear path or has a non-linear velocity regardless of view point.

In this case, for a vision based system, given that we're apparently talking about a time interval of only about 2 or 3 seconds and neither was obviously turning, both motions are going to be effectively linear.

And the scale of the "point of impact" doesn't really help much except in the final too-late fraction of a second.

Greg touched on an interesting point for vision systems. A lack of apparent relative motion for objects on a collision course. At least until it's perhaps too late.

Vision systems would perhaps benefit from widely spaced cameras, indicating placement on the outside mirror housings.
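The constant-bearing effect is easy to demonstrate numerically: for two objects on straight-line collision courses at constant speed, the bearing from one to the other never changes, so a single camera sees almost no apparent motion. The positions and speeds below are illustrative, not reconstructed from the accident:

```python
import math

# A car heading north and a pedestrian crossing left to right, on
# straight-line paths that both reach (0, 90) at t = 5 s: a collision.
car_pos, car_vel = (0.0, 0.0), (0.0, 18.0)     # m and m/s, northbound
ped_pos, ped_vel = (-15.0, 90.0), (3.0, 0.0)   # m and m/s, crossing

bearings = []
for t in (0.0, 1.0, 2.0, 3.0, 4.0):
    cx, cy = car_pos[0] + car_vel[0] * t, car_pos[1] + car_vel[1] * t
    px, py = ped_pos[0] + ped_vel[0] * t, ped_pos[1] + ped_vel[1] * t
    bearings.append(round(math.degrees(math.atan2(px - cx, py - cy)), 3))

print(bearings)   # the bearing is identical in every frame
```

Widely spaced stereo cameras help because the second viewpoint breaks this degeneracy: the bearing is only constant from the collision point itself.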


RE: Self Driving Uber Fatality - Thread I

Oh, and if the sensors only have a range of 50m sometimes, the braking distance from 70 mph is 75m, in other words the 50m range lidar is unfit for purpose at 70 mph, even if the car can immediately recognise a problem. The more sensible alternative, swerving, may need less distance, but of course requires more situational awareness and skill. I guess this speed limit is why the L4 testing is being done in urban areas.
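The braking-distance figures can be checked with the standard v²/2a formula. The ~0.65 g sustained deceleration is my assumption, chosen because it reproduces the ~75 m figure quoted above:

```python
import math

G = 9.81  # m/s^2

def braking_distance_m(speed_mph, decel_g=0.65):
    """Stopping distance at constant deceleration, ignoring reaction time."""
    v = speed_mph * 0.44704            # mph -> m/s
    return v ** 2 / (2 * decel_g * G)

def max_speed_mph(distance_m, decel_g=0.65):
    """Highest speed from which the car can stop within distance_m."""
    return math.sqrt(2 * decel_g * G * distance_m) / 0.44704

d70 = braking_distance_m(70)    # ~76.8 m, matching the ~75 m figure above
v50 = max_speed_mph(50)         # ~56 mph: a 50 m sensor caps the safe speed
print(round(d70, 1), round(v50, 1))
```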

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

A human wouldn't be issued a license (based on poor vision) if they couldn't see an obstacle (or better: lack of empty road) beyond 50 or 75 m.

RE: Self Driving Uber Fatality - Thread I

50m is for an asphalt type surface at a guess. I wonder where wool or other natural fabrics in a dark color fall?

Next cab off the rank (haha) is the radar system. Do these vehicles have them and what is the spec?

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

It will be interesting to read the NTSB report once it is released. What I am missing in all the information that has been published is simple things such as: how many sensors are there of each kind? Are we looking at 2-out-of-3 voting? What determines whether a sensor is working correctly? Is there a safety system / computer that monitors the computer that operates the car? What kind of redundant power supply exists for the computer(s)?

Many other questions, but did the Uber stop and call 911 after running over the pedestrian?

It is hard enough to make chemical plants safe, but at least they are not moving down the road at 70 mph. On the other hand, I think there are many things from the various safety analyses done in chemical plants that could be applied to these robot cars.

RE: Self Driving Uber Fatality - Thread I

Doing a bit of reading here.

A couple of quotes from this article: http://www.latimes.com/business/la-fi-uber-pedestr...
"Also on Monday, the auto-parts maker that supplied the radar and camera on the Volvo SUV that struck and killed the woman last week said Uber had disabled the standard collision-avoidance technology in the vehicle.
"'We don't want people to be confused or think it was a failure of the technology that we supply for Volvo, because that's not the case,' Zach Peterson, a spokesman for Aptiv, said by phone. The Volvo XC90's standard advanced driver-assistance system 'has nothing to do' with the Uber test vehicle's autonomous driving system, he said.
"Aptiv is speaking up for its technology to avoid being tainted by the fatality involving Uber, which may have been following standard practice by disabling other tech as it develops and tests its own autonomous driving system. Experts who saw video of the Uber crash pointed to apparent failures in Uber's sensor system, which failed to stop or slow the car as 49-year-old Elaine Herzberg crossed a street pushing a bicycle."
And
"Meanwhile, a top executive for the maker of sensors used on the self-driving Uber vehicle said she was 'baffled' as to why the tech-outfitted vehicle failed to recognize a pedestrian crossing the street and hit the brakes.
"Marta Thoma Hall, president of Velodyne Lidar Inc., maker of the special laser radar that helps an autonomous car "see" its surroundings, said the company doesn't believe its technology failed. But she's surprised the car didn't detect Herzberg.
"'Certainly, our Lidar is capable of clearly imaging Elaine and her bicycle in this situation,' Thoma Hall wrote in an email. 'However, our Lidar doesn't make the decision to put on the brakes or get out of her way.
"'In addition to Lidar, autonomous systems typically have several sensors, including camera and radar to make decisions," she wrote. "We don't know what sensors were on the Uber car that evening, if they were working, or how they were being used.'"

And meanwhile, an interesting take on the whole situation:
https://jalopnik.com/uber-has-no-damn-business-tes...

RE: Self Driving Uber Fatality - Thread I

This article contains more detail about the car than anything else I've seen: https://techcrunch.com/2018/03/19/heres-how-ubers-... As has been mentioned a few times now, Uber had at least 3 sensor systems that should have detected the pedestrian. They are basically independent in their operation, and it's up to the collision avoidance processor to make the decision about doing something, at which it failed miserably.

The Jalopnik article is basically a rant; however the writer feels about Uber's business practices should not be confused with whether its technology is sound.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote:

Uber had at least 3 sensor systems that should have detected the pedestrian.

It's hard to be enthusiastic about trusting my life to that.

RE: Self Driving Uber Fatality - Thread I

I'm simply saying that there was nothing physically preventing the sensors from detecting the pedestrian at their maximum effective ranges. Since the car did not behave like it detected an obstacle at any time, I can't say for certain that all the sensors didn't fail simultaneously AND the processor failed to detect the fault condition.

Prior to this, I certainly would not have had any doubts about the performance of the sensors, given that scenario. Even a competitor was able to use the crappy video released by the police to detect the pedestrian and the bicycle at the first instant they were fully within the headlight illumination; obviously, that demonstration could be gamed for other reasons.

I know what I would have flowed down as requirements for the sensors, and at that range the probability of detection would essentially be 99.9999%, since I would have required at least 99% probability of detection at 300 ft for a pedestrian. That corresponds to 5 seconds for a car at 40 mph, which means at least 25 frames in which the pedestrian could be detected. The number of lidar pixels declaring detections would have been in the hundreds.
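The cumulative-detection arithmetic behind that claim, using the 99% per-frame figure and 25 frames from the post (independence of the frames is an assumption):

```python
def cumulative_pd(per_frame_pd, frames):
    """Probability of at least one detection in `frames` independent
    looks, each with single-look probability per_frame_pd."""
    return 1 - (1 - per_frame_pd) ** frames

# From the post: 99% single-frame Pd, 25 frames (5 s at 5 Hz, 40 mph).
p = cumulative_pd(0.99, 25)
print(p > 0.999999)              # True: a total miss is implausible
# Even a mediocre 50% per-frame Pd all but guarantees detection:
print(round(cumulative_pd(0.5, 25), 8))   # 0.99999997
```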

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote:

Marta Thoma Hall, president of Velodyne Lidar Inc., maker of the special laser radar that helps an autonomous car "see" its surroundings, said the company doesn't believe its technology failed. But she's surprised the car didn't detect Herzberg.
"Certainly, our Lidar is capable of clearly imaging Elaine and her bicycle in this situation," Thoma Hall wrote in an email. "However, our Lidar doesn't make the decision to put on the brakes or get out of her way. In addition to Lidar, autonomous systems typically have several sensors, including camera and radar to make decisions," she wrote. "We don't know what sensors were on the Uber car that evening, if they were working, or how they were being used."
Sometimes execs should just shut their mouths. On the one hand, she doesn't know what sensors were being used or whether they were working. Yet she makes the boneheaded move of stating "our Lidar doesn't make the decision to put on the brakes or get out of her way." That statement assumes her system was in the loop (maybe it wasn't) and that it failed to do its job. Lawyers LOVE that kind of "self-incrimination".

EDIT: On second read, this could be an issue with the writer/editor. Perhaps what she meant was her system does not have the control (i.e., "say-so") to put on the brakes, rather than "my system didn't recognize the danger". Difficult to say the way the article is written.

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Self Driving Uber Fatality - Thread I

I interpreted her comment to be 'our systems send the information to the processor, which decides how to react'

The sensors are sensors, not processors.

RE: Self Driving Uber Fatality - Thread I

Me too. Sensors just produce data. That data is just an input to the system making the decisions.

RE: Self Driving Uber Fatality - Thread I

(OP)
MacGyverS2000,

An extreme case here is that Velodyne's LiDAR reports an image to the robot driver, and then reports a new image a tenth of a second later; the robot then identifies obstacles and moving objects. A LiDAR will have an on-board computer, and it should be possible to design one that identifies, tracks, and reports objects itself. That may make it more difficult to integrate the output of multiple LiDARs and cameras, though.

--
JHG

RE: Self Driving Uber Fatality - Thread I

"A LiDAR will have an on-board computer and should be possible to design one that identifies, tracks and reports objects. This may make it more difficult to integrate the output of multiple LiDARs and cameras."

Not by design. At root, a lidar collects a cloud of returns that simply contain range, azimuth, and elevation. A processor might be included that places the returns in their proper place in the world. Almost no lidars do target recognition; that is the province of the system processor, which integrates the radar and video data into the decision making.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

That's not a theory; it's a smokescreen. The sensor, if mounted level at its base, covers roughly a 25-degree down angle. From a 5-foot mounting height, that means about 12 feet before a spot on the pavement disappears from view. Anything taller would be visible proportionally closer. This is a better view than from the driver's seat, where the hood blocks the foreground.

RE: Self Driving Uber Fatality - Thread I

Simple. They were ignored.

RE: Self Driving Uber Fatality - Thread I

Uber has already settled with the family out of court.

https://www.reuters.com/article/us-autos-selfdrivi...

----------------------------------------

The Help for this program was created in Windows Help format, which depends on a feature that isn't included in this version of Windows.

RE: Self Driving Uber Fatality - Thread I

This narrative that she "came out of the shadows" really bothers me. I heard it repeated on NPR today. That video has been a mixed blessing from a PR standpoint as it pertains to fault for this.

RE: Self Driving Uber Fatality - Thread I

"That video has been a mixed blessing from a PR standpoint as it pertains to fault for this."

That's nonsense. I have no doubt that Uber allowed the police to have the video specifically to sway the public into thinking that the accident was unavoidable. The cited article about the settlement describes exactly what Uber hoped people would think: "when the headlights suddenly illuminated Herzberg in front of the SUV." The person who was thinking on their feet and released that video is going to get a huge bonus at Christmas time.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

I just meant it was a mixed blessing in that it was pretty graphic and shows their product causing the end of someone's life. That, in and of itself, has some inherent downside to it. I agree that on the whole it has worked in their favor though.

RE: Self Driving Uber Fatality - Thread I

(OP)
Spartan5,

One of my theories is that the camera's contrast ratio was not sufficient to transition from dark areas to fully lit areas. People are posting material here showing that cameras with sufficient contrast ratio are available. That is not good for Uber.

--
JHG

RE: Self Driving Uber Fatality - Thread I

That assumes that:
a) it even needed a high contrast ratio
b) there was even a high-contrast situation

The other dash cam videos show that a high-contrast situation didn't even exist, so I'm tempted to think that Uber released video that was purposely altered in contrast to make it appear as if the accident was unavoidable. BUT it wasn't, because the radar and lidar supposedly installed on this car don't require ambient light at all; i.e., had there been total darkness, the pedestrian should still have been detectable. Even had there been a searchlight blinding the camera, the accident ought not to have occurred.

The fact that people are lamenting the video is a strong indication of how big a bonus the person at Uber who released the video will be getting this year.

The video is completely and totally irrelevant to a collision that the car ought to have avoided with ease.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

The video proves she was on the road and was travelling in a constant direction so the sensors had an unobstructed "view" of her for enough time that the accident should never have occurred.

It could be said the video shows the backup driver might not have been able to react, but I'm not buying that the backup driver could only see what that video shows. The glances the backup driver was giving might be indicative of how far she could see ahead; maybe not clearly, but still with some visibility.

By the reports it didn't seem like much time elapsed between the crash and the police viewing the video.

RE: Self Driving Uber Fatality - Thread I

So, the "design plan" of these self driving systems is not looking for physical objects in the path of the car but rather to only deal with things that are classified as objects that can be in the path of the car?

RE: Self Driving Uber Fatality - Thread I

There needs to be a lot more development and testing before an AV is allowed on a public roadway. I think it will become very obvious in any remotely realistic "real world scenario" that the AI is nowhere near sophisticated enough to process the massive amount of input presented by the real world. I don't believe AI will ever be able to do the extrapolation required to drive as well as a human CAN. Obviously, even what's available now surpasses what human drivers sometimes DO, but "better than an oblivious idiot" is a ridiculously low and unacceptable standard.

Rather than attempting the impossible with AI, if the LIDAR, etc. were incorporated into regular automobiles so that drivers could "see" what is not illuminated by the headlights, that would actually improve safety. Especially if such systems were incorporated as a heads-up display, showing the objects where they are from the driver's perspective. Some cars already have thermal imaging, but there is so much more that could be done.

RE: Self Driving Uber Fatality - Thread I

"I don't believe AI will ever be able to do the extrapolation required to drive as well as a human CAN"

I probably agree, but the point is irrelevant. On average, human drivers don't perform anything like as well as 35-60 year old human drivers do. So by your logic, nobody younger than 35 or older than 60 should drive. If AVs are (for the sake of argument) 4 times safer than the average human driver, they still wouldn't be much safer than good drivers, but that would be a big step forward for traffic fatalities.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

Some of these accidents (system design failures) clearly show that the claimed capabilities have been over-hyped.

Hopefully this doesn't continue until Autonomous Vehicles becomes another chapter in the Engineering Ethics textbooks.

RE: Self Driving Uber Fatality - Thread I

"So, the "design plan" of these self driving systems is not looking for physical objects in the path of the car but rather to only deal with things that are classified as objects that can be in the path of the car?"

They're basically the same thing. We "classify" objects based on what our senses (sensors) tell us, which is why we are often misled by optical illusions. Almost all optical illusions are based on classification quirks and shortcuts of our vision processing in the brain. Looking for range returns from lidar or radar and shapes in the camera system is what allows the AI to determine whether it's detecting a physical object or not.

However, this was not a case of some oddball optical illusion misleading the processors, unless the pedestrian was somehow in "stealth" mode or cloaked with a Klingon cloaking device. The processor did something anomalous, maybe like us having a stroke.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Romulan cloaking device.... Geesh. winky smile

RE: Self Driving Uber Fatality - Thread I

Maybe a black cow cloaking device. Seems to fool many human drivers every year.

Human pedestrians have some ability to make themselves more visible, although it's not very common that they do so.

Dark-colored clothes, no reflectors on the bike, crossing outside normal crossing zones, etc. mean the accident is not entirely the fault of the driver.
Same thing with warp-factor-10 drivers at night, with no lights, in black cars.

I think at some point people need to understand that they have to make an effort to be seen.

Not that I trust self driving cars either.

RE: Self Driving Uber Fatality - Thread I

Certainly, situational awareness is something that everyone should cultivate. Regardless of whether the Uber car was behaving correctly, assuming that the car was going to stop or otherwise avoid the collision would have been foolhardy. This pedestrian certainly seemed to be oblivious to their impending doom, and had they been more mindful, they might have avoided the encounter. But, had they done that, we wouldn't know about this failure until much later, or in a much worse situation.



re. cloaking device -- In Star Trek Discovery, the Klingons are using a cloaking device at least 10 years before they supposedly got the technology from the Romulans, and before Kirk encountered the Romulans with their cloaking device.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

No, they are not the same thing. It seems like neither car was doing any kind of operation that simply detected a physical impediment in its path, regardless of what it was classified as.

RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

It seems like neither car was doing any kind of operation that simply detected a physical impediment in its path, regardless of what it was classified as.

The pedestrian which was struck was not in the car's path until a very short time before it was struck. Despite what you might think, the reaction time of an autonomous vehicle is not on the millisecond scale. Autonomous technology is not 'better' than human drivers (if you believe it is better at all) because of speed.

Quote (LionelHutz)

So, the "design plan" of these self driving systems is not looking for physical objects in the path of the car but rather to only deal with things that are classified as objects that can be in the path of the car?

Objects have to be detected before they can be classified. Yes, these cars are constantly scanning for obstacles along the planned travel path.
Detection and reaction become difficult for a lot of reasons, some of them covered in this thread and some not. Without knowing the exact technology employed in Uber's system, it is hard to know what might have gone wrong; but I can tell you with certainty that at the current level of autonomous vehicle technology, even with cutting-edge equipment and cost-no-object engineering applied, expecting a vehicle to operate with a complete understanding of EVERY moving object in its surroundings, and completely accurate evaluations of those objects' trajectories, is wholly unrealistic.

There is a reason these cars are running around with people behind the wheel.

RE: Self Driving Uber Fatality - Thread I

2

Quote (jgKRI)

There is a reason these cars are running around with people behind the wheel.

That doesn't seem to be very useful. This operator clearly was not paying attention, and the Tesla drivers have not been either. I think it's very difficult to stay vigilant when you have been relieved of all responsibilities except the life-saving last-second override.

I think all these self-driving vehicles should be preceded by a man on foot waving a red flag.

----------------------------------------

The Help for this program was created in Windows Help format, which depends on a feature that isn't included in this version of Windows.

RE: Self Driving Uber Fatality - Thread I


GregLocock: "If AVs are (for the sake of argument) 4 times safer than the average human driver, that would be a big step forward."

Perhaps, but perhaps not. If a human driver does something really stupid and irresponsible, at the very least that person's license is revoked; they are taken off the roadways. If one AV does something stupid, do all of the AVs using the same programming get banned from the roadways?

There's also a long way to go before an AV comes close to being as safe as a good driver. I believe if AI ever becomes powerful enough to process all the input from a real world situation, or sophisticated enough to separate the important input from the extraneous, we will have created a greater danger than we have mitigated (the Terminator movies come to mind).

I think the expansion of technologies that ASSIST the human driver, rather than replacing the driver, is a much better and more attainable way to improve traffic safety. It may not be the money-saving opportunity for Uber or UPS, like AVs are, but when did making Uber profitable become a public concern?

GregLocock: "But that'd be a big step forward for traffic fatalities."

IF your assumption could be proven true, perhaps there would be an overall reduction, but proving such a system to be not only smart enough, but reliable enough, is a tall order. What happens when some terrorist finds a way to hack into them?

There's also the matter of liability. Who will be responsible the next time an AV fails to recognize a pedestrian in the roadway, or a truck crossing the vehicle's path? Tesla backed off of their claims rather quickly, claiming their "auto-pilot" is only a driver assistance feature, not intended to be autonomous.

RE: Self Driving Uber Fatality - Thread I

"I think all these self driving vehicles should be proceeded by a man on foot waving a red flag."

I wouldn't want to be that guy.

RE: Self Driving Uber Fatality - Thread I

Quote (dgallup)

That doesn't seem to be very useful. This operator clearly was not paying attention, and the Tesla drivers have not been either. I think it's very difficult to stay vigilant when you have been relieved of all responsibilities except the life-saving last-second override.

I would very, very strongly agree with you.

But such is the current state of the technology.

RE: Self Driving Uber Fatality - Thread I

(OP)

Quote (HotRod10)



Perhaps, but perhaps not. If a human driver does something really stupid and irresponsible, at the very least that person's license is revoked; they are taken off the roadways. If one AV does something stupid, do all of the AVs using the same programming get banned from the roadways?

If some safety feature on aircraft are known to not work, the aircraft are grounded.

Ignore the technology for the moment. There are supposed to be five or six categories of intelligent car. From a legal point of view, I see two.

  1. The car is equipped with a microphone and/or keyboard. You get in and you tell it where you want to go. It takes you there. If the vehicle causes an accident, the manufacturer of the vehicle is responsible.
  2. You are the driver. You grip the steering wheel. You control the gas (power?) pedal and brakes, and you are occupied full-time paying attention to driving. The robot, if present, is a back-seat driver, with some ability to nudge controls.
In the first case, I cannot see you being allowed to own the vehicle. If I were the manufacturer, I would own the car and the maintenance facility. Anything with machinery or controls would be inside a locked enclosure.

The safety observer is not of much use if they are not in control and continuously paying attention. Accidents happen way too quickly for an observer to look away from a book or movie.

--
JHG

RE: Self Driving Uber Fatality - Thread I

jgKRI - That is not what I'm talking about. These failures have involved objects that don't have to be classified or direction determined or any other self driving BS like that. They were hard physical objects directly in the path of the cars.

A simple long range radar automatic emergency braking setup could have easily braked before hitting the barrier that Tesla hit and should have also been capable of emergency braking the Uber for 1-2 seconds before that impact.
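To put rough numbers on that claim, here is a back-of-envelope sketch. The values are illustrative assumptions only (roughly 40 mph initial speed and 0.8 g of braking on dry pavement), not figures from the incident reports:

```python
# Back-of-envelope check of what automatic emergency braking (AEB)
# could accomplish in the last 1-2 seconds before an impact.
# ASSUMED, illustrative numbers: 17.9 m/s (~40 mph) initial speed
# and 0.8 g of constant deceleration on dry pavement.

G = 9.81  # gravitational acceleration, m/s^2

def speed_after_braking(v0, brake_time, decel_g=0.8):
    """Speed (m/s) remaining after braking for brake_time seconds."""
    return max(0.0, v0 - decel_g * G * brake_time)

def stopping_distance(v0, decel_g=0.8):
    """Distance (m) needed to stop from v0 under constant deceleration."""
    return v0 ** 2 / (2 * decel_g * G)

v0 = 17.9  # m/s, roughly 40 mph
print(speed_after_braking(v0, 1.3))  # roughly 7.7 m/s remaining
print(stopping_distance(v0))         # roughly 20 m to a full stop
```

Under those assumptions, even 1.3 seconds of hard braking sheds more than half the vehicle's speed, which is the crux of the argument above.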

Not being able to avoid a concrete and steel barrier directly in the path of the Tesla rather bluntly shows how piss poorly the Tesla self driving system is currently working. Sure, it works well a lot of the time, but I bet that's not much consolation to the families of the victims.

RE: Self Driving Uber Fatality - Thread I

The Autonomous Vehicles of today seem to be working *worse* in terms of basic collision avoidance than the Collision Avoidance systems of several years ago. Then again, those infamous Volvo failed demonstrations were Collision Avoidance systems that didn't. One crashed straight into the rear of a stopped truck, precisely the way it wasn't supposed to. The other ran into a manager (?) that had instructed the driver to run straight at him. Videos on YouTube.

It's advisable to maintain a Confidence/Competence Ratio well below unity when dealing with Safety Systems. That doesn't seem to be happening here.

RE: Self Driving Uber Fatality - Thread I

drawoh - Airbus aircraft were not grounded when a problem with the pitot heaters was found, leading directly to the needless crash of AF 447. Airbus aircraft were not grounded when it was found that pilots had too much control authority, after one pilot tore off a vertical stabilizer. It requires a near certainty of a failure that results in hull loss to get a plane grounded.

RE: Self Driving Uber Fatality - Thread I

"In the first case, I cannot see you being allowed to own the vehicle. If I were the manufacturer, I would own the car, and the maintenance facility. Anything with machinery or controls would inside a locked enclosure."

That would be a truly autonomous vehicle, which would have to be able to handle all situations; not just highways, but congested city streets with hundreds of other cars and pedestrians in close proximity. As the topic story demonstrates, these vehicles currently fail to operate safely under much less demanding circumstances.

"You are the driver. You grip the steering wheel. You control the gas (power?) pedal and brakes, and you are occupied full-time paying attention to driving. The robot, if present, is a back-seat driver, with some ability to nudge controls."

That's essentially the driver assist safety features we have now, other than most just produce warnings to the driver, which the driver must decide how to react to. Autonomous braking is a good feature, but I'd be leery of autonomous obstacle avoidance, etc.

RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

jgKRI - That is not what I'm talking about. These failures have involved objects that don't have to be classified or direction determined or any other self driving BS like that. They were hard physical objects directly in the path of the cars.

Yes it is- without you knowing.

These systems cannot differentiate between objects which are permanently affixed to the ground and objects which are not. They detect objects as a single snapshot in time, take another snapshot, and compare; from that they estimate relative velocities, positions, and potential threats.
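The snapshot-and-compare step described above can be sketched roughly as follows. This is a toy illustration, not any vendor's actual pipeline; all positions, rates, and the `lane_x` convention are made up:

```python
# Toy version of snapshot-and-compare tracking: take an object's
# detected (x, y) position in two consecutive sensor frames,
# estimate its velocity, and project when its path crosses the
# vehicle's travel lane (x = 0 by convention here).

def estimate_velocity(p0, p1, dt):
    """Per-axis velocity (m/s) from two position snapshots dt seconds apart."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def time_to_lane(p, v, lane_x=0.0):
    """Seconds until the object reaches lane_x, or None if it never will."""
    if v[0] == 0:
        return None
    t = (lane_x - p[0]) / v[0]
    return t if t > 0 else None

# Pedestrian 10 m left of the lane centre, walking right at a
# typical ~1.4 m/s, sampled at 10 Hz (all numbers invented):
p0, p1, dt = (-10.0, 50.0), (-9.86, 50.0), 0.1
v = estimate_velocity(p0, p1, dt)
print(v)                    # about (1.4, 0.0) m/s
print(time_to_lane(p1, v))  # about 7 s until the paths intersect
```

The hard part, as the post says, is not this arithmetic but deciding whether the tracked object will actually continue on that trajectory.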

A human can very readily differentiate between a car and a k-rail. An autonomous vehicle, with current technology and engineering on board, cannot do so nearly as accurately.

We know very, very little about this Tesla Model X fatality- so I can't speak intelligently on exactly what happened. But what you're saying, that every object should be instantly and accurately identified without a single failure, ever, is the exact problem that is yet unsolved. Because it is extremely difficult.

Quote (LionelHutz)

A simple long range radar automatic emergency braking setup could have easily braked before hitting the barrier that Tesla hit and should have also been capable of emergency braking the Uber for 1-2 seconds before that impact.

Maybe so. But you don't know the full set of circumstances which caused this failure, so we don't know what other systems could have functioned more reliably.

Quote (LionelHutz)

Not being able to avoid a concrete and steel barrier directly in the path of the Tesla rather bluntly shows how piss poorly the Tesla self driving system is currently working. Sure, it works well a lot of the time, but I bet that's not much consolation to the families of the victims.

We have, at this point in the Tesla's history, a couple of fatal failures (that I'm aware of) in a data set of literally millions of miles.

So that we're clear: I am no fan of this technology being let out of the gate at the infantile stage it is in. I think we are many years away from the vision of theses companies actually being technically possible (i.e. a car with no steering wheel that doesn't kill people).

With that said- ANY system WILL have failures and people WILL get killed as a result. This isn't fatalistic or pessimistic; it's a statistical fact.

Failures like this need to be evaluated with all the speed and intensity available to the NTSB or whoever else performs the work; but until the general public understands engineering to the point where they do not expect the long-term failure rate to be zero (good luck waiting for that...), the public will continue to cry out when these systems fail. Which they will continue to do until the end of time.

RE: Self Driving Uber Fatality - Thread I

"A simple long range radar automatic emergency braking setup could have easily braked before hitting the barrier that Tesla hit and should have also been capable of emergency braking the Uber for 1-2 seconds before that impact."

Actually, this is not a "simple" thing. Take a 35 GHz radar with a 6-inch aperture. The blur spot is 7.9 degrees, which at 300 ft is 12.5 meters. This is why a lidar would be doing the bulk of the target detection under normal circumstances, since its beam footprint at 300 ft is 14.4 inches. The radar's larger footprint means that there will be a large mixture of returns from all over the footprint, possibly making it difficult to get good answers without aiding from lidar. The lidar's weaknesses are heavy fog and/or precipitation, and possibly poor performance against sunlit surfaces and particulates in the air. Our obstacle avoidance system tended to have challenges flying against sun-backlit on-shore flow. It took a couple of design iterations to produce a thresholding circuit that wouldn't just pin to a high threshold, thereby reducing detection range.
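For anyone who wants to check the arithmetic, the 7.9-degree figure matches the null-to-null diffraction beamwidth 2.44 λ/D for the stated frequency and aperture. The ~4 mrad lidar divergence used below is an assumption inferred from the quoted 14.4-inch footprint:

```python
# Reproducing the beam-footprint arithmetic from the post above.

import math

C = 3.0e8  # speed of light, m/s

def null_to_null_beamwidth(freq_hz, aperture_m):
    """Null-to-null diffraction beamwidth, 2.44*lambda/D, in radians."""
    return 2.44 * (C / freq_hz) / aperture_m

def footprint(range_m, beamwidth_rad):
    """Beam footprint diameter (m) at the given range (small-angle approx.)."""
    return range_m * beamwidth_rad

bw = null_to_null_beamwidth(35e9, 0.1524)  # 35 GHz, 6-inch aperture
rng = 300 * 0.3048                         # 300 ft in metres

print(math.degrees(bw))    # about 7.9 degrees
print(footprint(rng, bw))  # about 12.5 m blur spot at 300 ft

# Lidar with an ASSUMED ~4 mrad divergence at the same range:
print(footprint(rng, 4e-3) / 0.0254)  # about 14.4 inches
```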

The receiver operating curve (ROC) for both the radar and lidar is a trade between 100% detection and an absurdly high false alarm rate, and a decent false alarm rate with tolerable detection probability. Every system in use makes such a compromise, which is another reason why Musk is smoking something when he thinks he can get away without something to back up his cameras.
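The ROC trade-off can be illustrated with a toy Gaussian-noise model. The 5-sigma signal amplitude is an arbitrary illustrative value, not a claim about any real sensor:

```python
# Toy ROC trade-off: for unit-variance Gaussian noise, raising the
# detection threshold cuts the false-alarm rate by orders of
# magnitude, but also lowers the detection probability.

import math

def q(x):
    """Gaussian tail probability P(N > x) for unit-variance noise."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pfa(threshold):
    """False-alarm probability: noise alone exceeds the threshold."""
    return q(threshold)

def pd(threshold, snr):
    """Detection probability for a signal sitting snr sigmas above the noise."""
    return q(threshold - snr)

for t in (2.0, 3.0, 4.0):
    print(t, pfa(t), pd(t, snr=5.0))
# Moving the threshold from 2 sigma to 4 sigma drops the false-alarm
# rate from ~2.3e-2 to ~3.2e-5, while detection probability for the
# assumed 5-sigma signal falls from ~99.9% to ~84%.
```

This is the compromise referred to above: there is no threshold that gives both 100% detection and a negligible false-alarm rate.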

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote (IRStuff)

Musk is smoking something when he thinks he can get away without something to back up his cameras.

This in an interesting sub-topic unto itself.

For all of his knowledge, I think Musk's point of view truly is "humans do this job pretty much competently most of the time without LIDAR, so all we need to do is make the visual image processing we are doing as robust as that of the average human brain"

Which is something that might happen within my lifetime (I'm 32) but I wouldn't expect to see it before I'm retired at the earliest.

The task is simply monumental.

RE: Self Driving Uber Fatality - Thread I

It seems we have comprehension problems here.

I DID NOT SAY EVERY OBJECT NEEDS TO BE TRACKED AND IDENTIFIED IMMEDIATELY. I POSTED THE EXACT OPPOSITE TWICE NOW. If the forward facing radar keeps returning a rapidly approaching result DIRECTLY in front of the car then it's likely a good idea to start braking at some point, regardless of what stupidity the AI is doing at the time.

Last I checked, single-task AEBS systems are a lot simpler than a full self-driving setup. They wouldn't have to start emergency braking anywhere close to 300 m from a concrete barrier to avoid smashing into it at full speed. LOTS of AEBSs are on production cars and working fairly successfully without LIDAR units. But sure, why not go ahead and twist the sentence I wrote completely out of context?

As for Musk and his cameras: my first thought on seeing that Tesla crash was that they had turned off the radar to test with cameras only again.

RE: Self Driving Uber Fatality - Thread I

My guess for L5, or widespread L4, is 2035.

On average, AVs will never be as safe as good meat drivers, because good meat drivers never crash (say, are never involved in collisions that cause non-trivial injuries or deaths) in their entire lives. That is not the objective. The objective is to be somewhat better than the average driver. This is difficult because 95% of drivers are better than average, according to them. That's why you have to use stats, not opinions.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

"If the forward facing radar keeps returning a rapidly approaching result DIRECTLY in front of the car then it's likely a good idea to start braking at some point, regardless of what stupidity the AI is doing at the time."

Our eyeballs are sensors, but it's the brain that makes sense of the detections. A radar will always detect something DIRECTLY in front of the car, namely, the pavement. It's the job of the processor to determine whether those returns are just the pavement, or a curb, or a k-rail. Again, the radar's footprint is huge, and it's the processor that weeds through the clutter to find the actual, real targets, not the radar.
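As a concrete illustration of that weeding-out job, here is a toy cell-averaging CFAR (constant false-alarm rate) detector, one common scheme for separating a real target from surrounding clutter in radar range bins. The power values are made up:

```python
# Toy 1-D cell-averaging CFAR: flag a range bin only when its return
# power is well above the average of its neighbouring (training)
# cells, skipping the guard cells immediately adjacent to it.

def ca_cfar(power, guard=1, train=3, scale=4.0):
    """Return indices of bins whose power exceeds scale * local noise estimate."""
    hits = []
    n = len(power)
    for i in range(n):
        noise_cells = []
        for j in range(i - guard - train, i + guard + train + 1):
            # Use only in-bounds training cells, excluding the cell
            # under test and its guard cells.
            if 0 <= j < n and abs(j - i) > guard:
                noise_cells.append(power[j])
        if noise_cells:
            noise = sum(noise_cells) / len(noise_cells)
            if power[i] > scale * noise:
                hits.append(i)
    return hits

# Flat pavement-like clutter with one strong target at bin 6:
returns = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 9.5, 1.1, 0.9, 1.0]
print(ca_cfar(returns))  # prints [6]
```

The point of the adaptive threshold is exactly the post's point: the raw echoes include the pavement everywhere, and it is the processing that decides which returns constitute a target.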

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

FFS, can you be a bigger nitpicker?

Would "radar subsystem" be acceptable to you? Maybe "radar backup system"? The "override the AI and stop the damn car" system?

The stupid thing is that getting a radar result requires some kind of processor involvement. The acronym radar refers to the whole system, from pulse generation to echo receiving to processing: RAdio Detection And Ranging. So why do I have to spell out that there needs to be a processor to get a result from the radar?

It seems you've confused a radar system with the echo that is detected by the receiver.

RE: Self Driving Uber Fatality - Thread I

Here's an interesting video that's reportedly trending right now.
https://youtu.be/6QCF8tVqM3I?t=20s

It's pretty clear that these systems are not quite ready for prime time.

RE: Self Driving Uber Fatality - Thread I

A human driver would resolve this by looking further ahead.

RE: Self Driving Uber Fatality - Thread I

AI cars don't have complete radar systems; they have radar sensors that pass the range data to the AI processor, which also receives the lidar and video data. There may be pre-processors that clean up the data or do specialized detection, but there are no independent navigation processors, and certainly no separate connections to the car's drive system, which is handled by the AI processor. Having multiple processors generating conflicting commands is a recipe for disaster.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

"Having multiple processors generating conflicting commands is a recipe for disaster."

Ya, why introduce a backup when the main processor is doing such a bang-up job of avoiding objects in front of the car? Tesla is proving their AI processor doesn't have that type of basic functionality, but instead only reacts to objects based on their classification. Not sure about the Uber yet, but it's likely the same.

RE: Self Driving Uber Fatality - Thread I

2
"The pedestrian which was struck was not in the car's path until a very short time before it was struck. Despite what you might think, the reaction time of an autonomous vehicle is not on the millisecond scale. Autonomous technology is not 'better' than human drivers (if you believe it is better at all) because of speed."


The pedestrian was clearly on an intersecting trajectory, and if the autonomous technology cannot predict this, it is useless.

RE: Self Driving Uber Fatality - Thread I

The video that I linked a few posts above is interesting. The left route option (which the car appears to be following) seems to have two lanes, yet it appears that the AV selected the *3rd* (rightmost) lane of the *two*. This hints that the system doesn't even bother to cross-check its decisions against the maps; maps which are available on-board (within the normal navigation system).

In other words, the system selected 'Lane 3' (rightmost non-lane median) of the two lanes available. Maps that would show these lanes (and thus non-lane medians leading to a barrier) would presumably already be on-board the vehicle, but presumably they're not integrated.

RE: Self Driving Uber Fatality - Thread I

VE1BLL - it appears it "lost" track of the dotted line on the right and started following only the solid white lane marking on the left, so it probably assumed it was still in the left of two lanes. It was flashing a white ring around the cluster, which I think is the initial warning to the driver to put their hands back on the wheel. I didn't see any indication in the lane display of an object in front, or a collision warning, or that the emergency braking engaged. The white lines on each side of the lane display do change to indicate which lane markings it is following, or not, as the case may be.

I've seen comments about the need to improve lane markings for self-driving cars. But blaming poor lane markings for the failure of the system isn't a very useful or practical solution overall, considering that maintaining bright, clear lane markings isn't a priority in many areas. That video illustrates how very far away the systems are from working reliably in more challenging situations.

RE: Self Driving Uber Fatality - Thread I

Cross-checking against navigation isn't a strategy that can be fully relied upon, either. Construction or other emergency situations can alter the exact path of lanes in a flash ... faster than the navigation maps could be updated.

A human driver should have sorted this out by looking further down the road and establishing the difference between an actual travel lane and the "bull nose", and further establishing which travel lane was the desired one, without having to resort to a (potentially out of date) navigation system.

A road near me is under construction at the moment, being widened. One day all the travel lanes are to the north of what will be the new central divider, the next day they're all to the south, the next day the east and west lanes are in their proper positions with respect to the central divider but only the outer lanes are open, the next day only the inner lanes are open ... you never know what you're going to get. Humans have little difficulty navigating the path through the construction barriers and arrow signs ... provided they look far enough ahead.

True self driving systems will have to figure this out ... and they will also have to correctly respond to the occasional presence of a policeman directing traffic whilst the construction workers reposition the barriers!

RE: Self Driving Uber Fatality - Thread I

Improved lane markings are not going to help if it's snowing.

----------------------------------------

The Help for this program was created in Windows Help format, which depends on a feature that isn't included in this version of Windows.

RE: Self Driving Uber Fatality - Thread I

dgallup - that just means more snow days because your car isn't able to get you to work. You'd also have to stay home any days it might start snowing or else you'd get stuck at work.

RE: Self Driving Uber Fatality - Thread I

(OP)
Here is a thought. The victim seems to have stepped out in front of the vehicle. This seems rather unlikely, although not impossible. Could they both have been dodging each other, in the same direction? I have heard it claimed that if headed for a pedestrian, you maintain your direction and let them jump out of the way. This is a very complicated problem in AI.

Getting on the brakes is the obvious solution, but I have lived all my life in a city and I am a habitual jaywalker. I expect cars to move at a steady speed. This makes for more fun with AI.

--
JHG

RE: Self Driving Uber Fatality - Thread I

"...been dodging each other...?"

The available video doesn't seem to show that.

The pedestrian appears to be crossing the street in a fairly normal manner as far as can be seen.

Well, 'normal' except 1) not in a crosswalk, and 2) bizarrely trusting of the oncoming vehicle.

---

"...headed for a pedestrian, you maintain your direction and let them jump out of the way."

I've heard that one before, but the advice about heading straight towards the obstacle is normally confined to stock car racing, where the spinning vehicles in front are just as likely to have moved left or right.

But I've never heard of this advice (?) being applied to pedestrians, due to the obvious risk of 'Vehicular Manslaughter' being the outcome.

RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

Ya, why introduce a backup when the main processor is doing such a bang-up job of avoiding objects in front of the car? Tesla is proving their AI processor doesn't have that type of basic functionality, but instead only reacts to objects based on their classification. Not sure about the Uber yet, but it's likely the same.

2 accidents of this type from a dataset of millions of miles driven does not indicate a lack of basic functionality.

Quite the opposite, in fact.

Quote (TenPenny)

The pedestrian was clearly on an intersecting trajectory, and if the autonomous technology cannot predict this, it is useless.

The situation is much, much more complicated than that.

How does the system differentiate between a person walking on an intersecting sidewalk who is going to stop before stepping off the curb, and a person who isn't? The answer is: it can't.

If the problem you see with this incident is the system not detecting the pedestrian, you're missing the entire point. The giant problem facing the designers of these systems is NOT detecting objects- it is classifying them, predicting their paths, and knowing when their paths need to result in an altered vehicle trajectory.

This problem is much, much more difficult than you (and LH) are making it out to be.

The system doesn't reliably know how many lanes are on the road it's on; I suspect the system detected this woman and 'predicted' that she was on a sidewalk, and that her trajectory would stop before intersecting with that of the vehicle. (This is purely conjecture on my part, but based on some idea of how these systems work.)

RE: Self Driving Uber Fatality - Thread I

"Ya, why introduce a backup when the main processor is doing such a bang-up job at avoiding objects in front of the car? Tesla is proving their AI processor doesn't have that type of basic functionality, but instead is only reacting to the objects based on their classification. Not sure on the Uber yet, but it's likely the same."

A backup processor is not the same as using the radar and AI separately, since the radar is useless without the sensors for navigation, speed, steering angle, etc. A backup processor would be more like a redundant processor, fully capable of performing the navigation on its own, but possibly with a completely alternate software architecture and algorithms. Using the radar by itself could result in things like dust, smoke, and mylar balloons causing the car to abruptly stop, without any regard for whether it's safe to do that, as opposed to changing lanes. Let's not forget that the car had two other sensors, the lidar and camera array, both of which should have independently detected the pedestrian. I think everyone is making too big a deal over the concept of "classification." All three of the primary sensors should have detections of the pedestrian that should have been processed as a solid, slow moving, object that was on a collision course.

The Uber was not dodging anything; had it been, it would have tried to change lanes to outrun the pedestrian, and it certainly would have had enough time to change to the right lane and avoid the collision. And while an AI might not have millisecond response time, it should be much better than that of a human, particularly since it has precise knowledge of closing speed to obstacles.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Lots of cars have an AEBS that uses radar sensing ONLY. Trying to claim that radar can't be used by itself to EMERGENCY BRAKE for an object in front of the car is complete and utter nitpicking stupidity at its finest.

The Uber was presented with an object that was clearly visible to its sensors for plenty of time before the impact occurred. Watching the video, it appears to have failed completely at EVERY part of the classification and tracking process.

The classification part IS important. The first step these systems take seems to be classifying objects, so the system knows what the object might do based on its classification. If a woman is classified as a bush then the system can "safely" ignore her because a bush doesn't enter the roadway. Same with the Tesla and the truck. Once the trailer was classified as an overhead sign it could be ignored since it doesn't need to be avoided.

The Uber WAS in the right lane the whole time.

RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

Lots of cars have an AEBS that uses radar sensing ONLY.

Yes. Those systems are intended for detection across a very narrow, very specific range of circumstances. There's a reason why they can be so simple.

Quote (LionelHutz)

Trying to claim that radar can't be used by itself to EMERGENCY BRAKE for an object in front of the car

No one is claiming that. Systems which do that already exist. What matters is that functionality as part of a system. It seems clear from your posts that your base assumption is 'widget A does task X when left by itself to do task X. Widget A should then be able to do task X without an issue while also doing tasks Y, D, P, M, and Q'.

It just doesn't work that way. The things that an autonomous vehicle's radar systems are being asked to do are a LOT larger in quantity and complication than those being asked of the radar system for automatic cruise.

You seem to think you're comparing apples to apples, but you just aren't.

Quote (LionelHutz)

is complete and utter nitpicking stupidity at its finest.

Be nice. If you can't keep yourself from taking this so personally, maybe it's time to move on to another thread.

Quote (LionelHutz)

The Uber was presented with an object that was clearly visible to its sensors for plenty of time before the impact occurred. Watching the video, it appears to have failed completely at EVERY part of the classification and tracking process.

Visible to its sensors, likely. Known to be on a path which would, without a doubt, intersect with the vehicle's path and make an emergency braking event the best course of action? Much, much less clear. The thing you are shouting about being the problem isn't actually the problem.

Quote (LionelHutz)

The classification part IS important. The first step these systems take seems to be classifying objects, so the system knows what the object might do based on its classification. If a woman is classified as a bush then the system can safely ignore her because a bush doesn't enter the roadway. Same with the Tesla and the truck. Once the trailer was classified as an overhead sign it could be ignored since it doesn't need to be avoided.

That's not how these systems work. This is a computer, not a human brain.

Your brain sees a bush and says "that's a bush, bushes don't move, I don't have to worry about that bush leaping in front of my car".

The autonomous vehicle's systems don't recognize a bush; they see an object, and that object is either in motion relative to the vehicle's predicted path, or not. If the object is moving, it is either likely to stop moving, or not. If the object is stationary, it is either likely to start moving again, or not.

This means that the system is constantly assailed with objects fixed and in motion around it- in an urban environment, there are likely to be hundreds of them at any one time. The system has to know which ones represent threats which call for avoidance, and which ones do not. If you don't understand how intensely complicated it is to figure this out, I don't know what to tell you.

If all the system had to do was figure out if a moving object was going to intersect with the car's path, and then brake whenever that happened, the system would be relatively simple and the car would never go anywhere because every car is constantly surrounded by things which MIGHT intersect with its path.

Any time a car changed lanes nearby, a plastic bag blew across the road, a bird crossed in front of the car, a squirrel paused in the divider, a leaf blew around in the air, a pedestrian approached the road, blah blah blah- these events would ALL cause the car to stop. And the car would be useless.

The problem these systems are trying to solve is determining 1) what objects from the set of detected objects constitute a threat and 2) what objects, if any, are likely to deviate from expected behavior (such as not entering the travel lane).

These operations are, by definition, predictive. Predictive operations will ALWAYS have some failure rate, and that rate will always be non-zero.
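To make the above concrete, here is a minimal sketch (my own illustration, not any vendor's actual logic) of the kind of constant-velocity path prediction being described: extrapolate each tracked object forward and ask whether it ever enters the vehicle's lane corridor within a short time horizon. Every name and number here is an assumption chosen for the example.

```python
# Illustrative only: constant-velocity threat prediction for tracked objects.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # lateral offset from vehicle centerline, m (+ = right)
    y: float   # longitudinal distance ahead of vehicle, m
    vx: float  # lateral velocity, m/s
    vy: float  # longitudinal velocity relative to the vehicle, m/s

def threatens(track: Track, half_lane: float = 1.8,
              horizon: float = 4.0, dt: float = 0.1) -> bool:
    """True if the object's extrapolated path enters the vehicle's
    corridor (|x| < half_lane while still ahead) within `horizon` s."""
    x, y, t = track.x, track.y, 0.0
    while t <= horizon and y > 0.0:
        if abs(x) < half_lane:
            return True
        x += track.vx * dt
        y += track.vy * dt
        t += dt
    return False

# A pedestrian 3 m left of centerline, 30 m ahead, walking right at
# 1.5 m/s while the car closes at 17 m/s: flagged as a threat.
ped = Track(x=-3.0, y=30.0, vx=1.5, vy=-17.0)
# A stationary sign 4 m off the road never enters the corridor.
sign = Track(x=4.0, y=30.0, vx=0.0, vy=-17.0)
```

The hard part, as the post says, is everywhere this simple model fails: objects that stop at the curb, accelerate, or change direction, which is exactly why prediction can never have a zero failure rate.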

RE: Self Driving Uber Fatality - Thread I

I was reminded of some of those difficulties yesterday, while driving through a parking lot, and large groups of birds were in front of me, but moving out of the way as I drove through.
In years past, we would drive through the Eisenhower Tunnel around Thanksgiving. Normally, the roads would be all snowy, no visible markings, and the three marked traffic lanes would actually have two lines of cars sort of straddling the lanes. I pity the programmer that has to work through that little problem.
I've noticed on the local freeways, that truck drivers generally know which lane they need to be in ahead of time, since a good many of them drive the same routes repeatedly. That is very useful information that ought to be incorporated into any autonomous driving system. I know my commute route, and know which lane is slower when, where people are going to be merging, where there is no acceleration lane, etc.
One thing I've noticed is that in the few autonomous accidents, they tend to be severe. A lot of times, when people crash, they react too late, but still manage to shave a lot of speed off or head towards an open spot immediately prior to the crash. But when these autonomous systems don't sense something, they just plow into it at full speed, which is not too pretty.
It occurs to me that if you have an autonomous system that drives safely and carefully to the destination, a goodly percentage of current drivers will be unhappy with it- they're the bozos that are swerving in and out of lanes now. So they need to come up with a slider control in those cars that has "Prudent Patient Driver" at one end and "A-Hole Crackpot" at the other end, so people can adjust it to meet their perceived needs. Otherwise, you'll have some third-party software vendors that jump in to meet that demand.

RE: Self Driving Uber Fatality - Thread I

The cars DO classify various objects by type. Every company involved describes how their system classifies each "thing" that is detected by type such as car, tree, bike, sign, etc, etc so the system can then predict what level of threat that "thing" poses and what that "thing" might do in the future. Tesla themselves said the Tesla that went under the transport trailer classified the trailer as an overhead sign and overhead signs aren't a driving threat hence it drove under it regardless of the fact it was in the path of the car.

It's hard to be nice to people twisting my responses. I did not post that it should brake at any object that might cross into the car's path. I did not post that the parts work independently. I posted that it'd be a damn good idea to brake when any solid object IS in the path of the car. The system doesn't have to do any predictions about objects it will very shortly be interacting with. What did the Tesla system have to predict about the big concrete lane divider barrier which was dead ahead? Whether it might move out of the way in time?

RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

that it'd be a damn good idea to brake when any solid object IS in the path of the car.

This is exactly what I'm saying. You do realize that if the radar system was able to override autonomous functions and emergency brake every time it detected a stationary object in front of the car, that the system would not function and the car would never move?

Prediction of what objects will do based on a static snapshot in time is very, very hard.

Prediction of what objects will do based on a static snapshot in time is what these systems are tasked with in order to operate safely.

Quote (LionelHutz)

What did the Telsa system have to predict about the big concrete lane divider barrier which was dead ahead? If it might move out of the way in time?

Uh.. actually, yes. It most likely detected the barrier and predicted, incorrectly, that it was either out of the car's trajectory or would move out of the car's trajectory before it got there.

Stationary objects being 'revealed' when a car between the autonomous car and the object changes lanes are a difficult problem to solve- because at first glance from the system, they look like moving objects.

There was a similar incident a while back with another Tesla, where it struck the back of a stationary fire truck after another car changed lanes to miss it. That situation is difficult for the system to resolve. That incident didn't make much news (that I'm aware of) because no one was killed.

Quote (LionelHutz)

Tesla themselves said the Tesla that went under the transport trailer classified the trailer as an overhead sign and overhead signs aren't a driving threat hence it drove under it regardless of the fact it was in the path of the car.

No, they didn't- the mad genius himself tweeted this:

https://twitter.com/elonmusk/status/74862597927104...

That does not mean that inside the control system the car says "OH THAT RIGHT THERE IS AN OVERHEAD SIGN".

It means that the system is tuned to ignore radar hits similar to those produced by overhead signs.

This is a very different thing. If you don't see the difference, and think this is nitpicking.... that's an indication that you're out of your depth to be in a conversation about how these systems work.

Your posts demonstrate a pretty naive point of view about what these systems are actually capable of. What you are saying should be easy is, in fact, immensely hard. No one is 'twisting' your responses. You're saying things that sound a certain way, and getting upset when people take your posts either literally, or to their logical ends. Calling people stupid isn't going to fix your inability to properly communicate whatever it is that you're actually trying to say.

RE: Self Driving Uber Fatality - Thread I

I see your point, an autonomous vehicle shouldn't be able to predict people crossing in front of it, and should also not be able to notice a concrete abutment in its path, because, apparently, that's hard.

Math is hard, said Barbie.

RE: Self Driving Uber Fatality - Thread I

JStephen, interesting idea, the "slider", so your factory "A-Hole Crackpot" is still not aggressive enough, then you call in the third parties :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand

RE: Self Driving Uber Fatality - Thread I

Quote (TenPenny)

I see your point, an autonomous vehicle shouldn't be able to predict people crossing in front of it, and should also not be able to notice a concrete abutment in its path, because, apparently, that's hard.

Math is hard, said Barbie.

Well.. that's constructive. Way to contribute.

RE: Self Driving Uber Fatality - Thread I

Quote (jgKRI)

You do realize that if the radar system was able to override autonomous functions and emergency brake every time it detected a stationary object in front of the car, that the system would not function and the car would never move?

Why not leave the simple and reliable Automatic Emergency Braking System intact (separate system), just in case the complicated and unreliable AV system fails to see a pedestrian? Or a concrete barrier? Or a cross-traffic truck? Or a car parked ahead? Or a bright yellow barrier for a closed lane?

I see you folks arguing. But I'm not quite sure what you're arguing about...

RE: Self Driving Uber Fatality - Thread I

(OP)

Quote (VE1BLL)



Why not leave the simple and reliable Automatic Emergency Braking System intact

What happens if I want to steer during an emergency?

--
JHG

RE: Self Driving Uber Fatality - Thread I

Quote (VE1BLL)

Why not leave the simple and reliable Automatic Emergency Braking System intact (separate system), just in case the complicated and unreliable AV system fails to see a pedestrian? Or a concrete barrier? Or a cross-traffic truck? Or a car parked ahead? Or a bright yellow barrier for a closed lane?

The assumption there is that the 'simple' emergency braking system would have prevented both of these accidents.

That is, at best, a stretch.

It also means redundancy of a lot of expensive sensors (assuming you mean fully redundant systems) and/or processing capability- which represents a potentially large bottom-line hit, with unknown improvement in safety.

We're all engineers here, we'd be kidding ourselves if we didn't acknowledge that these companies are trying to find the optimized point of safety-per-dollar, not just the safest overall system.

RE: Self Driving Uber Fatality - Thread I

Asking a simple radar to determine whether an object is solid is non-trivial. "Car, tree, bike, sign" cannot be recognized by a simple radar, and possibly not even a lidar; most systems that can distinguish between these objects are vision systems, not radars.

With a 41 ft footprint at 300 ft, the radar sees, at best, a rather huge blob; even at 100 ft, the blob would be about 13 ft across, which still spans several traffic lanes. That means anything alongside the lane would be detected, even if it weren't a hazard. One could employ a phased array radar, but that would be significantly more expensive and processor-intensive than pretty much anything else in the car. A car going 40 mph requires a minimum of 76 ft stopping distance, plus processor reaction time, so 100-ft range is a must. Only the lidar and the camera array have sufficient resolution to adjudicate objects at such distances.
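The geometry above is easy to sanity-check (this is my own back-of-envelope arithmetic, not from any radar datasheet): footprint scales linearly with range for a fixed beamwidth, and braking distance follows v²/(2a). The 0.7 g deceleration figure is an assumption consistent with the quoted 76 ft.

```python
# Back-of-envelope check of the radar footprint and stopping distance.
import math

def footprint(range_ft: float, beamwidth_deg: float) -> float:
    """Diameter of the radar beam footprint at a given range."""
    return 2.0 * range_ft * math.tan(math.radians(beamwidth_deg / 2.0))

# A 41 ft footprint at 300 ft implies roughly a 7.8 deg beamwidth...
beam = 2.0 * math.degrees(math.atan(41.0 / 2.0 / 300.0))

# ...which at 100 ft still covers about 13.7 ft, several lane widths.
blob_100 = footprint(100.0, beam)

# 40 mph = 58.7 ft/s; at an assumed 0.7 g (0.7 * 32.2 ft/s^2) the
# braking distance v^2 / (2a) comes out near the 76 ft quoted above,
# before any processing or actuation delay is added.
v = 40.0 * 5280.0 / 3600.0
stop = v ** 2 / (2.0 * 0.7 * 32.2)
```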

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

jgKRI: Have you ever driven a car with adaptive cruise control or forward collision avoidance systems? They are very capable of overriding driver functions and emergency braking when they detect a stationary object; so overriding an autonomous system should be no different. They handle stationary objects being 'revealed' when a car between the autonomous car and the object changes lanes with relative ease, because it's just a matter of comparing the targets in one frame to the targets in the next frame and identifying what the closing rate is.
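The frame-to-frame comparison described above can be sketched in a few lines (illustrative only, with made-up numbers; real trackers must first solve the much harder problem of associating returns between frames):

```python
# Illustrative sketch: closing rate and time-to-collision from the same
# target's range in two successive sensor frames.

def closing_rate(range_prev_m: float, range_now_m: float, dt_s: float) -> float:
    """Positive result means the target is getting closer."""
    return (range_prev_m - range_now_m) / dt_s

def time_to_collision(range_now_m: float, rate_mps: float) -> float:
    """Seconds until impact at constant closing rate (inf if opening)."""
    return range_now_m / rate_mps if rate_mps > 0 else float("inf")

# Target at 50 m, then 48.3 m one 0.1 s frame later: 17 m/s closing,
# under 3 s to impact, which is the window for alerting or braking.
rate = closing_rate(50.0, 48.3, 0.1)
ttc = time_to_collision(48.3, rate)
```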

In these Level 1 (Driver Assistance) and Level 2 (Partial Automation) types of systems there aren't generally issues, because they supplement the driver, who is still actively engaged in driving the vehicle. Tesla's shortcoming is that they took a Level 2 type system and branded it as "Autopilot" instead of a safety or supplemental system. "The car can drive itself." Which gives the drivers a false sense of security, and leads to them driving into "overhead" signs that are really at eye level and winding up decapitated. There are 5 classifications of vehicle autonomy. Tesla accomplished the second of five. And this is called Autopilot? They should have a class-action lawsuit levied against them for false advertising alone.

Levels 3 and 4 are Conditional Automation and High Automation respectively.


Keeping a car between the lines is easy (mostly). In the incident related to this thread, Uber is beta testing (and that is generous) a Level 5 system (full automation). They have no safeguards in place (which would be simple: eye tracking, steering inputs) to ensure that the "backup" driver(s) is paying attention. Level 5 cars need to be able to discern whether a car changed lanes nearby, a plastic bag blew across the road, a bird crossed in front of the car, a squirrel paused in the divider, a leaf blew around in the air, a pedestrian approached the road, etc. AKA all the things a human driver does 95+% of the time they are operating the vehicle. They certainly need to be able to identify a bicycle moving across 40'+ of clear pavement into their path and brake and/or swerve into something they have identified as a safe place to move into. Sometimes they will have to decide who dies and who lives. They will certainly have to know that that overhead sign is only 3' above the ground and closing fast.

https://www.caranddriver.com/features/path-to-auto...

Quote:

Because no two automated-driving technologies are exactly alike, SAE International’s standard J3016 defines six levels of automation for automakers, suppliers, and policymakers to use to classify a system’s sophistication. The pivotal change occurs between Levels 2 and 3, when responsibility for monitoring the driving environment shifts from the driver to the system.

Level 0 _ No Automation
System capability: None. • Driver involvement: The human at the wheel steers, brakes, accelerates, and negotiates traffic. • Examples: A 1967 Porsche 911, a 2018 Kia Rio.

Level 1 _ Driver Assistance
System capability: Under certain conditions, the car controls either the steering or the vehicle speed, but not both simultaneously. • Driver involvement: The driver performs all other aspects of driving and has full responsibility for monitoring the road and taking over if the assistance system fails to act appropriately. • Example: Adaptive cruise control.

Level 2 _ Partial Automation
System capability: The car can steer, accelerate, and brake in certain circumstances. • Driver involvement: Tactical maneuvers such as responding to traffic signals or changing lanes largely fall to the driver, as does scanning for hazards. The driver may have to keep a hand on the wheel as a proxy for paying attention. • Examples: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Tesla Autopilot, Volvo Pilot Assist.

Level 3 _ Conditional Automation
System capability: In the right conditions, the car can manage most aspects of driving, including monitoring the environment. The system prompts the driver to intervene when it encounters a scenario it can’t navigate. • Driver involvement: The driver must be available to take over at any time. • Example: Audi Traffic Jam Pilot.

Level 4 _ High Automation
System capability: The car can operate without human input or oversight but only under select conditions defined by factors such as road type or geographic area. • Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets then become a passenger as the car enters a highway. • Example: Google’s now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.

Level 5 _ Full Automation
System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate. • Driver involvement: Entering a destination. • Example: None yet, but Waymo—formerly Google’s driverless-car project—is now using a fleet of 600 Chrysler Pacifica hybrids to develop its Level 5 tech for production

RE: Self Driving Uber Fatality - Thread I

"This is exactly what I'm saying. You do realize that if the radar system was able to override autonomous functions and emergency brake every time it detected a stationary object in front of the car, that the system would not function and the car would never move?"

Bullshit, or every car with a functioning AEBS would never move.

When do you stop predicting what the barrier might do and actually apply the brakes, or at least start slowing down? 200' away, 150' away, 100' away, 50' away or just never because it's predicted to move out of the path?

How is the radar tuned exactly? What does Musk mean by "looks like an overhead sign"? The software is deciding the data from a certain area represents an overhead sign, and because it's a sign it can be ignored.


"2 accidents of this type from a dataset of millions of miles driven does not indicate a lack of basic functionality."

If you want to tout how safe it is then the important statistic should be how many times did the human have to intervene to avoid an accident.

And other bad accidents shouldn't be counted just because no-one died?

RE: Self Driving Uber Fatality - Thread I

There are places in town where the homeless will walk out into traffic with no warning, outside any crosswalk. We drivers are, for some reason, expected to stop.
Most drivers know that and slow down in that area.

I don't believe GPS maps advise drivers to slow down, and maybe they should.

And to add to a point that someone made, in construction zones it is typical for the cones to direct traffic to cross several lines. And in a few cases there are lines that apply only if you are turning.

So what I hear is that the AI is only smart enough for the boring driving, but not smart enough for special cases. So how would I, as a car owner, know in what ways the AI gets confused?

What of other special cases? Lanes marked by overhead X's or arrows? School zone lights? High winds? Smoke or grass fires? Tumbleweeds?

RE: Self Driving Uber Fatality - Thread I

Quote (Spartan5)

jgKRI: Have you ever driven a car with adaptive cruise control or forward collision avoidance systems?

Every day. I own two of them.

On the Honda the adaptive cruise gets turned off frequently- it gets really pissed when trying to follow a target around a curve that's below a certain radius. These systems aren't flawless.

Quote (Spartan5)

They are very capable of overriding driver functions and emergency braking when they detect a stationary object; so overriding an autonomous system should be no different. They handle stationary objects being 'revealed' when a car between the autonomous car and the object changes lanes with relative ease, because it's just a matter of comparing the targets in one frame to the targets in the next frame and identifying what the closing rate is.

I think the disconnect here is that you are interpreting my stance as "the task being asked of this system is impossible".

I am most certainly not saying that. What I'm saying is that these systems have, and will always have, a nonzero failure rate. Only the catastrophic failures make news; you don't hear a story on CNN every time a Volvo plows into the back of something, so long as no one dies- and this occurrence is more common than you might think. Google it if you don't believe me- a stationary object revealed in front of an automated vehicle is a legitimate situation that causes problems. These systems fail every day. But taken as a whole (i.e. the number of failures against the number of opportunities for failures to happen) the rates are extremely low. But still nonzero.

That's what I'm saying.

Quote (LionelHutz)

Bullshit or every car with a functioning AEBS would never move.

You're missing the point again. It's not about detection- it's about tuning. It's a software programming problem much more than a hardware problem. If every stationary object detected by AEBS caused the car to stop, the car would never move, because there is always a stationary object nearby. What makes the system function is the software which determines which objects are stop-worthy and which aren't.

When the system now has to handle other tasks, such as route planning, that involve the same functions under AEBS control, the system has a couple dozen additional degrees of freedom and the level of complication goes up by a couple orders of magnitude. I honestly don't understand how you don't seem to understand that.
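One common way to frame the "stop-worthy" tuning decision (this is a textbook-style sketch, not a claim about how Uber or Tesla actually implement it) is to compute the deceleration required to stop short of the object, and only fire emergency braking once that exceeds a threshold; everything softer is left to normal path planning:

```python
# Illustrative AEB gating: brake hard only when hard braking is needed.

def required_decel(closing_speed_mps: float, range_m: float) -> float:
    """Constant deceleration needed to stop within `range_m`: v^2 / (2d)."""
    return closing_speed_mps ** 2 / (2.0 * range_m)

def should_emergency_brake(closing_speed_mps: float, range_m: float,
                           threshold_mps2: float = 6.0) -> bool:
    """Fire AEB only when the required deceleration exceeds the threshold
    (assumed here at 6 m/s^2); milder cases are left to the planner,
    which may steer, coast, or brake gently instead."""
    return required_decel(closing_speed_mps, range_m) >= threshold_mps2

# Closing at 17 m/s (~40 mph) on a stationary object:
# at 60 m only ~2.4 m/s^2 is needed, so no AEB yet;
# at 20 m it takes ~7.2 m/s^2, so emergency braking fires.
```

The whole debate in this thread is, in effect, about where that threshold sits and how reliably the inputs to it (range, closing speed, object solidity) can be trusted.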

Quote (LionelHutz)

When do you stop predicting what the barrier might do and actually apply the brakes, or at least start slowing down? 200' away, 150' away, 100' away, 50' away or just never because it's predicted to move out of the path?

If you find an answer to this question which has a zero failure rate over tens of millions of accumulated system miles driven, you should start your own company building autonomous vehicles.

Quote (LionelHutz)

How is the radar tuned exactly?

Really? The processor takes the radar system's output and decides what to ignore and what not to ignore. Software.

Quote (LionelHutz)

What does Musk mean by "looks like an overhead sign"?

He means that early on in testing they identified that overhead signs created radar returns which caused unnecessary braking events, so they tuned the system to ignore inputs of that type.

Quote (LionelHutz)

The software is deciding the data from a certain area represents an overhead sign and because it's a sign it can be ignored.

No, it isn't. If you were able to look at the internal logic tree of the software, there isn't a variable called 'overhead sign' that gets set to yes or no. There isn't a table of classes of objects. There's a bunch of equations relating to object apparent sizes and trajectories, and a bunch more equations dictating what radar return characteristics constitute threats and what returns don't.

This isn't how robot vision software works- if you've never written any, I don't know how else to explain it to you. But classifying objects would require 1) a complete set of all objects which could ever be encountered by the system (which is impossible, and would be dangerous to attempt) and 2) that classification step be completed reliably before any processing or path prediction could take place. If the system was designed this way it would be waaaaaaaaaaay too slow to be safe.
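The distinction above can be illustrated with a toy rule (entirely my own construction; no vendor publishes this logic): the filter keys on return *characteristics* such as elevation geometry, not on a named class. A return whose height at its range clears the vehicle gets tuned out, which is how an "overhead sign" can be ignored without any 'sign' variable existing anywhere in the code:

```python
# Illustrative only: suppress radar returns that will pass over the car.
import math

def passes_overhead(range_m: float, elevation_deg: float,
                    clearance_m: float = 2.0) -> bool:
    """True if the return's height at its range exceeds an assumed
    vehicle clearance, so it is filtered out as a non-threat."""
    height = range_m * math.sin(math.radians(elevation_deg))
    return height > clearance_m

# A return 40 m out at 6 deg elevation sits about 4.2 m up: suppressed.
# The same return at 1 deg sits about 0.7 m up: kept as a potential threat.
```

The Tesla trailer case shows the failure mode of such tuning: a geometry that matches the "ignore" characteristics can belong to something very much in the car's path.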

Quote (LionelHutz)

If you want to tout how safe it is then the important statistic should be how many times did the human have to intervene to avoid an accident.

If you think I'm 'touting' anything, you're either not reading my posts, or you've decided what I mean before reading my posts.

I think this technology is many years from being ready to deliver what the developers are trying to sell today. But I also believe in interpreting the data that is actually available, not making guesses about things I don't know. And I don't know how many times the human has had to intervene to avoid an accident. I bet Uber does.

Quote (LionelHutz)

And other bad accidents shouldn't be counted just because no-one died?

That's not what I said. What I said was that the accident didn't make news because no one died. So there's been a lot less focus on that particular incident.

RE: Self Driving Uber Fatality - Thread I

My impression was that AEBS technology seems to be working relatively well. They're being deployed on many brands of cars. They're certainly not making the news. My resultant assumption is that they're a relatively mature technology.

"...would have prevented both of these accidents."

Both? I had listed five examples above. Videos on YouTube, or just browse and you'll probably find many others I've not seen.

"...a stretch."

Well, that's what AEBS is designed to do. So it's my assumption (see rationale above) that it does what it's supposed to do almost all of the time. Hardly seems like "a stretch", unless you mean that it might occasionally fail, then sure, agreed. Being a safety intervention system, AEBS doesn't have to be 99.9999999% accurate. 98% is probably fine.

I have no idea how you arrived at it being "a stretch".

RE: Self Driving Uber Fatality - Thread I

Quote (jgKRI)

I think the disconnect here is that you are interpreting my stance as "the task being asked of this system is impossible".

I am most certainly not saying that. What I'm saying is that these systems have, and will always have, a nonzero failure rate. Only the catastrophic failures make news; you don't hear a story on CNN every time a Volvo plows into the back of something, so long as no one dies- and this occurrence is more common than you might think. Google it if you don't believe me- a stationary object revealed in front of an automated vehicle is a legitimate situation that causes problems. These systems fail every day. But taken as a whole (i.e. the number of failures against the number of opportunities for failures to happen) the rates are extremely low. But still nonzero.
I didn't misinterpret you. I quoted you verbatim and pointed out that this wasn't the case.

I'm not suggesting there will ever be a zero incident rate. But don't suggest that these are newsworthy because they are catastrophic. They are newsworthy because they should have been within the capabilities of these systems. Tesla's "autopilot" shouldn't autopilot cars into stationary, immovable objects, or the sides of semi-trucks that can be seen for hundreds of feet. Unless it really isn't an autopilot at all, but a supplemental system. They should be called on that and taken to task for misrepresenting what it is, and/or take a more aggressive stance in ensuring that drivers are using it like the Level 2 system that it really is.

And Uber shouldn't be beta-testing a fully automated Level 4/5 car on public streets without, at a minimum, proper safeguards in place to ensure the "backup" driver(s) is an active participant in the endeavor. That's what's newsworthy about this.

RE: Self Driving Uber Fatality - Thread I

Quote (VE1BLL)

unless you mean that it might occasionally fail, then sure, agreed.

This is exactly what I've been saying this entire time. That the failure rate is small but nonzero. That's it.

RE: Self Driving Uber Fatality - Thread I

and from the NTSB:

"In each of our investigations involving a Tesla vehicle, Tesla has been extremely co-operative on assisting with the vehicle data.

"However, the NTSB is unhappy with the release of investigative information by Tesla."

I wonder who made them the guardians of the 'truth'. Potential for a dangerous overstep. I'm trying to find an article where Musk summarised accident information.

Dik

RE: Self Driving Uber Fatality - Thread I

"What happens if I want to steer during an emergency?"

I presume that the ABS is not overridden, so some steering control may still be present. Assuming that your face isn't being torn off by the negative g force *.

The AEBS that I've seen wait until the last possible time to brake. Probably too late to steer around anything.

Some of the earlier models didn't even bother to prevent the accident. Merely reduce the impact. Covered in some detail on the Fifth Gear TV program.

* Once upon a time, I triggered off the Mercedes Brake Assist System (BAS) by moving my foot between the accelerator and brake pedals far too quickly (for a reason). Oh. My. Gawd. The pedal was pulled down under my foot (felt soft), my head was yanked forward, the seatbelt tightened (motorized tighteners), and all I could see was the speedometer unwinding like a tach. The ABS worked in parallel; wheels were not quite locked up. Once the speedo reached a satisfactory value (maybe two seconds), I released some foot pressure, the BAS disengaged, the brake pedal popped back up, and I drove around the corner at a reasonable speed.

RE: Self Driving Uber Fatality - Thread I

"That the failure rate is small but nonzero. That's it."

Okay.

Safety systems can get away with 98% or 99% or 99.9%. And that's probably fine, both ethically and legally. Sort of like the 'Good Samaritan' concept: make a good effort, and occasional failures to prevent harm would still be acceptable.

AV systems need to be 99.9999...% reliable; otherwise they'll be involved in an endless string of inexplicable accidents. The ethical considerations are much different. Merely improving on human driver accident rates is in itself insufficient, due (for example) to liability concentration. Widespread deployment of imperfect AVs would also require legislation to sort out the liability, for the greater good (if so decided).

The distinction between these two ethical polarities (preventing harm versus causing it) is not as clear cut as I've described. It's unfortunately more muddled. But the distinction is important.

RE: Self Driving Uber Fatality - Thread I

Quote (Spartan5)

I didn't misinterpret you. I quoted you verbatim and pointed out that this wasn't the case.

You most certainly didn't quote me verbatim. There's no quote of mine, in a post of yours, that I can find in the lower half of this thread.

Quote (Spartan5)

I'm not suggesting there will ever be a zero incident rate. But don't suggest that these are newsworthy because they are catastrophic. They are newsworthy because they should have been within the capabilities of these systems.

If you're comfortable with these systems having a nonzero rate of failure... exactly what type of failures, then, are not newsworthy?

Saying that they are newsworthy because people were killed isn't conjecture.. it's a statement of fact. In this very thread VE1BLL has come up with several other instances, and I have contributed one as well, where these systems failed. The ones where no one died got a lot less coverage. That's an indictment of our society much more than it is an indictment of anything related to the topic we are discussing here.

Quote (Spartan5)

Tesla's "autopilot" shouldn't autopilot cars into stationary, immovable objects; or the sides of semi-trucks that can be seen for hundreds of feet. Unless it really isn't an autopilot at all, but a supplemental system.

Once again.. if the failure rate being nonzero is ok, what failures are you willing to accept?

Quote (Spartan5)

They should be called out and taken to task for misrepresenting what it is, and/or made to take a more aggressive stance in ensuring that drivers use it like the Level 2 system it really is.

I couldn't agree more on this point. The tech isn't ready. I've been making that argument too, in this thread I've said it at least twice. I'm not in here advocating for Tesla or Uber or anyone else.

RE: Self Driving Uber Fatality - Thread I

Quote (VE1BLL)

AV systems need to be 99.9999...% reliable; otherwise they'll be involved in an endless string of inexplicable accidents. The ethical considerations are much different. Merely improving on human driver accident rates is in itself insufficient, due (for example) to liability concentration.

I couldn't agree more- although I think your percentage for success needs about 20 more 9s before the general public will be ok with things.

Quote (VE1BLL)

Widespread deployment of imperfect AVs would also require legislation to sort out the liability, for the greater good (if so decided).

This is already happening- and from my chair it's a major problem that probably won't get solved until way too late.

RE: Self Driving Uber Fatality - Thread I

"He means that early on in testing they identified that overhead signs created radar returns which caused unnecessary braking events, so they tuned the system to ignore inputs of that type."

"No, it isn't. If you were able to look at the internal logic tree of the software, there isn't a variable called 'overhead sign' that gets set to yes or no. There isn't a table of classes of objects. There's a bunch of equations relating to object apparent sizes and trajectories, and a bunch more equations dictating what radar return characteristics constitute threats and what returns don't."

Contradict yourself much?

I NEVER posted anything about these tables you're trying to push on me. Since these equations determine whether an object is a threat or not, I can rightly say the objects are classified by the equations. Maybe not as the exact object, but as an object that is one to be concerned about, or not. I'd bet it goes much further than just flagging an object that could be a threat, and gets into classifying those objects into further subcategories depending on the level of threat and the characteristics that type of object exhibits. For all intents and purposes, it detects certain objects as overhead "things", one of the possibilities being an overhead sign. And at this point, I couldn't care less about arguing these stupid semantics with you any further.

How do those equations get created exactly? Do they just appear out of thin air?

Tesla said they first recorded a whole bunch of sensor data from the cars on the road (something like a year or more's worth) with the hardware before they rolled out any part of their autopilot software to the public. What do you think that data was used for? It was used to help build those equations you're going on about.

RE: Self Driving Uber Fatality - Thread I

They're based on a LOT of data characterization, as you're stating.

Not really sure what the argument is that you're trying to make at this point.

RE: Self Driving Uber Fatality - Thread I

"Not really sure what the argument is that you're trying to make at this point."

Same here.

RE: Self Driving Uber Fatality - Thread I

Quote (LionelHutz)

Contradict yourself much?

No.

You're seeing semantics where discrete points exist.

Google 'Fourier transform + image processing'. If you like the math, it is actually a pretty interesting topic.

That is how the algorithms used in these systems are created and modified over time.

A computer does not process or recognize images in the same way that your brain does. Until you understand that, you'll continue to not understand why this stuff is not as easy as we would all like for it to be.

RE: Self Driving Uber Fatality - Thread I

Meanwhile, every other industry (Aircraft, Ships, Submarines) that has had autopilots for a long time is shaking their heads in disbelief at the idea that you should only call something an autopilot if it is clever enough to stop you bumping into inconvenient obstacles.

Bumping into inconvenient obstacles is the one thing that autopilots have historically been good at.

A.

RE: Self Driving Uber Fatality - Thread I

I was a little surprised to find that the predictions for the paths of targets are done using Kalman filtering. This seems to me to be optimistic. A pedestrian can sidestep, for example. Once a target is classified, its future position should be an expanding cloud of all possible trajectories. Humans can accelerate at about 1g.

This is irrelevant in the Uber case: since they were on a collision course, her relative bearing remained constant, and a Kalman prediction would have been fine.
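To put rough numbers on that expanding cloud (every value here is assumed, purely for illustration): a constant-velocity prediction yields a single point, while the set of positions a pedestrian can actually reach by accelerating at up to ~1 g grows with the square of the lookahead time.

```python
# Toy comparison (all numbers assumed, purely illustrative): a constant-velocity
# Kalman-style prediction gives a single future point, while the set of positions
# a pedestrian could reach by accelerating at up to ~1 g grows with t^2.

G = 9.81  # m/s^2, assumed maximum pedestrian acceleration (~1 g)

def point_prediction(x0, v, t):
    """Constant-velocity prediction: one future position."""
    return x0 + v * t

def reachable_radius(t, a_max=G):
    """Radius of the disc of positions reachable by deviating at a_max."""
    return 0.5 * a_max * t * t

x0, v = 0.0, 1.4               # walking pedestrian: position (m), speed (m/s)
for t in (0.5, 1.0, 2.0):      # lookahead horizons (s)
    print(f"t={t:.1f}s  predicted={point_prediction(x0, v, t):.2f} m  "
          f"reachable +/- {reachable_radius(t):.2f} m")
```

At a 2-second horizon the reachable radius is nearly 20 m, dwarfing the 2.8 m point prediction; that gap is the difference between filtering and worst-case planning.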

Back on the LiDAR and reflectivity. If her clothing was dark wool it might have a low reflectivity like asphalt. If they filtered out all returns with a reflectivity of about the same as asphalt as a first step to eliminate irrelevant data, then she might have been edited out of the LiDAR picture.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?

RE: Self Driving Uber Fatality - Thread I

Quote (GregLocock)

I was a little surprised to find that the predictions for the paths of targets are done using Kalman filtering. This seems to me to be optimistic. A pedestrian can sidestep, for example. Once a target is classified, its future position should be an expanding cloud of all possible trajectories. Humans can accelerate at about 1g.

If not Kalman, which way do you go? Do you add another layer of complexity and go Bayesian? If you do that, you wind up having to estimate a lot of non-linearities, and you also add the problem of differentiating non-linearity from detection-system noise. Either way you basically have to use direct methods, based on current processor speeds and the processing time of images from the visual part of sensor packages. I think once the hardware is capable, the change to indirect methods of path optimization will yield some gains in efficiency and safety. But that's a ways off.
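For anyone who hasn't played with one, here is a minimal 1-D constant-velocity Kalman predict/update step. All the noise values are invented for illustration and have nothing to do with any real AV stack.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter (illustrative noise values).
# State x = [position, velocity]; measurement z = position only.

dt = 0.1                                   # assumed sensor frame interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = np.eye(2) * 0.01                       # process noise: how wrong the model may be
R = np.array([[0.25]])                     # measurement noise: sensor jitter

def kalman_step(x, P, z):
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend prediction with measurement via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0])   # initial guess: stationary at origin
P = np.eye(2)              # initial uncertainty
for z in [0.14, 0.28, 0.42, 0.56]:         # target actually moving at 1.4 m/s
    x, P = kalman_step(x, P, np.array([z]))
print(f"estimated velocity: {x[1]:.2f} m/s")
```

The filter's velocity estimate takes several frames to settle, which is exactly the window in which a sidestepping pedestrian defeats the point prediction.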

Quote (zeusfaber)

Meanwhile, every other industry (Aircraft, Ships, Submarines) that has had autopilots for a long time is shaking their heads in disbelief at the idea that you should only call something an autopilot if it is clever enough to stop you bumping into inconvenient obstacles.

With all due respect... Autopilots for planes/ships/submarines have to deal with much lower quantities of obstacles, approaching over much larger time scales, and following much more predictable paths. It doesn't perplex me one bit that we had aircraft autopilot figured out decades ago but don't have cars figured out yet.

RE: Self Driving Uber Fatality - Thread I

Greg

Lots of what is being claimed and done is based on optimism.

As for what happened, I'm doubtful we will ever really know. The NTSB report on the Tesla death didn't get into the reasons why the autopilot failed to react. So I'm doubtful the details of what happened within the Uber system during this accident will be investigated or reported. And it's not like Uber will tell, unless something is leaked or a whistle is blown.

RE: Self Driving Uber Fatality - Thread I

Quote (zeusfaber)

Meanwhile, every other industry (Aircraft, Ships, Submarines) that has had autopilots for a long time is shaking their heads in disbelief at the idea that you should only call something an autopilot if it is clever enough to stop you bumping into inconvenient obstacles.

And yet, no one, to date, has brought to market an obstacle avoidance system (OASYS) for aircraft, and THAT concept has been around for nearly 30 years for helicopters. Helicopters fly high, most of the time, specifically to avoid dealing with obstacle avoidance. NVESD paid for and flew an OASYS in 1994 on an AH-1 Cobra. That system only provided visual and aural warnings to the pilot. Aside from getting a decent ROC curve, cost was a serious problem, even though the platforms being protected are substantially more expensive than a typical car, even one that's tricked out with a boatload of sensors. But much of the cost had to do with getting sufficient processing power and developing sufficient algorithms.

The data is meaningless without the algorithms, and even sensors with algorithms need navigation information, as well as a rock-solid moving map representation of the local universe, including what to do with data, over time, and what to do with conflicting data. Conflicting data invariably arise, and having the vehicle stop or turn at every single glitch of data will be just as bad as missing real obstacles. This is why the ROC curve is so important. Not only do you have to have a gazillion 9s of detecting real targets, you need a gazillion inverse 9s for the false alarm rate. For typical driving, a false alarm rate on the order of 1 per week or less would be needed. That precludes any sensor that doesn't do a serious amount of processing on the data.
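The back-of-envelope arithmetic on that false-alarm budget is sobering. The frame rate and weekly driving hours below are assumed numbers, purely for illustration:

```python
# Rough false-alarm budget (assumed numbers, purely illustrative).
# Target: no more than one phantom-braking event per week of driving.

frames_per_second = 10          # assumed sensor/decision rate
driving_hours_per_week = 10     # assumed typical weekly driving time

decisions_per_week = frames_per_second * 3600 * driving_hours_per_week
max_false_alarm_rate = 1 / decisions_per_week

print(f"decisions per week: {decisions_per_week:,}")
print(f"max per-decision false alarm rate: {max_false_alarm_rate:.2e}")
# 360,000 decisions/week means a per-frame false-positive rate below ~2.8e-06,
# i.e. several 9s on the specificity axis of the ROC curve alone.
```

And that is before demanding the gazillion 9s on the detection axis at the same operating point.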

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

"And yet, no one, to date, has brought to market an obstacle avoidance system (OASYS) for aircraft..."

I was wondering about that when zeusfaber mentioned autopilots for aircraft. The typical autopilot feature on aircraft has more in common with an automobile cruise control than anything else, does it not? I believe the response from Tesla about the collision with the truck was that their "autopilot" was not meant to be a self-driving system, but a driver assistance feature.

RE: Self Driving Uber Fatality - Thread I

Quote (jgKRI)

Once again.. if the failure rate being nonzero is ok, what failures are you willing to accept?

As long as the rate of failure is significantly better than what exists now... How much better is a question. 10% better is still 'better'... The start-up period is likely to be fraught with problems, and the software/hardware will require numerous revisions. There is the potential for much better road safety than what exists now, and 'driverless' vehicles will eventually be 'talking' to each other and providing each other with a 'drive plan' so positions can be anticipated.

Dik

RE: Self Driving Uber Fatality - Thread I

Quote (dik)

There is the potential for much better road safety than what exists now, and 'driverless' vehicles will eventually be 'talking' to each other and providing each other with a 'drive plan' so positions can be anticipated.

I certainly believe it's true that better/safer systems are in our future- where this argument (not that you invented it- it's a common one) loses traction for me is exactly such cases as we're discussing here.

From what I can tell, accidents due to an autonomous vehicle colliding with another vehicle are the minority; the majority of accidents with autonomous vehicles, at least the ones that get press, are collisions between autonomous vehicles and objects or pedestrians doing abnormal things or being in abnormal places.

Point is.... those types of problems are not solved by cars talking to each other. So how do they get solved?

Ultimately, better detection and processing are needed and that's a steep hill to climb.

RE: Self Driving Uber Fatality - Thread I

(OP)
GregLocock,

My old employer manufactured laser rangefinders that had to detect the bottom of potash holes. I recall that the reflectivity was something like 20%, and that the rangefinders were rated to do this at 160m.

From a LiDAR point of view, bicycles are transparent. You get a return from the bicycle, and you get a return from the stuff behind the bicycle. We made LiDARs that had first- and last-pulse logic. From an aircraft, the first return comes from the top of the trees, and the last comes from the ground. What is the difference between a woman and a bicycle, and a tumbleweed?
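A sketch of what that first/last-pulse logic amounts to, plus the reflectivity-filtering pitfall, using hypothetical returns (this is not any real LiDAR API):

```python
# First/last-return logic for a single LiDAR pulse (hypothetical data).
# A "transparent" object like a bicycle yields multiple returns per pulse:
# one from the frame/rider, one from whatever is behind them.

def first_last_returns(returns):
    """Given (range_m, reflectivity) pairs for one pulse, return the
    nearest and farthest returns, or None if nothing came back."""
    if not returns:
        return None
    ordered = sorted(returns, key=lambda r: r[0])
    return ordered[0], ordered[-1]

# One pulse through a bicycle: 12 m to the frame, 31 m to the wall behind it.
pulse = [(31.0, 0.35), (12.0, 0.08)]
first, last = first_last_returns(pulse)
print(f"first return at {first[0]} m, last at {last[0]} m")

# Pitfall: if low-reflectivity returns (dark clothing, about the same as
# asphalt) are filtered out as clutter before this step, the near return
# can vanish entirely.
MIN_REFLECTIVITY = 0.10   # assumed clutter threshold
filtered = [r for r in pulse if r[1] >= MIN_REFLECTIVITY]
print(f"returns surviving the reflectivity filter: {filtered}")
```

With the assumed threshold, only the far return survives, so the near obstacle is edited out of the picture exactly as hypothesized above.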

--
JHG

RE: Self Driving Uber Fatality - Thread I

The woman wasn't a tumbleweed, and was certainly solid enough that any AI should have considered her a possible collision object, particularly the closer the car got to her. Moreover, Uber has contaminated our perception of the collision with the crappy video they allowed the police to release. The actual camera data would likewise have shown the woman and her bicycle, and the camera and lidar data fused together certainly should have set off warning bells in the software. The lack of any reaction is really disturbing. And let's not forget that the car ostensibly also had a radar, which should have gotten decent returns from the bicycle, particularly at the short ranges just prior to impact.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm

RE: Self Driving Uber Fatality - Thread I

Quote (JStephen)

But when these autonomous systems don't sense something, they just plow into it at full speed, which is not too pretty.

For all of the discussion (arguing?) here, I come back to one simple thought... the car didn't try to brake at all before hitting a sensor-solid object bigger than a breadbox.

All of this discussion about object classification is irrelevant to the above fact. It doesn't matter WHAT the object in front of the car was classified as, because it was in front of the car for an extended period of time (>1 second, plenty of time for practically every sensor to recognize it as existing).

I could likely ask a high school programmer to figure out the logic of applying brakes when "Object directly in front of car is closing in at same speed car is moving", i.e., "I'm approaching a stationary object directly in my path".
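That fallback logic really is only a few lines. Here is a deliberately naive sketch, with every threshold invented for illustration:

```python
# Deliberately naive last-resort braking check (all thresholds invented).
# If an object sits in our path and its closing speed matches our own speed,
# it is stationary and we are driving straight at it.

def should_emergency_brake(own_speed, closing_speed, range_m,
                           ttc_threshold_s=2.0, speed_tol=0.5):
    if closing_speed <= 0:
        return False                      # object is moving away or keeping pace
    stationary = abs(closing_speed - own_speed) < speed_tol
    time_to_collision = range_m / closing_speed
    return stationary and time_to_collision < ttc_threshold_s

# ~17.4 m/s (about 40 mph), stationary object 30 m ahead: TTC ~ 1.7 s.
print(should_emergency_brake(17.4, 17.4, 30.0))   # inside the TTC threshold
print(should_emergency_brake(17.4, 17.4, 60.0))   # TTC ~ 3.4 s, not yet
```

A real system obviously needs far more than this (sensor validation, false-alarm suppression, steering trade-offs), but as a floor it would have shed speed in the 1+ second window discussed above.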

Anywhere within that 1+ second interval the car could have at least shed some speed, but it didn't. It went headlong into the object.

I have to make certain assumptions, though they may be incorrect. First and foremost, the software recognizes when sensor data isn't being properly received... missing data from an unplugged cable, high incidence of out-of-range data (i.e., the sensor itself is going bad), etc. Second, the Kalman filter is set up correctly... the wrong coefficients here can easily lead to incorrect predictions, particularly if those predictions are based upon "tamed" data (i.e., the system has not been tested with impulse data, like someone stepping out from behind a tree and jumping into the travel lane). If the first issue has not been handled correctly (bad/corrupted sensor data), then the output of the filter would also be highly suspect.

This is where I'm leaning... a mixture of bad sensor data and poorly trained prediction filters meant the car initially didn't have enough valid data to operate on (or its valid data was too heavily tainted with bad data), and once the well had been poisoned, the "reaction" algorithms couldn't make a proper decision. It's possible that, given valid/untainted data and a few more seconds to move that data through the Kalman filter, a valid decision would have been reached. I would really love to see the dataset it was working from, from about 30 seconds before to 10 seconds after the accident.

Dan - Owner
http://www.Hi-TecDesigns.com

RE: Self Driving Uber Fatality - Thread I

"All of this discussion about object classification is irrelevant to the above fact. It doesn't matter WHAT the object in front of the car was classified as, because it was in front of the car for an extended period of time (>1 second, plenty of time for practically every sensor to recognize it as existing)."

When the system only reacts to objects that might pose a collision threat, but has classified the object in front of the car as one it believes can be ignored, then it will blissfully ignore it.

Oh wait, I'd better say that when the processor decides the data from the object is classified as data to be ignored, then it ignores that data and for all intents and purposes is blind to the object even when it is in the path of the car, just in case only saying object "offends" someone yet again.

If you didn't notice, my pointing out that the car should have some type of basic backup emergency braking function started a whole bunch of the stupid mess you see above.
