Self Driving Uber Fatality - Thread II
(OP)
Continued from thread815-436809: Self Driving Uber Fatality - Thread I
Please read the discussion in Thread I prior to posting in this Thread II. Thank you.
--
JHG
RE: Self Driving Uber Fatality - Thread II
- The vehicle moves along the road at some maximum safe velocity, defined by the conditions that follow.
- If the vehicle sees a hazard, it must have the option of decelerating to a complete stop at 1/2G.
- There are various moving hazards. A hazard 0.6m (2ft) tall by 0.3m (1ft) across simulates a small child, who is capable of running at 2m/s. An alternate target could be bigger and faster.
- The LiDAR must have complete coverage, i.e., there cannot be gaps between the laser spots.
- The laser spots must be small enough to resolve the target. It does not have to identify Billy Smith of 123 Any Street, but it must be able to see that there is an object there.
- The scan rate and the resolution must be enough that the object cannot move more than 50% out of the position in which it was spotted during the previous scan. This gives the AI a chance to recognize that these are the same objects.
- The target may not be running in a straight line. I think a straight line across at full speed is the biggest problem, but I am not sure.
- The time between the laser firing and the receiver capturing the return must be less than the laser pulse period. We don't want the return from one pulse arriving after the next pulse has fired.
- We need some AI solution for recognizing and rejecting laser spots fired by the LiDAR of the vehicle next to the robot.
- Almost all of the problems with speed and resolution are in front of the vehicle. If the vehicle has two LiDARs, one can watch forward with a 40° FOV, and the other can scan more slowly at lower resolution at 360°. On the highway, scary things behind you are big and close. If you are backing up, you are doing it at low speed.
Note how this imposes limits on the speed of the vehicle, as well as on the field of view of the laser and receiver. This deceleration is gentle enough that a vehicle behind should be able to match it and not rear-end the robot.
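As a sanity check on those requirements, here is a minimal sketch of the kinematics (illustrative only; the numbers are the ones from the list above, with the 50% rule applied to the small-child target):

CODE --> Python

# Sanity check of the requirements above; illustrative only.
G = 9.81          # m/s^2
DECEL = 0.5 * G   # the 1/2 G braking requirement

def stop_distance(v_mps):
    """Distance to a full stop at 1/2 G from speed v (m/s)."""
    return v_mps ** 2 / (2.0 * DECEL)

# Child-sized target: 0.3 m across, running at 2 m/s.
TARGET_WIDTH = 0.3   # m
TARGET_SPEED = 2.0   # m/s

# The target may move at most 50% of its width between scans, so the
# AI can associate this scan's blob with the previous scan's blob.
max_shift = 0.5 * TARGET_WIDTH            # 0.15 m
scan_rate = TARGET_SPEED / max_shift      # ~13.3 Hz minimum

v = 80 / 3.6  # 80 km/h in m/s
print(f"Stop from 80 km/h: {stop_distance(v):.1f} m")
print(f"Minimum scan rate for track association: {scan_rate:.1f} Hz")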
--
JHG
RE: Self Driving Uber Fatality - Thread II
When you add stationary objects, it gets more complicated. Those objects have also moved relative to the LIDAR array. Have they moved relative to the car's predicted path?
What happens when a detected object (such as the small child) moves behind a stationary object (phone booth, mailbox, whatever) and is missing from the data set for a few frames? How does the system handle motions which deviate from what it 'predicts'? That's where things get difficult.
RE: Self Driving Uber Fatality - Thread II
It was a fun project... I later did my first attack-resistant reception for this same firm.
Dik
RE: Self Driving Uber Fatality - Thread II
What image processor were you using, or did you write one yourself? How did you handle filtering of the altered POV once the images were converted to the frequency domain (assuming that's the method used...)?
RE: Self Driving Uber Fatality - Thread II
Dik
RE: Self Driving Uber Fatality - Thread II
This was solved in the helicopter OASYS by placing each detected object into a virtual world within which the vehicle moved. Each object is time-tagged for a certain level of persistence, so that if one is moving into your path, you potentially have the option to stop or to move into the space it vacated.
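A minimal sketch of that sort of time-tagged object store; the persistence window and field names are my assumptions, not anything from OASYS:

CODE --> Python

# Hypothetical time-tagged world model with persistence, in the spirit
# of the OASYS approach described above (not actual OASYS code).
from dataclasses import dataclass

PERSISTENCE_S = 2.0  # assumed: how long an unseen object stays in the model

@dataclass
class TrackedObject:
    obj_id: int
    x: float          # position in world frame, m
    y: float
    last_seen: float  # timestamp of last detection, s

class WorldModel:
    def __init__(self):
        self.objects = {}

    def update(self, obj_id, x, y, t):
        self.objects[obj_id] = TrackedObject(obj_id, x, y, t)

    def prune(self, t_now):
        # An object that drops out of view (e.g., behind a mailbox) is
        # kept for PERSISTENCE_S seconds rather than forgotten instantly.
        self.objects = {k: o for k, o in self.objects.items()
                        if t_now - o.last_seen < PERSISTENCE_S}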
Note that for a typical car, a 2-ft tall obstacle is enormous. Anything taller than about 6 inches is already problematic. Just consider what would happen if you hit a curb at 40 mph; there's the potential of serious breakage of your own car, and the possibility (if only one wheel hits) of being diverted into adjacent or opposing lanes. There's a YouTube channel devoted to a low underpass coupled with a curve in the curb, where vehicles miss the curve and either die of broken axles or get propelled out of their own lanes.
People tend to think that it's relatively easy to create an AI to drive because people can drive almost without thinking about it. But AIs have yet to be fully capable of even just analyzing images alone; after 4 decades of image recognition research, the AIs are barely able to do a tolerable job, and there are images that a child could figure out that an AI can't. The human brain, in addition to parallel processing and fusing all its sensory inputs, has a massive associative memory capability that can pull up context and history that aid us in detecting and classifying threats.
Likewise, consider how easy it is for us to walk, and how hard it is for robots to do the same.
For an AI car, there are currently only four sensor types that can do much of anything: sonar, camera, lidar, and radar. Much hope has been placed on radar, but the reality is that the wavelength of radar is so long, even at 95 GHz, that a phased array is possibly the only solution to get sufficient resolution. If we consider the 6-inch curb at the nominal stopping distance for 40 mph, we need a 46-in wide antenna at 95 GHz to fully resolve that curb at 76 ft. Any aperture smaller than that will result in blobs and unresolved objects. Sonar is limited to around 30 ft. Cameras could do 3D, but require multiple cameras and massive image processing, and are limited by lighting. Lidar brings its own light, and can easily achieve the 6.6 mrad resolution needed to detect a 6-in curb at 76 ft.
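The aperture figure is just diffraction-limit arithmetic; the sketch below reproduces the 46-in number if you assume the Rayleigh criterion and require two resolution cells across the curb (my reading of "fully resolve"):

CODE --> Python

C = 3.0e8                      # m/s
f = 95e9                       # radar frequency, Hz
lam = C / f                    # ~3.2 mm wavelength

curb = 6 * 0.0254              # 6-in curb, m
rng = 76 * 0.3048              # 76 ft, m

theta_target = curb / rng      # ~6.6 mrad subtended by the curb
# Assumed resolution criterion: two Rayleigh-limited cells across the target.
theta_cell = theta_target / 2.0
D = 1.22 * lam / theta_cell    # required antenna aperture

print(f"Curb subtends {theta_target * 1e3:.1f} mrad at {rng:.0f} m")
print(f"Required 95 GHz aperture: {D:.2f} m ({D / 0.0254:.0f} in)")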
Every single detection is a potential threat at some point in time or path, so a system architect for an AI needs to consider the same things that were considered for the helicopter OASYS, namely context, history, persistence, etc., of all detected objects. Google doesn't even do that on Google Maps, which gets easily confused in freeway interchanges, suddenly deciding that you are on a different part of the interchange, precisely because it throws away all history about you the instant it gets a new GPS position. I suspect that most car AIs have similar behavior and a similar lack of awareness of where they were and what was around them a second ago.
That comes from poor systems engineering, and "backup sensing" is simply applying a poor patch on a poor design. If you need to do that, the design should be ripped up and restarted.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
I am playing with a spreadsheet here, and I need to check my numbers and my logic. If my vehicle is doing 80 kph (50 mph), my 1/2 G stop takes 50 m (165 ft). My laser can run at 1.5 MHz, I can scan at 10 Hz, and my spot size at 50 m is 270 mm (11 in). A six-inch object will cause some sort of LiDAR return, but it will be a weak one. I think a six-inch curb across the road will have a distinct signature.
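For the curious, the arithmetic behind those numbers looks roughly like this; the 40° x 10° forward frame is my assumption, the rest are the figures above:

CODE --> Python

import math

PULSE_RATE = 1.5e6   # Hz
FRAME_RATE = 10.0    # Hz
SPOT_50M = 0.270     # m, spot size at 50 m

divergence = SPOT_50M / 50.0                # ~5.4 mrad beam divergence
spots_per_frame = PULSE_RATE / FRAME_RATE   # 150,000 spots per frame

# Assume a forward-looking 40 x 10 degree frame on a uniform grid:
fov_h = math.radians(40.0)
fov_v = math.radians(10.0)
spot_pitch = math.sqrt(fov_h * fov_v / spots_per_frame)

child = 0.3 / 50.0   # angle subtended by a 0.3 m child at 50 m
print(f"Divergence: {divergence * 1e3:.1f} mrad")
print(f"Spot pitch: {spot_pitch * 1e3:.2f} mrad")   # ~0.9 mrad
print(f"Spot columns across the child: {child / spot_pitch:.0f}")

With the spots spaced about 0.9 mrad apart and the beam 5.4 mrad wide, adjacent spots overlap heavily, which satisfies the no-gaps requirement from the earlier list.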
--
JHG
RE: Self Driving Uber Fatality - Thread II
https://www.caranddriver.com/features/path-to-auto...
The question was asked of me in the previous thread, "If you're comfortable with these systems having a nonzero rate of failure... exactly what type of failures, then, are not newsworthy?" And the answer to that question depends on what level of automation the vehicle is capable of; as well as what the automaker is MARKETING/ADVERTISING/SUGGESTING what it is capable of.
In both of Tesla's noteworthy accidents, it's not so much a failure of the system (it's only Level 2) as it is the operation of the system. And Tesla bears direct responsibility for that, in my opinion. Note the names of the other Level 2 systems: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Volvo Pilot Assist. See how ASSIST is prominently featured? The exception is Super Cruise, which is noteworthy because it won't change lanes (the driver still has to do that), because it is geofenced (it will only operate on limited-access, HD-mapped roads [no intersections allowed]), and because it has robust, ACTIVE driver attention monitoring. Any two of those three would have prevented the Tesla from driving into the side of the semi or the median wall. So there's the unacceptable engineering failure. There's what's newsworthy about them. And that's more so the case when Tesla is ostensibly encouraging their systems to be abused in this fashion by touting them as self-driving "autopilot" cars. Musk can launch a car into space, but he can't take simple steps to make his systems safer? C'mon, man. Of course, it would be a lot tougher to blow marketing smoke up everyone's ass if he did that.
In this Uber incident, it is not clear what level of autonomy they are trying to accomplish, though it can be assumed that, since having a driver in the car might be viewed as an expense they would like to eliminate, they are trying for Level 5. Or maybe Level 4. I don't think it is unreasonable to expect a Level 4 or 5 vehicle to manage avoiding the things that most attentive, actively-engaged drivers would. So this incident is noteworthy because the SUV failed miserably at something a fully-automated vehicle should be able to manage. This was not something appearing out of nowhere from behind a blind corner, or moving erratically, or otherwise obscured. This is further a newsworthy failure because these vehicles are being operated in the public sphere with no (effective) safeguards in place. As long as these vehicles are being "tested" on public roads, they should be equipped with the same driver attention monitoring systems as Cadillac's Level 2 systems. And since they aren't, I'd have no problems with Uber and/or the convicted armed-robbery-felon backup driver being charged with negligent homicide. Once they are functional, and can deal with bikes crossing the road and kids running behind a mailbox for 0.25 seconds before emerging in front of them, then set them loose.
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
But not all obstacles are across the road, per se. The YouTube channel's curb is actually on the side of the road, and people routinely fail to see it until their wheels hit the protrusion. There may be dips or rises in the road that obscure the potential obstacle until you're well past the safe stopping distance, or it might be much narrower, such as the wheel of a small car, which might be about 6 inches tall and 14 inches across.
And if you saw it once, but not later, did it move or did it get obscured by something else in the mean time?
I think there's been much confusion about "classification," because in my mind, anything taller than 4 inches is already a potential hazard. I wouldn't architect a system that would manage to forget or ignore such detections.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
I agree it's good to be mindful of exactly what we're talking about.
Not at all. I'm convinced, in fact, that for the general public to be vocally accepting of Class 5 vehicles on a large scale, the rate of injury and fatal accidents will have to be MUCH lower than the human-piloted rates at the time Class 5 operation actually becomes a possibility. From the standpoint of convincing the average midwestern dirt farmer that the shiny new robot car is safe, saying "It's just as safe as the median American driving the median car on the median freeway!!!!" won't be anywhere near enough.
That is a PR problem, not an engineering one. I digress.
As far as Uber's ultimate goal for the level of automation attained by their equipment - I don't doubt for a single second that their ultimate goal is Class 5 operation. As you've already identified, this requires on-road testing of vehicles fully equipped for Class 5 operation but operating one or two tiers down while gathering data. I agree that letting these systems operate without any real safeguarding is a recipe for disaster.
I wouldn't be terribly surprised if Class 2 or 3 or 4 systems wind up with government-mandated 'attentiveness monitoring' systems. Tesla's method of just having to have your hand on the wheel isn't very robust.
RE: Self Driving Uber Fatality - Thread II
You can't track a "something" until you've determined it's "something" that needs to be tracked. You can call it classification or something else, but determining there is "something" out there that is of concern is the first step.
To make the load on the system easier, the designers certainly are deciding to ignore certain data once it's been deemed unimportant to the task of getting the car safely down the road. That way, they have enough processing power to process the important data.
RE: Self Driving Uber Fatality - Thread II
I don't see those six levels in an emergency. Car accidents happen within a couple of seconds. There is no time for a safety driver to put down a book, scan out the window and over the instruments, and figure out what is happening. In my scenario above, you are not much more than two seconds away from killing a small child. Either the car is a full-time robot, or the human is full-time in charge and responsible.
--
JHG
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
That's not been proven or even likely in the collision in question.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
This is absolutely what happens. If the system tried to plot trajectories for every object detected in range to the same level of fidelity, there wouldn't be nearly enough processing power to predict everything.
Objects which are close, large, and moving fast are at the top of the list with regard to fidelity of path estimation 'requested' from the processing stage.
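A toy version of that prioritization, with the scoring entirely invented for illustration:

CODE --> Python

# Invented priority score: bigger, closer, faster-closing objects get
# more path-prediction fidelity. Not any vendor's actual algorithm.
def threat_priority(range_m, size_m, closing_mps):
    if closing_mps <= 0:                    # opening range: minimal attention
        return 0.0
    time_to_go = range_m / closing_mps      # crude time-to-collision, s
    return size_m / max(time_to_go, 0.1)

detections = [
    ("parked car",          60.0, 4.5,  0.0),
    ("pedestrian crossing", 40.0, 1.7, 17.0),
    ("overpass pillar",    120.0, 2.0, 17.0),
]
for name, r, s, v in detections:
    print(f"{name:20s} priority {threat_priority(r, s, v):.2f}")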
I am convinced that both the Uber and Tesla failures under primary discussion here were the result of failures of the software to correctly prioritize predictions, not failures of the hardware to detect objects.
This is why we took the deep dive into the meaning of the word 'classification' in the last thread, which I'd rather not revisit here.
RE: Self Driving Uber Fatality - Thread II
That is true not only from the perspective of safe operation, but from a liability standpoint as well. Either the driver is responsible for being continuously aware of potential dangers and reacting to them at a moment's notice, or the car is. The cutoff for me is the point where the car is steering itself, because the driver is no longer engaged in operating the vehicle. Emergency braking, lane departure warnings, etc. are all great driver assistance features, but once the car is doing the driving, it has to be able to do it all as well as a human driver who is capable and attentive. The technology is nowhere close to that now, and I have my doubts about it being there anytime in the near future. I know I may sound like a nut when I say this, but as I said before, an AI that sophisticated poses a greater danger to humanity than a few car wrecks.
RE: Self Driving Uber Fatality - Thread II
If a system can handle a very busy street, then it should have enough processing power that it can pay attention to a single pedestrian on an otherwise deserted street.
You may recall that even the Apollo 11 LM computer in 1969 did an excellent job in prioritizing.
It'd be a design flaw if it was overzealous in ignoring the one and only moving object about to intersect its path, given that it should have had not much else to do.
This wasn't Shibuya Crossing in Tokyo.
RE: Self Driving Uber Fatality - Thread II
I would disagree... I think you can have a balance of the 'best of both worlds'. With driver assist I can see a driver being lulled into a false sense of security; under all conditions the driver must be attentive and in control.
Dik
RE: Self Driving Uber Fatality - Thread II
Speaking of nuts, I think feasible AV technology needs a playing field that resembles Disneyland's Autopia. Successfully tackling, with high probability, the entire real world of possibilities that a free-roaming automobile might encounter, not excluding nefarious traps and ruses that might be staged in the path of an AV, is a pretty big nut to try to crack. Bigger, probably, than is cost-effective with today's and foreseeable technology. And that brings us back to HotRod10's concern.
"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
RE: Self Driving Uber Fatality - Thread II
I agree. My post was an explanation of how these systems work, not an explanation of a failure mode that led to the Uber accident.
I'd agree- and this type of flaw appears to be what all the fuss is about. And rightfully so.
RE: Self Driving Uber Fatality - Thread II
Sure it's not proven, and the cause likely never will be publicly released unless the NTSB forces it out of Uber.
But if the system had decided there was something in the data which indicated an object existed that was important and should be tracked, then it would have been tracking said object. If said object was being tracked as travelling into its path, that would have led to the system doing something to try and avoid said object. So my expectation is that the wrong classification of the data is exactly what happened. My guess would be that the data representing the woman was included as part of the data representing the background vegetation. That type of data would be filtered out as data that can be ignored, since data from background vegetation isn't a concern when the task is to drive a car down a street. Or, as I put it way back, the AI probably decided she was a bush, and bushes can be safely ignored.
I'd think it's way less likely for the system to have processed the data and determined there was an object in front of the car yet done nothing about said object being in front of the car.
Now, after saying data from background vegetation is likely being ignored, that does create an interesting question of what would happen when the car approached a big tree limb or something similar in its path.
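To make the failure mode concrete, the logic being described reduces to something like this (purely illustrative; nobody outside Uber knows the actual pipeline or labels):

CODE --> Python

# Purely illustrative: a label-based ignore filter turns one repeated
# misclassification into "no obstacle" on every frame.
IGNORABLE = {"background_vegetation", "roadside_sign"}   # invented labels

def classify(cluster):
    # Stand-in for the real classifier. Suppose it lumps a pedestrian
    # walking a bicycle in with the vegetation behind her:
    return "background_vegetation"

def needs_tracking(cluster):
    return classify(cluster) not in IGNORABLE

frames = [f"lidar frame {i}" for i in range(30)]
tracked = [f for f in frames if needs_tracking(f)]
print(f"Objects handed to the tracker: {len(tracked)}")   # 0, every frame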
RE: Self Driving Uber Fatality - Thread II
I am getting a much smaller spot size than I thought. This is good. The attached code is Octave, but it is supposed to execute in MathCAD.
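An equivalent sketch in Python, assuming a 100 km/h design speed and a 1 MHz pulse rate; those assumptions reproduce the time of flight and factor quoted in the replies below:

CODE --> Python

# My reconstruction of the calculation (the original was Octave).
# Assumes a 100 km/h design speed and a 1 MHz laser pulse rate.
C = 3.0e8   # m/s
G = 9.81

v = 100 / 3.6                  # design speed, m/s
rng = v**2 / (2 * 0.5 * G)     # 1/2 G stopping distance: ~78.7 m

tof = 2 * rng / C              # round-trip time of flight: ~524 ns
period = 1.0 / 1.0e6           # 1 MHz pulse rate -> 1.0 us period
print(f"Time of flight: {tof:.3e} s")
print(f"Laser period: {period:.2e} s")
print(f"Factor: {period / tof:.2f}")   # > 1, so no pulse overlap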
--
JHG
RE: Self Driving Uber Fatality - Thread II
> No sane person would assume that a bush that big won't cause damage to the car. Some bushes are planted to hide even more solid things, so again, potential for serious damage to the car
> Bushes don't suddenly appear in the middle of a traffic lane, and if they did, they might have safety curbs that could cause damage to the car.
> This bush moved across two traffic lanes in the time the lidar should have been able to detect the target. That potentially implies a bush on a cart, which is a risk for serious damage to the car
> Because of the latency from lidar frame to lidar frame, each frame results in a new detection, so the AI didn't ignore "a" bush, it ignored multiple bushes popping out of the pavement, which implies trapdoors that might cause serious damage to the car.
Conclusion: the AI wanted to collide with the bush because it wanted to hurt itself.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
As to your points: If the car had decided the data indicated there was an object in front of it, then it would have taken avoidance measures. Or, working backwards, the car didn't take any avoidance measures, so the most likely conclusion is that the sensor data processing algorithm didn't return a result that indicated an object of concern was in front of the car. As you've pointed out already, the sensor will always return data from the objects in front of the car (the road itself being the minimum thing), so the processing does have to determine what is of concern, not just that there is something there.
There is no reason to expect the processing decision to classify the data as background vegetation would change over time. If the mistake was made once, it can just as easily be made multiple times.
And to go back to the simplistic, if bushes are only programmed as a thing that can be ignored then they will be ignored. It doesn't matter where the bush is, it is a bush and the AI was told to ignore bushes so it does as it was told and ignores bushes.
And the car is not a person; I'd think that fact was well established by now. It can't do anything that it hasn't been programmed to do, or that it hasn't been "taught" to do, if you prefer to say that. It doesn't know that bushes could cause serious damage, or that bushes could be on wagons, or that bushes could have curbs around them, if it hasn't been programmed to know those things.
Conclusion: You're just posting silly crap to be argumentative.
RE: Self Driving Uber Fatality - Thread II
Who are you directing your message to?
Dik
RE: Self Driving Uber Fatality - Thread II
re: Time of flight: 524e-9 s
Laser Period: 1.00e-6 s
Factor: 1.91 Okay.
"Factor" needs to be based on the maximum range, which is 120 m, so TOF is 801 ns. Nevertheless, there's no interference with the next pulse because the PRF of each laser is only about 20 kHz (1.3 MHz/64). Although the top and bottom blocks fire one pulse each simultaneous, there's no interference because the spots are about half the vertical FOV apart.
The HDL-64E's vertical FOV and IFOV are fixed by the optomechanical design. The horizontal angular resolution is dictated by the frame rate, ranging from 1.55 mrad at a 5-Hz frame rate to 6.19 mrad at 20 Hz. But the horizontal IFOV of the receivers needs to accommodate the largest angular resolution, so it's at least 6.2 mrad, which only really affects the noise floor of the receiver. The receiver IFOV might be even larger to accommodate the walking of the laser return due to the scan speed of the laser head, although it's possible that the lasers are aligned to the leading edge of the receiver IFOV, so that at max range, the return ends up on the trailing edge of the receiver IFOV.
Note that while the horizontal FOV is seemingly programmable, that does not change the firing rate; all that happens is that the unit does not transmit data from outside of the programmed FOV. Note also that the HDL-64E does not process the data at all; calibrations and point-cloud processing are done by a user-supplied external processor.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
I don't think for a second that the processor would ever "ignore" any sizable target, simply because it never knows enough about what might be hiding in the shadow of the detected object. To wit, we often see sports teams crashing through sheets of paper, but that's because they have explicit and verified knowledge that there's nothing on the other side of the paper. So, even if the lidar and the object processor detected a large sheet of paper in its path, it cannot ignore it, because it can't see behind the paper to the boulder that might be behind it.
If you want a more plausible explanation to believe in, it would be more likely that the processor got confused and placed all the detected objects in the wrong places in its world model. Or, the processor managed to erroneously program the HDL-64E's FOV to not include the front of the vehicle, so that it never received any detections from the lidar at all.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
I assume that at some deeper level for an L4 car, an integrated 2D world is assembled from the various sensors, and that this integrated picture is what the AV driver actually uses to decide whether to brake, steer, or accelerate. The other way is to build behaviours up, so there's a braking module, and a lane following/changing module, and a speed control module, all working off different sensors as needed, which may be more 'evolutionary' in approach; perhaps that is how Tesla's system works.
Cheers
Greg Locock
New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?
RE: Self Driving Uber Fatality - Thread II
What the system should be doing and what it is doing are two different things.
Ignoring the data that appears as vegetation on the sides of the road and continuing to look for other objects on the side of the road can both be accomplished as the data is processed.
If the processor had mistakenly shifted its "view" of the world, then the car would not have been properly driving down the center of the lane. Wrongly programming the LIDAR unit doesn't sound more probable than a data processing error.
RE: Self Driving Uber Fatality - Thread II
There's an important distinction to be made here - the hardware will always return something from the scene in front of the car, but based on how the hardware is calibrated and how the processing is set up, it is possible for a specific object directly in front of the car to go undetected. As an example - Greg's point about the reflectance threshold of the LIDAR array being set too high, resulting in a low-reflectance object not being passed on to the processor regardless of distance or closing speed.
Depending on exactly how all 3 systems are calibrated, I think it is possible for a 'coffin corner' to exist where certain ambient conditions combined with an object or pedestrian bearing certain characteristics could cause all three systems to fail to correctly determine if the detected object was necessary to track.
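As a concrete (and entirely hypothetical) version of that reflectance-threshold failure:

CODE --> Python

# Entirely hypothetical numbers: a reflectance gate ahead of the
# processor. A low-reflectance return is silently dropped before the
# tracker ever sees it, regardless of range or closing speed.
MIN_REFLECTANCE = 0.10   # assumed calibration cutoff

def returns_passed_to_processor(returns):
    """Each return is (range_m, reflectance in 0..1)."""
    return [r for r in returns if r[1] >= MIN_REFLECTANCE]

frame = [
    (35.0, 0.45),   # road sign
    (22.0, 0.07),   # dark clothing: below the gate, never reported
    (60.0, 0.30),   # parked car
]
print(returns_passed_to_processor(frame))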
I don't think that's what happened here but what I think is, obviously, just conjecture.
RE: Self Driving Uber Fatality - Thread II
There are certain objects which will create data that the system will have difficulty resolving into an object at all- someone (IR I think) already mentioned that a bicycle is potentially invisible to LIDAR depending on distance.
If the processing system receives a frame from the sensor array that contains an area of ambiguous data, it is highly likely for there to be a routine which effectively crops out this portion of the frame (by truncating the output of the frequency domain conversion).
This is a necessary function, so that the system doesn't immediately fail if (when), for example, a rain drop hits the optics and causes a blurry spot on the images being processed.
I'm wondering if the front half of the bike which was visible to the sensor array- front wheel/tire, front half of the frame, was detected by the system but not resolvable, leading to the system responding by truncating this object out of each frame as it moved. This would, in turn, not cause the system to 'incorrectly process' the detection data for the bicycle- it would cause the system to not even try.
This failure mode, if realistic, is still the result of system design error by humans. The truncating operation is necessary, but the conditions which cause this truncation, and the width of the window around the ambiguous data to be truncated, are determined by the programmer.
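A sketch of what that truncation step might look like; the threshold and guard width are the delicate, programmer-chosen knobs, and both values here are invented:

CODE --> Python

import numpy as np

CONF_THRESHOLD = 0.4   # assumed: below this, a cell is "ambiguous"
GUARD_CELLS = 2        # assumed: how far the crop extends past it

def dilate(mask):
    """Grow a boolean mask by one cell in each direction."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def truncation_mask(confidence):
    """Boolean mask of cells to discard from this frame."""
    mask = confidence < CONF_THRESHOLD
    for _ in range(GUARD_CELLS):
        mask = dilate(mask)
    return mask

conf = np.random.rand(8, 8)   # stand-in for per-cell return confidence
print(truncation_mask(conf).sum(), "of 64 cells discarded")

The wider the guard window, the more likely a marginal-but-real object adjacent to the ambiguity gets cropped along with it.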
RE: Self Driving Uber Fatality - Thread II
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
Unless the bicycle created some ambiguity which in effect 'confused' the processor, and caused the woman to be truncated out because of a wide error clearing window.
Not highly probable, admittedly- but not impossible.
RE: Self Driving Uber Fatality - Thread II
I am not looking at a specific LiDAR. I am describing a generic one appropriate for a robot car. In my model, at 100 kph, the robot must collect enough info on an object 80 m away in case it must come to a full halt without being rear-ended.
A 360° LiDAR will have poor resolution at critical decision distances when travelling at speed. You need an additional forward LiDAR with a limited field of view and high resolution. The scanner FOV is a function of how you design your scanner.
--
JHG
RE: Self Driving Uber Fatality - Thread II
I was the one who pointed out that the LiDAR would see through the bicycle. It would get a relatively weak return from the bicycle, and then it would see the background behind the bicycle. This is not necessarily a bad thing. A bicycle potentially has a unique signature that tells the AI how fast it is capable of moving.
--
JHG
RE: Self Driving Uber Fatality - Thread II
Dik
RE: Self Driving Uber Fatality - Thread II
In the US, we drive WAY faster. This morning I got buzzed by a motorcycle doing at least 100 MPH.
When we worked on OASYS, we only had a forward-looking lidar with a 50-deg x 25-deg FOV. Turning proved quite scary. If all you ever did was straight-line travel, side looking wouldn't come up as an issue. But if you decide to slow down and turn, the newly detected objects AND the previously detected objects become significant, particularly with regard to low objects, since they tend to fall into the blind zone of the lidar, which is about 9 ft in radius.
In order to make full use of the pulse rate of the lasers in a reduced FOV, you'd have to give up on the 360 scan, which tends to drive the design to a mirror scanner. However, mirror scanning systems tend to be noticeably larger in volume, and you'd need a minimum of two lidars, one on each side of the car.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
Your LiDAR will impose a maximum speed on your car. If the LiDAR cannot see and identify the hazard, the car must be moving slowly enough that it can react when the hazard becomes visible. Fog and curved roads are both an issue.
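Putting numbers on that speed limit, using the 1/2 G stop from earlier (the reduced-visibility ranges are arbitrary examples):

CODE --> Python

import math

G = 9.81
DECEL = 0.5 * G

def max_speed_kmh(detection_range_m):
    """Fastest speed from which a 1/2 G stop fits inside the range."""
    return math.sqrt(2 * DECEL * detection_range_m) * 3.6

for rng in (120.0, 80.0, 40.0):   # clear day down to heavy fog
    print(f"see {rng:5.0f} m -> max {max_speed_kmh(rng):5.1f} km/h")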
I figure that a 40° system pointed straight forward, plus a 360° system, should work. You would need a way to identify things right next to you. I am focused on three-year-old kids in front of you. If you are pulling out onto a highway, the things that will hit you are large enough to be detected by a lower resolution LiDAR.
--
JHG
RE: Self Driving Uber Fatality - Thread II
Yes, which is why the maximum detection range needs to be at least 120 m. OASYS had a 400-m detection range for a nominal 120-kt speed, but the Cobra was way more maneuverable than a car in some situations. Fog and curves are separate issues. Lidar will suck in heavy fog, regardless; the maximum range is severely reduced, although an APD-based design, or a longer wavelength, could possibly get you better performance. Otherwise, radar is king in fog. This is why a single sensor modality is suicidal.
Curves are what they are; if there are obstructions, you'll have to slow down. Otherwise, biasing the FOV into the curve is the most plausible approach, just like steerable headlights. That was the plan for OASYS as well.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
Dan - Owner
http://www.Hi-TecDesigns.com
RE: Self Driving Uber Fatality - Thread II
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
On the contrary- under certain conditions one must ignore portions of the frame. This is mandatory for any system which uses optics of any kind- whether they be LIDAR, visual range, infrared, whatever.
It is impossible to design a system in which the optics will never, under any circumstances, be fouled. It is completely impossible.
If the optics are fouled, the system must be capable of handling that condition, period.
Trimming frames is absolutely mandatory. Determining exactly what conditions should be allowed to trigger the frame trimming operation is very, very delicate, as frame trimming COULD result in this type of incident under very specific conditions.
As another disclaimer- I do not know with any certainty that this type of image processing error was the root cause of the Uber accident. It is just one of many possibilities.
In any class of autonomous vehicle operating, there will always be some set of conditions in which the vehicle will cease to operate.
Below Class 5, this means detecting anomalies which prevent reliable system operation, and returning control to the driver as quickly as possible.
If an image processing error due to ambiguous data was the root cause of one of these incidents, then both design teams are on the hook for not building the architecture to return control to the driver under these conditions.
Once again, this is purely speculative.
RE: Self Driving Uber Fatality - Thread II
Certainly, not getting any returns is an indication of sensor misbehavior. Another thing that might be lacking is intensity returns. The Velodyne only outputs range; target intensity could provide useful information for processing targets.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
"Additionally, state-of-the-art signal processing and waveform analysis are employed to provide high accuracy, extended distance sensing and intensity data."
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
https://boygeniusreport.files.wordpress.com/2017/0...
https://electrek.co/2017/03/02/tesla-autopilot-cra...
The solution to these Level 2 autonomy problems seems very straightforward, as Cadillac has demonstrated. It's as simple as active driver monitoring and geofences that only allow system activation in well-mapped, controlled-access environments.
Seems pretty stupid for a smart car.
RE: Self Driving Uber Fatality - Thread II
We inevitably hear more about failures than successes. I don't have any stats, but this video shows a few of the times Teslas have managed to avoid accidents: Tesla Autopilot Predicts Crash Compilation 2
I'm a bit dubious that self-driving cars will ever reach Level 5 without changes to road infrastructure to support them. Unfortunately, self-driving cars can't be trained in a simulator; they have to be trained in the real world. At the moment, public opinion and the authorities appear to favor their development. As the limitations of the software become more apparent, the tide of opinion may turn against experimental software being tested in the field when it has fatal consequences.
RE: Self Driving Uber Fatality - Thread II
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
So it may be that self-driving cars will have different sets of bad driving habits, depending on the set of programmers.
The trick may be to know the expected range of bad driving and bad pedestrian habits. That might require a study that people will say is a waste of money, and whose results some will not believe (people don't like to hear about the bad things they do).
The results might show that bad habits will be punished in the future as deaths or increased accidents.
So will the insurance rating of AI skills be based on the results of long-term driving tests? Or on the car type and brand?
RE: Self Driving Uber Fatality - Thread II
The Tesla software supports the case where there is a lane marking visible on only one side; the lack of a second lane marking isn't necessarily an error.
In the case of lane markings diverging, how does the software know _which_ lane marking is the "right" one to follow? Either one could be incorrect. Tesla explained that collision avoidance is not signaled to the driver if the driver can safely steer away from it, but that does not appear to be true based on the YouTube video.
I believe in the Uber case, the safety drivers do two jobs: safety, and performance monitoring of the software, noting discrepancies for later analysis. Uber used to have two persons per car; they decided that the safety driver could fill both roles. Obviously, that is cheaper.
There are several problems with partial automation, where the user still has to monitor the computer, such as automation dependence. Inattention is a prime cause of road accidents, and now we are asking people to pay attention to the computer's decisions (or non-decisions), which has already been shown to be a human weakness. That doesn't add up.
In particular, the ability of the human to understand in all cases what the computer is doing is very poor. I also follow aviation accidents, and while automation probably saves a lot of accidents (there is no hard data on that), there are many accidents that occur due to the human/computer interface becoming decoupled, when if the pilots had simply turned off the autopilot and flown the plane manually, no accident would have occurred.
It's also worth noting that passenger jet autopilots do not do obstacle avoidance; there are TAWS and TCAS systems which do terrain and traffic detection, but they only provide advisory alerts to the pilots.
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
In many other cases, the warning systems engaged, but the pilot was so engrossed with an irrelevant input that the planes crashed. There was one where the copilot and navigator both recognized the stall warning, but were apparently afraid to prompt the pilot, and the plane crashed.
Tesla clearly screwed the pooch by naming it the way they did. Some idiot, probably Musk himself, approved that name. That was a disaster waiting to happen.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
In my child-by-the-side-of-the-road scenario above, the car will decelerate at a half G to a safe halt in 5.66 s. If the car does not react and is headed for the kid, impact will be in half that time, 2.83 s. If you are sitting in the driver's seat, paying attention, it will take you something like 3/4 s to react. Luckily, this provides time for a successful panic stop. If you are not paying attention, it will take you several seconds just to figure out what is happening. If you are quick, you will know what it was you hit, like that lady in the Uber car. Otherwise, you will be wondering what that loud thump was. This is why texting and driving is dangerous.
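The arithmetic, for anyone who wants to vary the speed or the reaction time:

CODE --> Python

# Timing from the paragraph above: 100 km/h, 1/2 G braking.
G = 9.81
v = 100 / 3.6                    # 27.8 m/s

t_stop = v / (0.5 * G)           # 5.66 s to a halt at 1/2 G
d_stop = v**2 / (2 * 0.5 * G)    # ~78.7 m covered while braking
t_impact = d_stop / v            # 2.83 s to the child if nobody brakes
REACTION = 0.75                  # s, an attentive young driver

print(f"Braking to a stop: {t_stop:.2f} s over {d_stop:.1f} m")
print(f"Unbraked impact in: {t_impact:.2f} s")
print(f"Margin left after a {REACTION} s reaction: {t_impact - REACTION:.2f} s")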
Aircraft accidents do not happen as fast as road accidents. A safety driver is of no use unless they are holding the controls and paying 100% attention.
--
JHG
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
If we just had most things delivered and we worked at home, there would be fewer accidents.
RE: Self Driving Uber Fatality - Thread II
That depends on how smart the robots are.
--
JHG
RE: Self Driving Uber Fatality - Thread II
The problem with sloppy work is that the supply FAR EXCEEDS the demand
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
RE: Self Driving Uber Fatality - Thread II
Old rule of thumb, "Never attribute to malice that which can be adequately explained by stupidity."
RE: Self Driving Uber Fatality - Thread II
Mike Halloran
Pembroke Pines, FL, USA
RE: Self Driving Uber Fatality - Thread II
Had to google that...which led me down yet another rabbit hole:
http://www.web-cars.com/humor/benchracing.html
RE: Self Driving Uber Fatality - Thread II
https://electrek.co/2018/04/12/tesla-autopilot-cra...
RE: Self Driving Uber Fatality - Thread II
Dik
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
Dik
RE: Self Driving Uber Fatality - Thread II
Perhaps it is just me looking at this incorrectly, but I have no need to know anything about the Tesla Autopilot, and I certainly do not stay on top of every news release from Mr. Musk about his cars and their features. However, I would like to have safe cars on the road, and it is becoming rather obvious that there are issues with the Autopilot.
I think it would be appropriate to require that the CEO be on board his vehicle when it is subjected to independent testing on a testing site specifically designed to test the self-driving features of that company's auto. What is happening today is the running of tests of poorly designed software and hardware on public roads with ordinary people as guinea pigs. If nothing else, it is unethical.
I think there is an issue with the thought process in the companies that develop these systems. They are software companies who appear to have the philosophy that if something is not right, you issue a patch on Tuesday and then all is good. All I can say is that that is not how things are done in the chemical engineering world.
RE: Self Driving Uber Fatality - Thread II
I agree that all snake oil salesmen should test their products on themselves.
RE: Self Driving Uber Fatality - Thread II
Musk and Tesla point out that their Autopilot is NOT for autonomous driving. In fact, the driver needs to be FULLY engaged and PAYING ATTENTION. In every case so far, the driver was NOT paying attention, and there are those that go out of their way to disable/defeat the car's mechanisms for forcing the driver to stay engaged. The fatality in Mountain View was on a pathological part of the freeway involved in roadwork; normal drivers were having problems with the lane divergence, and the Tesla did warn the driver, but it's likely that the few seconds of warning were insufficient for the driver to re-engage and figure out what they needed to do, or they were so distracted that they didn't even pay attention to the warnings.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
Expecting a driver who isn't driving to "be FULLY engaged and PAYING ATTENTION" is far too much to ask. One of the main causes of wrecks now is drivers who are supposed to be driving who aren't paying attention.
RE: Self Driving Uber Fatality - Thread II
Cheers
Greg Locock
New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?
RE: Self Driving Uber Fatality - Thread II
CEOs riding the tests? Wasn't that how the Verrückt testing was done (see other thread)?
A.
RE: Self Driving Uber Fatality - Thread II
The CEO is not to design the test. Others will determine the testing to be done to fully challenge the design.
The issue I have with Tesla is how they say that the driver has to be fully engaged at all times. It is simply impossible for anyone to figure out, within a second or so, what the Autopilot couldn't figure out, and then take corrective action.
RE: Self Driving Uber Fatality - Thread II
If you are fully engaged, then you will see the car in front of you swerving to avoid hitting the barrier, which your Tesla detected but doesn't seem to know what to do about. The issue is that when things are going well, the temptation to look at other things is too high.
I think "Autopilot" was an an engineering disaster, in that it should have never been named that, because crap like these accidents will continue to happen for a long time to come. It took the image recognition community 40 years to get to the point of detecting and identifying objects, and even then the performance is nowhere close to perfect. Likewise, people have worked on collision avoidance for nearly 60 years, and it's still not ready for the real world. We keep trying and we keep adding sensors, and some things work really well, but others, not so much.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
Igor Sikorsky is likely the last CEO to have tested his own stuff.
Mike Halloran
Pembroke Pines, FL, USA
RE: Self Driving Uber Fatality - Thread II
(here, p10)
RE: Self Driving Uber Fatality - Thread II
Dik
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
Dik
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
An example: A couple of months ago, I was about to turn right at an intersection onto a two-lane road. An 18-wheeler from the opposite direction was turning left, and I decided to give him both lanes for his turn (I know that is unusual in Houston traffic). Then, while he was turning, a large metal sawhorse-looking structure fell off the trailer and landed in the intersection. I managed to turn right next to it, catch up with the driver, and get his attention, and I pulled in front of him and stopped. I walked up to the cab and explained what happened. He walked back to the intersection to look at it, and I drove on home.
Surely stuff can fall off trailers driven by computers too, but how do you alert them to such problems?
I suspect that there are drivers who will outperform a computer when situations are "non-standard". There are also a lot of bad drivers out there. We don't do a good enough job of dealing with bad drivers.
RE: Self Driving Uber Fatality - Thread II
There's more than just self-driving. There is being aware of what is going on around you.
As a friend's dad once said, 'that flat tire isn't going to change itself'.
RE: Self Driving Uber Fatality - Thread II
It certainly would be good if the sensors can detect such things, but that's almost a separate layer of requirements.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
Sadly, many defects that affect how cars operate don't make noise prior to failure - like on my car, where the engine dropped dead on the highway two lanes from the shoulder in rush hour traffic; I managed to coast it over, dodging other traffic. Since it was covered under warranty, I never got any notice about what caused it, but I'm pretty sure it's the same thing that killed it again out of warranty a year later - corrosion under the ignition module, where a two-letter maker thought using dissimilar metals was a good idea.
RE: Self Driving Uber Fatality - Thread II
His diagnosis was instantaneous. The horn button was stuck on.
How much would you pay to provide a supervisory listening device with AI on every car?
Uncle Sam doesn't care what you think; if those idiots in DC decide you need it, it will be mandatory, no matter the cost to you.
... As has already happened.
There's no putting the sh!t back in that horse.
Mike Halloran
Pembroke Pines, FL, USA
RE: Self Driving Uber Fatality - Thread II
Cheers
Greg Locock
New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
I think that's precisely what the AI in the OP did; "No obstacles here!"
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
The Tesla (this thread keeps wandering) AI kept demanding that the driver take control, which the driver failed to do.
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
In my scenario above, if you and the robot do not hit the brakes, you will reach the child in 2.8 seconds. If you are not paying attention and the robot signals that you must take control, I figure that you have one second to figure out what is happening and decide what to do. The reaction time of young humans, 100% driving the car and paying attention, is 0.75 seconds. Try watching 2.8 seconds pass by on your watch.
The problem may not be a child on the side of the road, it may be a loss of traction for some reason. I wonder how easily a robot can spot potential black ice.
The mechanic is not relevant. I cannot believe you will be allowed to fix an AI car.
--
JHG
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
drawoh - the debugging capability can be added to any car. It doesn't require the ability to steer. As I originally mentioned, this is a terrible problem for deaf people as it is. The same could be extended to household appliances, which the deaf also interact with. I don't see a particular difficulty in using spectral analysis in conjunction with OBD-II data to link a mechanical output (sound) to a mechanical input (particular engine rotation position, exhaust valve opening, wheel rpm, fan rpm, et al.). I expect there's insufficient upside for makers to do this, as most consumers will ignore anything that doesn't prevent the car from moving.
Also, an autonomous car AI would not detect a collision and then alert the driver to do anything. In the case of the Tesla, the driver had exceeded the hands-off time and was being alerted to pay attention or at least touch the steering wheel. Scant attention is paid to Caltrans' failure to replace a one-time-use crushable barrier after it was previously used, failure to add rumble strips to alert non-AI drivers, failure to add no-cross heavy stripe marking, and failure to maintain lane striping; all considerations due to the number of times that particular feature had been hit by non-AI drivers.
RE: Self Driving Uber Fatality - Thread II
This comes back to my point that there are not six levels of autonomous control of automobiles. There are two.
- The car has no controls other than a keyboard and/or microphone. The robot is in control.
- The human is in control, operating the steering wheel and controlling the accelerator and brake. If there is an accident, the human is responsible. The robot is a back seat driver, able to ring (buzz?) alarms and jiggle controls.
If there is any emergency in which the human must take control, they must be looking out at the road, gripping the steering wheel, and have access to the accelerator and brakes. The reaction time is no more than a second. The robot knows how to find the passenger's destination. The robot is able to dodge other vehicles, bicycles, pedestrians, children, pets, tree branches, Bambi, and Bullwinkle. Unless the robot operates in a restricted environment, there will always be uncontrollable, unpredictable agents on the road that must not be hit. If the robot causes an accident, the manufacturer is responsible, which is why I think robot cars will be a service rather than a consumer possession.
--
JHG
RE: Self Driving Uber Fatality - Thread II
"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
RE: Self Driving Uber Fatality - Thread II
I agree as well. It seems like everyone is very eager about the first type of vehicle (fully autonomous), but the "jiggle controls" part is very valuable - the computer is much better than a human at bringing the car back into normal driving mode after a slip.
A basic "drive-by-wire" system should not look like something that can drive car on its own... but it should provide active help in avoiding sudden obstacles (particularly make a decision about going around an obstacle with regards to cars that may be approaching at higher speeds from behind where driver does not have full attention). Plus it could compensate for much erroneous input that would otherwise cause a slip in bad conditions.
RE: Self Driving Uber Fatality - Thread II
Cheers
Greg Locock
New here? Try reading these, they might help FAQ731-376: Eng-Tips.com Forum Policies http://eng-tips.com/market.cfm?
RE: Self Driving Uber Fatality - Thread II
I agree. If you are responsible for your robot not causing an accident, keeping it well within its limits is one of your strategies. Shutting down on a country road way out in the middle of nowhere sounds like a disaster. Choosing to not operate at all in adverse conditions like ice storms is a better idea.
--
JHG
RE: Self Driving Uber Fatality - Thread II
https://www.engineering.com/Hardware/ArticleID/167...
Roopinder Tara
Director of Content
ENGINEERING.com
RE: Self Driving Uber Fatality - Thread II
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
should be (eye-safe, infrared)
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm
RE: Self Driving Uber Fatality - Thread II
RE: Self Driving Uber Fatality - Thread II
Dan - Owner
http://www.Hi-TecDesigns.com
RE: Self Driving Uber Fatality - Thread II
If the car has LiDAR and a robot, you can save the 100-250 lb of the driver.
The only problem I see with LiDAR in car racing is that its limited range limits your speed. Otherwise, you have a controlled environment, lacking bicycles, children, and Bambi and Bullwinkle. If all the cars are robots, there are no ethical issues about hitting stuff. Battlebots could be a whole lot of fun, at least until the robot overlords take over and accuse us of murder.
The big problem facing a robot car is traffic. Traffic is unpredictable. Robot traffic will provide all sorts of non-specular reflection of LiDAR wavelengths that your LiDAR will have to filter out.
--
JHG
RE: Self Driving Uber Fatality - Thread II
"Robot traffic will provide all sorts of non-specular reflection of LiDAR wavelengths that your LiDAR will have to filter out"
Solutions already exist for things like that; that's how rolling code garage door openers came to be.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
"Solutions already exist for things like that; that's how rolling code garage door openers came to be."
Not really the same thing. LIDAR uses millions of laser beam pulses bouncing back from objects. What happens when there are several LIDAR systems operating in the same area, all bouncing laser beams off of their surroundings? How does each LIDAR sort out its laser beam reflections from the others?
RE: Self Driving Uber Fatality - Thread II
One of the bigger problems is for microwave systems that have much larger viewing angles.
I suppose the ultimate answer will be what Ethernet did for communications over co-ax. Each Ethernet adapter would listen for any ongoing traffic and would start transmitting only when the channel was clear. If two adapters started at the same time, the conflict would be detected and both would stop for a random interval before retrying. This might be milliseconds of delay, so just a few inches of vehicle travel.
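Ported to a LiDAR emitter, the listen-then-back-off idea might look something like this rough Python sketch (the slot time and retry cap are placeholders, not values from any real system):

    import random

    def emit_when_clear(channel_busy, slot_s=1e-3, max_exp=6):
        # channel_busy() -> True while another emitter is detected.
        # Returns the total delay (s) spent backing off before a clean start.
        delay, attempt = 0.0, 0
        while channel_busy():
            attempt = min(attempt + 1, max_exp)
            slots = random.randint(1, 2 ** attempt)   # exponential backoff
            delay += slots * slot_s
        return delay

    # e.g. a channel that happens to be busy 30% of the time:
    print(emit_when_clear(lambda: random.random() < 0.3))

At highway speed (~28 m/s), a few milliseconds of backoff is only a few inches of travel, which is the point made above.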
RE: Self Driving Uber Fatality - Thread II
Moreover, as I stated, there are a variety of existing solutions to co-channel interference, such as the pseudo-random number codes used on GPS, or the Tri-Service code used for laser designation systems such as the Apache Target Acquisition and Designation System (TADS). As you might imagine, if you fired your Hellfire missile at a target you designated, you wouldn't want it to get distracted by someone else's laser designation for their own Hellfire. Which is why there are hundreds of different codes that Allied forces can use without interfering with each other. Additionally, the Hellfire also uses a rolling code comparable to garage door opener rolling codes, called Pulse Interval Modulation (PIM) codes; there are billions of possible PIM codes. The receiver basically uses a temporal matched filter to look for the code that it's programmed for, and ignores pulses from other emitters with different codes.
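For a feel of how a temporal matched filter discriminates codes, here is a toy Python sketch. The interval values and bin counts are invented for illustration; they are not actual Tri-Service or PIM parameters:

    import numpy as np

    def pulse_train(intervals, n_bins):
        # Place a pulse after each successive interval (units: time bins).
        train = np.zeros(n_bins)
        t = 0
        for dt in intervals:
            t += dt
            if t < n_bins:
                train[t] = 1.0
        return train

    N = 400
    my_code    = [37, 52, 41, 60]    # hypothetical interval code
    other_code = [44, 39, 58, 47]    # a different emitter's code

    received = pulse_train(my_code, N) + pulse_train(other_code, N)
    template = pulse_train(my_code, N)

    # Matched filter: correlate against the programmed code and threshold.
    own   = np.correlate(received, template, mode="full").max()
    cross = np.correlate(pulse_train(other_code, N), template, mode="full").max()
    print(own, cross)   # own ~4 (all four pulses line up); cross ~1

The receiver only declares a detection when the correlation peak reaches the full code length; stray pulses from another emitter's code never line up at more than one position.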
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
"After this training period, all of the subjects were asked to make quick decisions in several tasks designed by the researchers. In the tasks, the participants had to look at a screen, analyze what was going on, and answer a simple question about the action in as little time as possible (i.e. whether a clump of erratically moving dots was migrating right or left across the screen on average).
In order to make sure the effect wasn’t limited to just visual perception, the participants were also asked to complete an analogous task that was purely auditory.
The action game players were up to 25 percent faster at coming to a conclusion and answered just as many questions correctly as their strategy game playing peers.
“It’s not the case that the action game players are trigger-happy and less accurate: They are just as accurate and also faster,” says Daphne Bavelier. “Action game players make more correct decisions per unit time. If you are a surgeon or you are in the middle of a battlefield, that can make all the difference.”
The neural simulations shed light on why action gamers have augmented decision making capabilities.
People make decisions based on probabilities that they are constantly calculating and refining in their heads, Bavelier explains. The process is called probabilistic inference."
Dik
RE: Self Driving Uber Fatality - Thread II
One of the things that brains seem to do is to create massive 3D and 4D simulations. I have, from time to time, needed to get a glass I've left on a table but don't care to turn on a light. Even though I've left the room, remembered I wanted the glass, and come back to pitch darkness, I can bring my hand to within 1/4 inch of being centered on the glass. And I can do that because I have a simulation of the entire path I took, of the table and the glass, and the memory of where I last put the glass down.
Getting a data structure that is suitable for that would go a long way to making robotic driving a reality.
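A minimal sketch of the kind of structure that might serve, in Python: each detection is time-tagged, dead-reckoned forward at constant velocity, and dropped once it goes stale. The two-second persistence window is an arbitrary placeholder:

    class Track:
        # One time-tagged object in the vehicle's world model.
        def __init__(self, pos, vel, seen_at):
            self.pos, self.vel, self.seen_at = pos, vel, seen_at

        def predicted_pos(self, t):
            # Constant-velocity dead reckoning from the last sighting, so an
            # occluded object (the child behind the mailbox) still occupies
            # predicted space for a while.
            dt = t - self.seen_at
            return tuple(p + v * dt for p, v in zip(self.pos, self.vel))

    class WorldModel:
        def __init__(self, persistence_s=2.0):
            self.tracks = {}
            self.persistence_s = persistence_s

        def update(self, oid, pos, vel, t):
            self.tracks[oid] = Track(pos, vel, t)

        def active(self, t):
            # Keep objects that have been missing for a few frames;
            # expire the ones whose time tag has gone stale.
            return {oid: trk.predicted_pos(t)
                    for oid, trk in self.tracks.items()
                    if t - trk.seen_at < self.persistence_s}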
What's amazing is that this process has to be enabled in some pretty small animals. Small birds can build up a 4D simulation that allows them to fly at fair speeds through a forest, avoiding trees and vines based on stereoscopic vision. Even wasps and bees have some location memory - for a couple of years I had cicada killers drilling holes in the back yard, and if I moved a leaf a few inches from the burrow they would have difficulty finding their latest hole, scanning back and forth to reacquire the position. They did find it, because leaves move, so they are prepared for that. (Cicada killers are cool, and completely harmless, but look like they could kill.)
RE: Self Driving Uber Fatality - Thread II
If your garage door opener takes a full second to identify your controller, you won't notice. If I am using a laser to paint a target for my Hellfire missile, is there any reason to pulse the laser other than to provide a unique signature?
In my analysis above, the LiDAR is scanning at 10 Hz, and the laser is firing at 1.5 MHz. Whatever you do to ID your signal must work in less than a microsecond. A laser that pulses in the gigahertz range gives you the possibility of a signature, but the back-scatter is not that simple. For example, if the laser hits a bicycle wheel, you will get scatter from the wheel, and from whatever is behind the wheel. Scatter from an angled surface will be blurred a bit. This is okay for determining range; it could be a problem for sorting out gigahertz data. You are looking at 150,000 spots every scan. Given that number, there is a high probability you will see some laser spots from the car next to you.
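Running those numbers through (a back-of-envelope check using the figures in this post):

    scan_rate_hz  = 10          # full scans per second
    pulse_rate_hz = 1.5e6       # laser firing rate

    spots_per_scan  = pulse_rate_hz / scan_rate_hz   # 150,000 spots
    pulse_period_ns = 1e9 / pulse_rate_hz            # ~667 ns per return

    print(f"{spots_per_scan:,.0f} spots/scan, {pulse_period_ns:.0f} ns to tag each return")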
--
JHG
RE: Self Driving Uber Fatality - Thread II
Bear in mind that this is not like RF co-channel interference; for anything to happen at all, two LiDARs must hit exactly the same spot within 2 µs of each other. Assuming the 4E-5 duty cycle as a random probability, the probability of both occurring simultaneously is on the order of 1.6E-9, which means that two systems would have to be scanning the same area for hours to have any realistic probability of interfering.
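The arithmetic appears to be the joint probability of two independent, low-duty-cycle emitters illuminating the same spot in the same coincidence window:

    duty_cycle = 4e-5             # fraction of time one lidar paints a given spot
    p_both     = duty_cycle ** 2  # independent emitters: probabilities multiply
    print(p_both)                 # 1.6e-09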
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
I think that might be my new sig.
Cheers
Greg Locock
RE: Self Driving Uber Fatality - Thread II
Not disputing the claim, can you tell me how you arrived at this value? Personally curious... one of my failings since childhood...
Dik
RE: Self Driving Uber Fatality - Thread II
Much of what I see as traffic problems is due entirely to plain bad driving and bad habits of humans; traffic would move much better if all cars behaved consistently. People drive at inconsistent speeds, varying both from driver to driver and from second to second.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
If this is your metric, autonomous vehicles are safer by an order of magnitude.
The average American gets into an accident every ~175000 miles.
Uber has accumulated more than 2 million miles of autonomous testing.
There have not been 11 accidents that we're aware of.
In my opinion, whether or not an accident results in a fatality is an extremely high variance event... meaning that with regard to fatalities specifically, we won't know if autonomous vehicles are 'safer' for a long time, until billions/trillions of miles have been accumulated. And that is a LONG way off.
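For reference, the comparison being made is just this (using the figures quoted in this post):

    human_miles_per_accident = 175_000
    uber_autonomous_miles    = 2_000_000

    expected = uber_autonomous_miles / human_miles_per_accident
    print(f"{expected:.0f} accidents expected of a typical human driver")  # ~11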
RE: Self Driving Uber Fatality - Thread II
Apparently, so do the computers, unless you consider them to be consistently poor. Currently, there has been a shockingly high number of wrecks among the ranks of computer drivers, considering how few total hours of driving they have.
"...they randomly decide to step on their brakes, they suddenly swerve across 4 lanes of heavy traffic"
Better than not hitting the brakes when they should and running over pedestrians and swerving into concrete barricades.
"Moreover, humans just get bored and inattentive, as was the case with the Uber backup driver."
Of course they do, especially when they're not actually driving. The "Uber driver" wasn't a driver, he was a passenger.
RE: Self Driving Uber Fatality - Thread II
Look carefully at the numbers I posted above. If a vehicle is travelling at 100 kph, a robot can make a fairly aggressive stop in under 80 m. This is well inside the range of the LiDAR, based on my calculations. The object has been scanned several times, and its velocity and acceleration vectors are known. I don't think the LiDAR is functional a mile away, but it does not need to be. The LiDAR's range will limit the speed of the car.
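The numbers behind that claim (taking "fairly aggressive" as the 0.5 g figure used earlier in the thread, and ignoring sensing and actuation latency):

    v = 100 / 3.6            # 100 km/h in m/s (~27.8 m/s)
    a = 0.5 * 9.81           # 0.5 g deceleration, m/s^2

    d = v ** 2 / (2 * a)     # braking distance
    print(f"{d:.0f} m")      # ~79 m, inside the ~80 m quoted above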
If my robot detects an object on the side of the road that is moving out in front of the vehicle, the vehicle must take evasive action. The object can be a child, or a dog, or a garbage can being blown around by the wind. That may be a pile of leaves out there, but your robot does not know what is inside it. Until such time as the robot can reliably identify an undesirable terrorist, there is no excuse for not stopping.
--
JHG
RE: Self Driving Uber Fatality - Thread II
It might be a collision where the automated vehicle is not legally at fault ... but in the real world, the driver (automated or otherwise) who slams on the brakes unexpectedly (to other drivers) in a travel lane of a motorway is the one who actually causes the ensuing collision, even if the one behind (which might be a fully loaded 18 wheeler which cannot stop on a dime) is the one legally at fault.
RE: Self Driving Uber Fatality - Thread II
There is a difference, because hitting a person and driving off is a crime, whereas hitting a bird and driving off is not.
RE: Self Driving Uber Fatality - Thread II
Assuming that those 2 million miles are real-world miles, the fatality rate for the autonomous Uber cars is still about 50 times the national average for human drivers.
"The average American gets into an accident every ~175000 miles."
I'd like to see where you got that number, because according to the NHTSA, in the US in 2012 there were 30,800 fatal crashes (33,561 fatalities), 1.634 million injury crashes (2.362 million injuries), and just under 4 million "property damage only" crashes. Total vehicle miles traveled (VMT): just under 3 trillion Link. That's 1 fatality for every 88.5 million VMT, 1 injured person for every 1.25 million VMT, and 1 property-damage-only crash for every 750,000 VMT. Added together, that's roughly 1 crash for every 525,000 VMT, not 175,000, so unless the crash rate has nearly tripled in the last 6 years, you're way off.
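Those rates, recomputed from the quoted 2012 totals:

    vmt      = 2.97e12      # 2012 US vehicle miles traveled, ~3 trillion
    deaths   = 33_561
    injuries = 2.362e6
    crashes  = 30_800 + 1.634e6 + 4.0e6   # fatal + injury + PDO crashes

    print(f"miles per fatality:    {vmt / deaths:,.0f}")    # ~88.5 million
    print(f"miles per injury:      {vmt / injuries:,.0f}")  # ~1.26 million
    print(f"miles per crash (any): {vmt / crashes:,.0f}")   # ~524,000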
RE: Self Driving Uber Fatality - Thread II
Great! So the car will take evasive action into a vehicle in the adjacent lane because a garbage can blew into the street? That'll be popular. Btw, what happens when the child is standing still on the curb until half a second before your robot drives by and then runs into the street? Does your robot anticipate this completely illogical action as most humans would?
RE: Self Driving Uber Fatality - Thread II
Hitting the brakes is evasive action.
How about approaching a building a meter away from your road, at 100 kph, with someone possibly standing behind it? At some point, the robot cannot see around a corner, and it must slow down just in case.
--
JHG
RE: Self Driving Uber Fatality - Thread II
Then perhaps, instead of taking the human's ability to assess, anticipate, and extrapolate out of the picture, and trying to replace it with a far less advanced computer "brain", we should put our efforts towards supplementing and augmenting the human driver's ability to see at night or in fog.
The problem with the push for self-driving cars is that they are so far from being a solution. OTOH, if the technology were applied to solving the real problems - the limitations of human vision and inattentiveness - then real progress could be made. Some has already been implemented in a few cars - headlights that turn to follow the road, thermal imaging, blind spot sensors, lane departure warnings, etc. If the efforts were aimed towards helping the driver, rather than replacing him, the roads would become safer. Replacing human drivers with machines that are not up to the task makes the roads more dangerous. Maybe someday autonomous vehicles will be ready to be on the street, but until they are, they shouldn't be let loose on the unsuspecting public.
RE: Self Driving Uber Fatality - Thread II
Ok, my number was wrong. If we use yours, Uber's cars are still 'safer' by a factor of 4 or 5 instead of 10.
One pedestrian fatality does not a trend make.
If this woman had been injured instead of killed, suddenly the numbers for Uber's development program would be somewhere near the national average (1.25 million vehicle miles per injury) and all the hair pulling would be drastically subdued.
The difference between her being an injury and her being a fatality is a matter of a couple of feet one way or the other- by definition, a high variance event. There isn't anywhere near enough data yet to determine either way whether these vehicles are an improvement, and there won't be for a long, long time.
RE: Self Driving Uber Fatality - Thread II
Link
RE: Self Driving Uber Fatality - Thread II
Only if you compare the Uber cars' fatality rate to the overall crash rate, the bulk of which are property damage only (PDO) crashes. We only heard about this one because it resulted in a fatality. How many PDO crashes have Uber cars had? If you can find it, you're better than I. It seems they're being pretty tight-lipped about that. I wonder why?
RE: Self Driving Uber Fatality - Thread II
This is an engineering forum, isn't it?
RE: Self Driving Uber Fatality - Thread II
Specifically, it's the "Engineering Failures & Disasters" forum, where we discuss why and how engineering failures and disasters occurred, so as to reach conclusions about how to prevent future failures and further unnecessary loss of human life. At least that's my understanding of the purpose of this particular forum. To that end, I am pointing out that in this case the failure was putting computers behind the wheel of vehicles on public roadways when they were not up to the task. I could be wrong in my assertion that they will never be, but what is clear is that they are not ready yet.
If the object is to improve traffic safety, at this point replacing the human driver does not accomplish that goal. Maybe it will someday, but not now. Of course, the real aim is profit - getting a machine to do something instead of paying a person to do it. Don't get me wrong, I'm all for automating menial tasks, but driving is not a menial task. It can be a boring task, and one prone to distraction, but it is not simple, and the consequences of failing to do it capably can be, and have been, fatal. Experiments that put the public at risk to further technological advancement are irresponsible. Such experiments done for the sake of profit are criminal, just like opening a water slide that injures and decapitates people.
RE: Self Driving Uber Fatality - Thread II
This is the entire point - you're making that conclusion, which may or may not be correct, based on incomplete information.
I don't disagree with you on a few of your points but promoting a conclusion which is very possibly incorrect, because it is based on a data set which is massively incomplete, is not something I believe that we, as members of this forum, should promote or even tolerate.
We're all engineers, and we all have the same itch to look at a problem and say "I KNOW WHAT THIS MEANS". I get it. But in this case, we don't. We can't yet. Any conclusion based on what we know today isn't a conclusion, it's a knee-jerk reaction.
I don't disagree with that sentiment taken at face value...
But in this case, it is not possible for this new technology to be released without extensive testing in and among the general public. The general public represents a considerable portion of the exact hazards which this system is being designed to avoid - and those hazards cannot be simulated.
Outlawing testing of autonomous vehicles on public roads is exactly the same as outlawing their development entirely. They cannot be rigorously tested any other way, and they must be rigorously tested if they are ever to be brought to market.
RE: Self Driving Uber Fatality - Thread II
So in the absence of adequate information to make an assessment, we should just let anyone who cares to, send an AV out onto the roadways and hope they don't kill too many people? Sorry, but I disagree. Public sentiment will ultimately decide where the line is drawn, but that is my view on the subject.
As far as whether me expressing my view should be "tolerated", may I remind you that public safety is one of the core principles of good engineering, so discussion of whether having AVs on the road at their current level of sophistication puts public safety at risk, is very much relevant to the topic and consistent with the aims of the forum.
RE: Self Driving Uber Fatality - Thread II
That is, most certainly, not what I said.
Surely you understand that banning public testing of autonomous vehicles (which is the idea you seem to be supporting. If that's not correct, then correct me) is synonymous with banning them permanently.
There is no other way to test them for public release than to test them in public. The things the system has to interact with are too varied and too nuanced to be accurately reproduced in a laboratory, or even in a closed-road environment, to a level where the cars will be ready for unrestricted use on release day 1.
I couldn't agree more that safety, in the context of whatever particular things we are working on, is absolutely job 1 for all of us.
Making decisions based on incomplete data is rarely a core value of any safety scheme I've ever heard of.
Your opinion is your opinion, and you have every right to have one. You seem to be of the mind that autonomous vehicles capable of real independent operation under all or even most conditions are an impossibility; my own opinion is pretty close to that, actually. I'm not advocating that your posts be censored or anything of the sort... I'm just encouraging you to realize that the data available to you (and me, and everyone else) at this point warrants nothing more than an opinion. Conclusions are a long way off.
But I do think, that in the long term, it is a possibility that this technology can result in safer transportation for all of us. In the short term, that means public testing is a necessity. There's just no way around it.
RE: Self Driving Uber Fatality - Thread II
I don't agree. Certainly it makes it more difficult and expensive to test them in a simulated environment. However, it is not impossible, just not as economical. Public safety demands that AVs, like any other potentially lethal product, be tested as thoroughly as possible BEFORE being released into the public arena.
"Making decisions based on incomplete data is rarely a core value of any safety scheme I've ever heard of."
There is an old saying we still use in our office - "When in doubt, make it stout". In the absence of good evidence of the adequacy of a design, good engineering always takes the conservative approach. As engineers, "making decisions based on incomplete data" is par for the course. If bedrock could be 40 feet down or 50 feet down, we're drilling 60 feet just to be sure the bridge doesn't fall down.
"I'm just encouraging you to realize that the data available to you (and me, and everyone else) at this point warrants nothing more than an opinion. Conclusions are a long way off."
I stated my opinion; my conclusion; my guess of where the current path will lead, and the dangers I foresee in following it. I was not attempting to make a definitive statement, or present it as fact. Obviously, no one knows the future, save God Himself.
Rushing an AV out onto the streets, as Uber did, I believe is irresponsible, and it sure didn't help their reputation or public sentiment towards AVs in general. My political views lean towards the libertarian side, but there is a place for regulation and oversight, and I believe this is one area that needs some fairly stringent ones.
"...public testing is a necessity. There's just no way around it."
AFTER extensive and successful testing to the greatest extent possible in a controlled environment is complete, and then only with clearly marked vehicles. Hey, if human student drivers have to be in marked cars, first time AV drivers should too.
RE: Self Driving Uber Fatality - Thread II
We're pretty loose on definitions... as long as it has technical merit and is interesting, we're pretty flexible.
Dik
RE: Self Driving Uber Fatality - Thread II
This is where we are now, basically. These systems all work great on paper.
How exactly would marking the vehicles have prevented this pedestrian accident, the Tesla concrete barrier accident, the Tesla white trailer incident, or any of the other incidents which have been highly publicized?
I am relatively confident in saying they would not have.
I'll pose the same question I posed earlier in this thread, again:
This knee jerk reaction that many people are having is due to a single pedestrian fatality.
What quantity of fatalities represent an acceptable number to you? If your answer is going to be representative of what will happen in the real world, it cannot be zero. So, how many?
RE: Self Driving Uber Fatality - Thread II
I'm not talking about on paper, or even lab tests giving the computer "brain" simulated inputs. I'm talking about full-scale, outdoor testing of the whole system in the actual car, driving through a mock-up of a city street with numerous moving objects.
RE: Self Driving Uber Fatality - Thread II
Dude. That's been done.
Waymo has been working on test tracks, in the exact mockup situation you're describing, for a decade. That work continues.
It isn't enough. The scenario you don't like (and that I don't really like either), cars not ready for full release being on public roads, is not possible to avoid.
RE: Self Driving Uber Fatality - Thread II
The number of road miles required to test such a system cannot realistically be dumped onto one car; it requires hundreds of cars, and there are not hundreds of test facilities in the US. Moreover, scenarios such as the Tesla collisions with the semi and the median barrier are things that would likely have passed under test conditions, which is what allowed them to think the system was ready in the first place.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
The AV crash profile will always tilt toward the fatalities, and the more strongly it tilts that way, the better they're doing: the minor, easily-avoided crashes are eliminated first, so fatalities become a growing percentage of a shrinking total. If, say, AVs eliminated 90% of property-damage crashes but only half of the fatal freak events, total crashes would plummet while the fatal share rose. When they get to 100% fatalities, is that better or worse?
At least that's how it seems to me.
RE: Self Driving Uber Fatality - Thread II
Obviously not, but is that a reason to put the public at risk to continue the experiment?
"...cars not ready for full release being on public roads, is not possible to avoid."
Yes it is; you just don't like what it means for this experiment you love so much.
"So, if the AV avoids all the PDO crashes that a human driver would have but has a similar level of crashes at all worse levels is that an improvement or not?"
First off, that's a big assumption. What makes you think that the AVs can avoid PDO crashes any more successfully than the fatal ones? In any case, it would still be worse. A human driver, once proved fatally incompetent (by say, running down a pedestrian), does not get another chance, regardless of how many miles he or she might have under their belt.
RE: Self Driving Uber Fatality - Thread II
Not always true. I was in traffic school with someone who had previously had a fatal accident, and was driving again when he was cited for some other traffic violation.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
If it was truly an accident, and not due to his incompetence, that is understandable. However, if he displayed the kind of poor judgement we've seen from some AVs, his license should be permanently revoked.
RE: Self Driving Uber Fatality - Thread II
If you think I 'love' this 'experiment' then you need to work on reading comprehension.
I am not convinced that the true autonomous network of cars that people like Elon Musk envision is possible at all, let alone possible with current technology. I think the jury is still out, but if I had to guess my opinion shades towards true level 5 capability being, at best, a long way off. Decades off.
The difference between us is not our level of love or hate for autonomous vehicles... it is whether or not we think the technology justifies the testing required to see if it is actually viable.
RE: Self Driving Uber Fatality - Thread II
Agreed. That is the sticking point. I don't think it is justified at the current state of the technology, given the risks to the public, especially when most are ignorant of those risks. The problem for those of your view is that even if you win the intellectual argument, you may lose the PR battle. It doesn't take many fatalities to turn public sentiment, especially when the risk of that happening has been downplayed to the point where most generally believe it's not supposed to happen.
Most people, I suspect, were like me, not realizing that AVs were even on the road. After the Tesla incident with the truck, the company stated in strong terms that the "autopilot" was not an autonomous driving system, but a driver assist feature. I had no idea other AVs were even on the roads, and hiding the fact that they are does not help the public perception, especially when the first we hear of it is a fatal crash.
RE: Self Driving Uber Fatality - Thread II
Well, obviously that's the bigger question, but it doesn't yet matter much when the cars are driving into things that should be easily avoided....
That's wishful thinking....
RE: Self Driving Uber Fatality - Thread II
From movement of an object toward the path of the vehicle to application of the brakes, is the AV reaction time better or worse than an attentive human driver?
RE: Self Driving Uber Fatality - Thread II
It's not as though she did not know better, as would be the case with a young child.
That the AV did not see her is a problem, as this could have been something else it missed.
If the problem with humans really is being bored, then would we expect a higher ratio of rural accidents than in cities? And would this not be reversed for AVs because of too much input? Or is the input problem also a factor for humans?
RE: Self Driving Uber Fatality - Thread II
As for this pedestrian, while they were certainly not supposed to be crossing the street at that location, in California the pedestrian has the right of way, even if they are crossing illegally. The fact that the pedestrian was at fault for jaywalking does not alleviate the driver's responsibility to yield to the pedestrian - much less excuse hitting the pedestrian.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
Legal "fault" with Uber's collision may lie with the pedestrian who jaywalked (edit: simul-post suggests that California may give the pedestrian the right of way regardless of jaywalking). But drivers, including self-driving vehicles, also carry some responsibility for avoiding collisions that are legally the other driver's fault. I've already mentioned in this thread, the inadvisability of slamming on the brakes in a full-speed travel lane of a motorway with a fully loaded dump truck filling your rear view mirror. Doesn't matter human driver or automated one. Doesn't matter that getting hit from the rear is legally the responsibility of the driver behind ... the occupants of your vehicle are still flat.
Humans vary widely in their ability to prevent situations that result in a collision. Some people have good situational awareness and take intentional actions to prevent potentially dangerous situations from developing before they happen ... others focus narrowly on the path directly in front of them and are oblivious to any other surroundings or on what's in their mirrors. Good drivers will spot the other vehicle in a lane about to merge with their own and will pro-actively speed up, slow down, or move over so that the other driver can merge without conflict. Others ... not so much. Self-driving? Who knows!
Humans get bored when there is sensory deprivation or when they don't have anything to do. In the case of road traffic, humans certainly can get bored, and it doesn't take babysitting an automated vehicle to do it. Take a long, straight road with few features along it and impose a 55 mph speed limit and strictly enforce it, and you'll find out that arbitrary reductions in speed limits don't necessarily correlate with reducing the number of collisions. Reason ... people get bored, and nod off, or start doing other things (playing on the phone, etc). An automated vehicle won't get bored in those circumstances ... but you can be pretty much assured that the babysitting "driver" will be sleeping.
Humans can most certainly get overwhelmed, too. They don't multitask very well. They may be so fixated on making sure they don't hit the pedestrian who looks like they might be stepping off the curb that they miss the fact that the traffic signal ahead of them has turned red.
RE: Self Driving Uber Fatality - Thread II
Just to touch on this subject, the picture below shows the main road adjacent to my community. The speed limit is 45 (55-60 is the norm). All of the houses (+/- 500 units) are to the SE. There are none on the NW side of the road.
The red circle is the bus stop; stops are spaced evenly along the road every 500' or so. The road is 140' curb to curb. The nearest crosswalks are 1/2 mile in either direction.
Don't even get me started on the turning movements and different ways right-of-way is perceived through those intersections. That causes more peril than the traffic moving along the main road.
RE: Self Driving Uber Fatality - Thread II
That is exactly the point, though.
Autonomous vehicles will ALWAYS drive into things that *should* be avoided.
Reaching some point where autonomous vehicles have zero accidents of any kind in trillions of miles driven per year is a statistical impossibility. It will not happen, ever.
The question you have to answer, if you're in favor of automated vehicles, is: what accident rate are you willing to accept?
If your answer is precisely zero, we will never get there. It just isn't possible.
If your answer is "a lower statistical rate than humans in the aggregate", the technology today is such that it may be possible right now, but we don't really have enough data to know yet.
If your answer is "a lower statistical rate than some subset of humans that have the least accidents or the least damaging accidents" then the technology today might still be there already, or might not be, or might never be. We, again, don't have enough data to know, and having enough data to know with a high level of statistical certainty is many, many years in the future based on current rates of miles accumulated per year.
RE: Self Driving Uber Fatality - Thread II
You ignored the "easily" part of the previous statement. The pedestrian death should NEVER have happened, i.e., there should be ZERO probability that that specific set of conditions with that pedestrian results in any collision, short of an outright failure. Now, it's certainly possible that the Uber incident was the result of a serious failure, but we probably won't know for a while.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
Now, if it were just people waiting in line, you'd almost never see people just cutting; so I see this as a symptom of a depersonalization of other drivers, because being rude and obnoxious to a mechanical box doesn't rank anywhere near the level of being rude to an actual, visible, human. So that's something that AVs could eliminate. The AVs would also make the transition to the toll road cleaner and reduce the congestion, since AVs wouldn't slow down going up the grade on that part of the 91 and the traffic would flow better.
Better flow and fewer exhibitions of psychopathic behavior --> less road rage.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
I understand what you're saying; I just think that "easily" has no effect on the end result. The ideal case is zero accidents. As engineers we know that's not possible; even confining accidents to hardware-failure-only incidents will not drive the total to zero.
There should be ZERO probability that a plane could crash and kill its passengers, yet this happens with some regularity. This doesn't cause us, as a whole, to stop buying plane tickets.
The 'outlaw all testing of AVs on public roads' reaction to this accident is like the reaction to the early crashes of the de Havilland Comet leading us to abandon airplanes.
RE: Self Driving Uber Fatality - Thread II
OK, to put it bluntly, the 3 incidents causing death - driving under a truck trailer, into the end of a road barrier, and into a person - are NOT acceptable to me. They illustrate the level of improvement still required.
As a general rule, if the capabilities of the car mean it is capable of avoiding something, then that something should be avoided. If the brakes are capable of stopping it before the impact, then it should have stopped. If the tires and suspension are capable of an avoidance maneuver, then it should have maneuvered to avoid the accident.
If it is put into a situation where it can't brake or maneuver to avoid, then it should try to minimize. But it should have some intelligence to recognize a situation that could become impossible to survive intact, instead of blissfully driving on until the situation presents itself. In other words, it should take steps like slowing down, or not driving beside another car in the next lane, when caution is called for.
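That capability envelope can be stated almost directly as code. A simplified Python sketch - the 0.7 friction figure and the single-lane swerve geometry are illustrative only; a real controller budgets braking and steering together on the friction circle:

    def avoidance(v, gap, mu=0.7, lane_m=3.5, g=9.81):
        # v: speed (m/s); gap: distance to the obstacle (m).
        brake_dist = v ** 2 / (2 * mu * g)            # distance needed to stop
        t_swerve = (2 * lane_m / (mu * g)) ** 0.5     # time to move one lane over
        swerve_dist = v * t_swerve                    # distance covered meanwhile

        if gap >= brake_dist:
            return "brake: the car is capable of stopping, so it must"
        if gap >= swerve_dist:
            return "steer: a lane change still clears the obstacle, if clear"
        return "minimize: should have slowed before entering this situation"

    print(avoidance(v=27.8, gap=85))   # 100 km/h with a long gap -> brake
    print(avoidance(v=27.8, gap=40))   # shorter gap -> swerve if lane is clear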
The developers of these systems have been touting how they will avoid making the stupid mistakes a human does. So, they'd better quit making those mistakes if they are to be acceptable.
RE: Self Driving Uber Fatality - Thread II
Airline crashes killing the passengers happen with regular frequency? Not in the US. The last US-registered, scheduled passenger airline to crash was Colgan Air Flight 3407, which crashed on Feb. 12, 2009, killing 50.
That is due, in large part if not entirely, to extensive regulation. The exact opposite of what AVs being tested on public roads are subjected to.
RE: Self Driving Uber Fatality - Thread II
Regulations about aircraft were written in response to accidents. Nobody anticipated any of this stuff. I would say that unless the robots are shown to be more dangerous than an average driver, or perhaps a 15th percentile driver, they should be allowed to continue. One of the problems with American and Canadian roads is that new communities are built around cars. If you take someone's driving license away, they cannot travel or work or do anything else. The robots are an opportunity to take really bad drivers off the road.
--
JHG
RE: Self Driving Uber Fatality - Thread II
These types of accidents will never stop happening.
They won't. Period. It's impossible.
3 accidents isn't the point. 3 accidents against the number of successful detection/avoidance events (likely, at this point, to number in the hundreds of thousands at least across all companies testing AV tech) is the actual metric that matters.
We don't know the value of that metric.
RE: Self Driving Uber Fatality - Thread II
Robot cars will always have to deal with unpredictable (i.e. non-robot) elements along the right of way. My guess is that the very last people who will be pried out of their cars will be those psychopaths you mention. I drive differently on the highway when I notice someone four feet behind my bumper. I anticipate crazy lane changes. The robots will have to cope with this too, and the serious psychopaths will probably learn to game the robots' behaviour.
Is there a safe way to program a robot to operate in Asshole mode?
--
JHG
RE: Self Driving Uber Fatality - Thread II
Are you claiming there is/was no proactive regulation of aircraft/airlines? That's quite a stretcher.
I think there is a place for limited, controlled testing of these vehicles. But both Uber and Tesla have shown themselves to be negligent in little (Tesla) to no (Uber) basic monitoring of driver attention in what are only intended to be supplemental assistance level automation systems.
And to touch on your last point, to make a generalization, the sorts of people who are having their licenses taken away are not the sort who can afford to run out and buy the latest and greatest robotic car.
RE: Self Driving Uber Fatality - Thread II
I didn't say airline.
Per the FAA, in 2017 there were 209 incidents and 347 fatalities.
The statistics aren't the point.
The point is, if someone's only acceptable criterion for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time, that person has a wholly unrealistic point of view with little connection to reality.
RE: Self Driving Uber Fatality - Thread II
This is inaccurate. Uber's intent is level 5 automation of all vehicle operations.
This is also inaccurate, and pretty insulting too.
There's a lot of lawyer/doctor types out there with a couple DUIs, no driver's license as a result, and enough money in the bank to buy a fully automated Ferrari if they released one tomorrow.
RE: Self Driving Uber Fatality - Thread II
This, my friend, is the textbook argument of a straw man. Is anyone in this thread making that claim? If not, why are you arguing against it?
Their intention is irrelevant as it pertains to what they are actually operating on public roads. Of what use is a backup driver if there isn't even a basic system in place to ensure they are paying attention? What they are operating is Level 2 at best. They are negligent for not having simple safeguards in place.
Come ride the bus around the suburbs with me. What do you think the relative percentage of passengers is who are lawyers with a couple of DUIs and can no longer drive?
I can tell you're not engaged in rational discourse anymore.
RE: Self Driving Uber Fatality - Thread II
For example, it was typical on commuter twin turboprops to keep the far engine running while passengers were loading - until a little girl lost hold of a stuffed animal and it blew through the couple-foot gap between the fuselage and the pavement. The little girl ducked under to retrieve it before anyone could stop her, and now, AFAIK, it's no longer allowed. The NTSB report indicated the change in procedure was at the airline level, not the FAA level, but there may be other guidance. The report indicates it wasn't a fatality, so maybe that's why there's no rule.
RE: Self Driving Uber Fatality - Thread II
If you want to nitpick, you're not helping yourself. Per the data set for 2016 (latest year available from the NTSB), in that year there were 39 accidents which resulted in 27 fatalities involving aircraft operating under 14 CFR 135. Approximately 3 crashes and 2 fatalities a month is pretty regular. Those people bought tickets.
You're missing the entire point. Being condescending isn't helping your argument, either.
If you think rich people are better drivers, I guess you're welcome to go on thinking that. It's a pretty weird thing to say but it isn't really germane to this conversation.
This, my friend, is not a straw man- it's actually what's called a reductive argument, directed at positions taken by other people in this thread. Google it.
RE: Self Driving Uber Fatality - Thread II
I agree that accidents lead to regulation. It would be irresponsible for that not to be the case.
But that argument has no bearing on the fact that there is proactive regulation as well intended to prevent accidents from occurring in the first place.
Regardless: if accidents are to be the driver of regulation, where is the requirement that all AVs undergoing testing on public roads have active systems for aggressively monitoring backup driver attention (see the Cadillac "Super Cruise" system for how this could be implemented)? That's not a high bar in these vehicles that are already extensively outfitted with sensors and computers.
RE: Self Driving Uber Fatality - Thread II
Before I google "redactive argument", please quote me the people in this thread who have taken the position that "for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from from now until the end of time"
Maybe I missed it, friend.
RE: Self Driving Uber Fatality - Thread II
A reductive argument takes a position that rests on a fallacy - one which may or may not appear logical - and follows it to its logical conclusion in order to highlight the underlying fallacy.
So... no one is stating that explicitly, but it's the logical conclusion of a position taken, based on the fallacy that zero accidents is an attainable goal.
RE: Self Driving Uber Fatality - Thread II
Ok. Then support that with quotes again. Who has taken the position that "zero accidents" is the attainable goal?
You appear to be the only person to be bandying "zero accidents" about; and then arguing against it. That's the straw man.
RE: Self Driving Uber Fatality - Thread II
Zero accidents is the reductive portion of the argument.
Maybe just google it and come back after some light reading.
RE: Self Driving Uber Fatality - Thread II
Igor Sikorsky built an airliner in Russia just prior to WWI. One fine day about a century ago, someone offered to transport someone else from "here" to "there" for some sort of fee. The pilot almost certainly was licensed. The aircraft almost certainly was fabric covered and had one engine. The rules came in when these things started crashing.
If people are allowed to own robot cars and are not allowed to drive, the psychopath drivers will be replaced by people who want to re-program the robots. There will be accidents. As noted above, I think robot cars will be a service, not a possession.
--
JHG
RE: Self Driving Uber Fatality - Thread II
The notion of zero failure is ludicrous, as we, as a society, have "acceptable" risks for everything we do, including our cars, planes, buses, etc. The people that died on a bus on the way to Las Vegas took what they thought was an acceptable risk. Obviously, after the fact, the survivors and family have a different perspective. Nevertheless, probably all of them would get on a similar bus for a similar trip in the future.
Anything humans touch or build automatically incurs a certain level of risk of failure, and in some cases, such as the Colombian bridge disaster, the risk was both tangible and realized, and two engineering analyses point to a massive design failure. Toyota had a failure of their automobile ECUs that resulted in accelerations that couldn't be turned off or stopped. The electronics industry, as a whole, gave up complete testability, even at the basic "stuck at" logic levels, because there were so many hidden nodes that the test times required to access them all would result in years of testing.
Software testing is worse, in some ways, because there's not yet been a systematic way of testing, even at the module level. Intent and specification often cannot be rigorously verified.
What we do need to do is to determine what the acceptable level of risk is and move on. Certainly, those who are actually working on AV software need to study each and every one of the AV accidents to determine how to prevent them from happening again. That's been the model in the airline industry for decades, going back to the DC-10 engine failures that were traced to a less than desirable method of installing engines that American Airlines once used.
While there have not been many crashes in commercial aviation, there still have been deaths, most recently on a 737, where a fan blade broke loose, tore through the surrounding armor, bounced about 4 times along the wing and fuselage, and took out a window, resulting in the death of a passenger. That incident, which should have been an unthinkable possibility, is now a realized risk, and there's going to be a bunch of engineers trying to quantify the likelihood of that happening again. Nevertheless, 737s are still flying around at the moment, albeit subject to more detailed inspections for indications of similar and imminent fatigue failures. We've accepted this sort of thing as an acceptable risk, even with the human element in the entire maintenance and inspection process.
TTFN (ta ta for now)
RE: Self Driving Uber Fatality - Thread II
I googled "redactive argument, by the way. Nothing comes up
RE: Self Driving Uber Fatality - Thread II
Surrender? You might be taking this a little personally.
RE: Self Driving Uber Fatality - Thread II
Exactly. Thank you for stating the position I was attempting to make, much better than I did.
RE: Self Driving Uber Fatality - Thread II
"We don't know the value of that metric."
No, we don't. There has been a conspicuous lack of transparency when it comes to how AVs have performed in real-life situations. In the case of Uber's experiment, we do know that the human "backup driver" (read "passenger") had to override the computer every mile on average to correct a critical incident. If you were riding with a human driver and you had to take the wheel every mile, would you ride with that person again? Uber's system in particular is obviously not ready to be on the streets yet.
RE: Self Driving Uber Fatality - Thread II
We're back where we started.
Uber can't improve the system without it being on the streets. It's a chicken/egg problem. Or a catch-22. Or however you want to phrase it.
RE: Self Driving Uber Fatality - Thread II
Well then, they should abandon the project and leave the AV development to those companies who are willing to go to the effort and expense of thorough testing before introducing a potentially lethal machine into the public arena.
RE: Self Driving Uber Fatality - Thread II
Other companies have different controls in place, but all of them are going to have accidents.
Still a catch-22.
RE: Self Driving Uber Fatality - Thread II
And add to that the sheer number of cars on the road at certain times.
A possible solution is for companies to open or start at different times - say 7:15 instead of 7:00, or 7:45 instead of 8:00.
But self-driving cars won't fix the number of cars on the road; they are likely to increase it.
Self-driving cars also won't fix some road rage; they may make it worse, as some people will choose to drive themselves so they can travel faster (they are always late). In fact, self-driving cars are slower than cars driven by many other drivers.
For self-driving cars to be safer, it may take a redesign of many things, including locations of bus stops, allowed truck colors, jumbled lane markings, etc.
And if truth be told, the cost of mass transit is a major issue in the number of cars on the road, as are dirty conditions, rude people (lack of respect), and the number of people all trying to get someplace at the same time.