dcasto:
I am more familiar with the Bhopal accident, so my comments will relate to that.
You are correct that the SIS did not cause the accident at Bhopal - it never had a chance to, because it had been turned off to save money. Recommendations for further mitigation measures were never instituted. As in most accidents, there were several root causes, stretching all the way back to the conceptual design stage of the project. Training was an issue as well. All layers throughout a project and facility must work together to reduce the likelihood of an incident; in this case, if even one layer of protection had been operating correctly, the accident could have been avoided. From what I have read, the operators did everything they could to prevent it.
Bhopal is a case where several layers of protection failed (for varying reasons) at the same time. I have referred to it as an accident, but it was a catastrophe that could have been avoided.
One point to note - if you give a human a choice in a stressful, critical moment, the odds are they will make the wrong decision. A statistic I heard the other day is that NASA trains its pilots for three days straight before a mission and still expects the correct decision to be made less than 50% of the time.
The single and specific purpose of an SIS is to monitor the process for deviations and place it in a safe state before a catastrophe occurs. I must also state that we should not be putting in SISs at will; they should be considered only after all other avenues have been exhausted. Smaller is definitely better.
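To make that "monitor for deviations and trip to a safe state" role concrete, here is a minimal sketch of SIS logic-solver behavior. The setpoint, units, and the 2oo3 voting scheme are illustrative assumptions, not taken from any real system:

```python
# Illustrative sketch of SIS trip logic: monitor a process variable and,
# on a confirmed deviation, drive the process to a safe state.
# The setpoint and 2oo3 voting arrangement are hypothetical examples.

TRIP_SETPOINT_KPA = 850.0  # hypothetical high-pressure trip setpoint

def two_out_of_three(readings, setpoint):
    """2oo3 voting: trip only if at least two transmitters exceed the setpoint."""
    votes = sum(1 for r in readings if r > setpoint)
    return votes >= 2

def sis_scan(pressure_readings):
    """One logic-solver scan: returns the demanded state of the final element."""
    if two_out_of_three(pressure_readings, TRIP_SETPOINT_KPA):
        return "TRIP"  # de-energize to close the shutdown valve (safe state)
    return "RUN"

# A single failed-high transmitter does not cause a spurious trip:
print(sis_scan([900.0, 820.0, 815.0]))  # RUN
# A confirmed deviation does:
print(sis_scan([900.0, 880.0, 815.0]))  # TRIP
```

The voting arrangement is one reason an SIS can stay small and dedicated: it does one job, deterministically, on every scan.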
There are several reasons for segregation between regulatory and safety systems:
1. Single point of failure
2. Common Cause
3. Regulatory systems typically do not monitor the circuits and states of safety-critical functions. Safety-critical functions are dormant - hopefully for years - but we must know that they will work when we need them to.
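Point 3 is why proof testing matters: because a safety function sits dormant, its average probability of failure on demand (PFDavg) grows with the proof-test interval. A common first-order approximation is PFDavg ≈ λ_DU × TI / 2. The failure rate below is a made-up illustrative number, not from any vendor data:

```python
# First-order approximation of average probability of failure on demand
# for a dormant safety function: PFDavg ~= lambda_DU * TI / 2.
# The dangerous-undetected failure rate is illustrative only.

LAMBDA_DU = 2e-6  # dangerous undetected failures per hour (hypothetical)

def pfd_avg(lambda_du, test_interval_hours):
    """Average PFD over one proof-test interval (simple approximation)."""
    return lambda_du * test_interval_hours / 2.0

annual = pfd_avg(LAMBDA_DU, 8760)           # proof-tested once a year
five_yearly = pfd_avg(LAMBDA_DU, 5 * 8760)  # proof-tested every five years

print(f"annual test:    PFDavg = {annual:.4f}")       # 0.0088
print(f"five-year test: PFDavg = {five_yearly:.4f}")  # 0.0438
```

Stretching the test interval from one year to five degrades this hypothetical function by a factor of five - the kind of silent degradation a regulatory system never sees.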
I know of several SCADA systems that do not provide the level of risk reduction required. I hear of failures in ESD equipment all the time, and the owners insist on fixing the symptoms rather than going back to the root cause. If there is ever an accident on one of those lines, I know that the SCADA system will be a major contributing factor. Because they are not following a PSM program, they turn a blind eye to the possibility of failure. (What to do about it is a topic for a different post - ethics.)
One of the root causes of Texas City was an operational error that could have been avoided if the safety system had been operational. A root cause of the Westray Mine accident in Canada was the owner's neglect in applying a layer of protection (dust control). There are numerous others. My point is that while human intervention is important, it cannot be the only layer of protection. We forget, we get distracted, we get tired, we get replaced at the end of a shift, and so on. Several layers, as many as possible, need to be in place in case one fails - and not just the human factor, either.
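The "several independent layers" argument can be sketched numerically. In a LOPA-style calculation, the mitigated event frequency is the initiating frequency times the product of each independent layer's PFD, so bypassing one layer (as at Bhopal or Texas City) can raise residual risk by orders of magnitude. Every frequency and PFD below is an illustrative placeholder, not a figure from the accidents discussed:

```python
# LOPA-style sketch: mitigated frequency = initiating frequency * product of
# the independent protection layers' PFDs. All numbers are placeholders.

from functools import reduce

INITIATING_FREQ = 0.1  # initiating events per year (hypothetical)

layers = {
    "operator response": 0.1,   # PFD of human intervention
    "SIS trip":          0.01,  # PFD of the safety instrumented function
    "relief/scrubber":   0.01,  # PFD of the mechanical mitigation layer
}

def mitigated_frequency(init_freq, pfds):
    """Multiply the initiating frequency through every independent layer."""
    return init_freq * reduce(lambda a, b: a * b, pfds, 1.0)

all_layers = mitigated_frequency(INITIATING_FREQ, layers.values())
sis_bypassed = mitigated_frequency(
    INITIATING_FREQ, [p for name, p in layers.items() if name != "SIS trip"])

print(f"all layers in service: {all_layers:.0e} events/yr")    # 1e-06
print(f"SIS bypassed:          {sis_bypassed:.0e} events/yr")  # 1e-04
```

Note the multiplication only holds when the layers are independent - which is exactly why common cause and single points of failure appear on the segregation list above.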