Friday, April 17, 2020
Pathogens
Analyse Barry Turner's ideas on pathogens and critically evaluate how pathogens could lead to a large-scale disaster. In your discussion you are encouraged to investigate the thoughts of other leading authors on root cause analysis and how these compare and contrast with Turner's ideas. (2,996 words, including diagrams).

In the course of this paper I will assess Barry Turner's ideas on pathogens in his man-made disaster model, whilst evaluating its practical relevance compared to root cause analysis, using historical events to provide context and support my conclusions.

Much of the contemporary basis for viewing disasters as having a social as well as a technical origin was established by the 'man-made disaster model' of Barry Turner (Turner, 1978; Turner, 1994; Turner and Pidgeon, 1997). His work stipulated the presence of a social factor inherent in accidents, generally due to the complex nature of the systems in which they arise. This work has since been built upon in both US and European contexts (Vaughan, 1990; Toft and Reynolds, 1997), two such developments being Perrow's (1984) normal accident theory and Reason's (1990) Swiss cheese model.

This body of literature and its subjective approach to identifying risks are often said to adopt a 'socio-technical' systems view. Crucially, this lens of system design and management recognises the need to consider broadly both the technical and social factors at play in disasters (Cherns, 1987), in contrast to objective methods, which are deemed to overlook the 'socio' element. As such, those concerned with maintaining control within their organisation must consider three channels of control: managerial and administrative as well as technical (figure 1).

(figure 1)

Essential to the socio-technical framework is the recognition that conditions for disaster do not arise overnight but instead "accumulate over a period of time" during an incubation period (Turner and Pidgeon, 1997, p. 72). During this time a confluence of preconditions, known as pathogens, interact with one another. It is important at this point to highlight two distinct features of socio-technical analysis. Firstly, it is the accumulation and interaction of such pathogens which foster disaster, whereas each independently is unlikely to result in a similarly extreme outcome. Secondly, and of similar importance, is the axiom that disasters are a "significant disruption or collapse of the existing cultural beliefs and norms about hazards" (Pidgeon and O'Leary, 2000, p. 16). Synthesising these two points, incubation periods occur when a series of small events, discrepant with existing organisational norms, occurs and accumulates unnoticed. Disasters are then precipitated by a trigger event which, in light of the build-up of pathogens to a critical level, leads to catastrophe.

In his paper, Turner (1994) identifies two distinct trends which can be seen as symptomatic of pathogen build-up in complex systems: sloppy management and unsound system design, both of which we will now explore. Within the umbrella issue of poor management I have collected Turner's thoughts and identified specific precondition enablers. Foremost is the issue of information misuse, and specifically information asymmetries.
Such asymmetries might arise when individuals throughout the hierarchy fail to pass on and reveal information, whether deliberately or otherwise, whilst often information is mistakenly passed to those who cannot effectively use it. Importantly, there are also cases of deliberate disregard for information, as we will see shortly. Significantly, this information issue is compounded because it often cannot be readily identified: agents operating within the environment believe it is normal and acceptable for such a 'degraded state' to prevail (Weir, 1991). If this is the case, it precludes the possibility of precondition neutralisation, as it becomes impossible to recognise the behaviour as divergent once it is no longer discrepant with the organisational norm. Turner (1994) goes on to postulate that an efficient operating state of information cannot in fact exist, as the balance it calls for is impossible to reach. Whilst too little information fosters the so-called 'degraded environment', we can envisage that too much information would result in equal, if not greater, inefficiencies due to a saturated environment underpinned by the bounded rationality of agents within the system.

Concurrently, we find certain system structures which are known to be particularly susceptible to this difficulty. The literature has often spoken of the difficulties that can arise through information friction and overload where we find hierarchical rigidity, such as in a bureaucracy (Simon, 1947), with alarming information failures in the FBI, DOJ and CIA preceding the 9/11 attacks (Kramer, 2005). Similarly, such inflexible structures are prone to the phenomenon of 'groupthink' (Janis, 1982). Here the authority commanded by management allows them not only to enforce their unrealistic views upon surrounding agents, but also to ensure that like-minded agents are brought in beneath them, making the issue systemic.

The second trend identified concerns unsound system design. In such cases, as technological systems become increasingly complex, there is the possibility of disasters occurring due to unforeseen interactions. This theory was developed by Perrow (1984) in response to the nuclear accident at Three Mile Island; such interactions can only be eliminated through system re-engineering (Turner and Toft, 1988). Such actions should focus on moving away from tightly coupled, interdependent elements, as evidenced in serious E. coli outbreaks caused by the spread of infected meat through the standard food distribution network (Pidgeon and O'Leary, 2000).

Support in the literature for the broad scope of socio-technical theory is notable. According to two papers cited by Turner (1994), 70-80% of all disasters are precipitated by a social (administrative and managerial) fault, with subsequent public inquiries proposing a similar percentage of changes in such social contexts (Drogaris, 1991; Turner and Toft, 1988). Similarly, a study by Blockley (1980) found that, in the case of 84 technical failures, greater attention had to be paid to the political and organisational conditions deemed to foster human error (alluding to the aforementioned institutional norms). He found that technical failures were underpinned by managerial and administrative frailties, such as engineers who were fully aware of technical issues but failed to report them.
Also, though such complex systems are inevitably unique, the pattern of failure outlined above is one found time and again in disasters around the world. To illustrate this pattern, I have constructed a chain of causality which contextualises Turner's (1994) model using the Bhopal chemical leak (figure 2).

(figure 2)

Employing a socio-technical systems view uncovered dangerous gaps in the safety culture and environmental awareness not only at the plant, but throughout increasingly industrialised developing economies (Broughton, 2005). Such problems are not confined to developing nations, however; another heavily scrutinised disaster occurred in January 1986, when the space shuttle Challenger exploded 73 seconds after take-off. As with the Bhopal incident, concerns were expressed prior to the disaster; however, a fundamentally different method of post-hoc analysis was used in the Presidential investigation that followed.

Traditionally, accidents had been viewed in a linear, two-dimensional fashion, with each failure linked in a sequential chain (Qureshi, 2008). One such method of assessment is root cause analysis (RCA), which seeks to identify the primary element in the chain and was the method chosen by the Challenger investigative body. Such an approach seems a reasonable and logical response, given that it is inherent in human nature to search for "simple technical solutions as a panacea" (Elliott Smith, 1993, p. 226). In most cases, stakeholders primarily look for a simple explanation to help them understand the issue, often to assess liability. According to the RCA literature, once this root cause has been correctly identified, corrective action should resolve the issue so that it does not occur again.

In light of the events of the Challenger disaster, with both machine and human loss, including a civilian, the public's and government's need to identify a 'root cause' seems understandable, particularly when the circumstances of the disaster point to a technical fault. The investigation, relying heavily on the video evidence available from over 200 cameras, concluded that a failure of the O-ring seal in the right solid rocket booster, rendered ineffective by the cold weather, caused the explosion. From my own investigation and that of others, however, such findings seem to grossly oversimplify the events that led up to the disaster.

Roger Boisjoly, an engineer employed at booster manufacturer Morton Thiokol, already knew of the technical shortcomings, in fact expressing concerns specifically that the shuttle could explode (Seconds From Disaster, 2007). He made two efforts to make senior officials aware prior to launch: initially writing a memo to Thiokol managers six months in advance, and finally arranging a teleconference the night before the ill-fated voyage in an increasingly desperate attempt. The outcome of the conference had little effect in the wake of heavy resistance from NASA representatives such as Lawrence Mulloy, head of rocket booster technology. Incredibly, this resistance came even in the face of photographic evidence of the O-ring's cold-weather liability, from boosters recovered from a Discovery launch in similar weather a year earlier. In an independent investigation, author James Chiles uncovered further issues which could have been readily identified by a more fastidious safety culture (Seconds From Disaster, 2007).
Chiles postulates that, having made it through the initial launch phase, the O-ring rupture would not in fact have proved fatal, as the fuel's aluminium additive resealed the leak after launch. It was only when Challenger passed through a violent jet stream (winds of over 300 kilometres per hour) that this makeshift seal was reopened, causing the explosion. The existence of this jet stream could easily have been identified, as a commercial airliner passing through the launch area half an hour earlier had already reported it, and it subsequently showed up in Challenger's final telemetry. It was overlooked, however, when NASA weather balloons, which had in fact drifted away from the launch area, reported nominal conditions.

Attributing the disaster to the O-ring failure alone applies a narrow view to the issue and might ultimately have done more harm than good. The subsequent investigation into the Columbia disaster years later found NASA liable for not having reformed its organisational safety philosophy in the wake of Challenger. Indeed, Morton Thiokol went on to supply the new shuttle boosters and Lawrence Mulloy was made head of all propulsion systems at NASA. Following the Columbia re-entry disaster it was stated that "the foam debris hit was not the single cause of the Columbia accident, just as the failure of the joint seal that permitted O-ring erosion was not the single cause of Challenger. Both Columbia and Challenger were lost also because of the failure of NASA's organisational system" (Columbia Accident Investigation Board, 2003, p. 195). Similarly telling was an independent investigation conducted by Nobel Prize-winning physicist Richard Feynman. His observations on the Challenger disaster at no point mention the O-ring, focusing instead on the failings of NASA management (Lentz, 1996). Feynman's findings were relegated to the appendix of the investigation report. Lessons were not learned, and it is my belief that the elementary nature of the findings, emphasising a technical fault over obvious gross mismanagement, was at least partially to blame for what followed.

The weaknesses of RCA then become clear, particularly in light of industry developments since the 20th century, with system complexity becoming commonplace. Importantly, Hollnagel (2004) notes that industries particularly problematic for such analysis are aviation, aerospace, telecommunications, power production (nuclear in particular) and healthcare. I believe what makes these distinct is not just their highly technological systems, but the high degree of human autonomy found in these industries. RCA is also undone in the context of complex systems by its sequential view of events as following one another, which means it cannot tackle bidirectional causality (e.g. competitor responses to marketing initiatives (Okes, 2009)). Looking at the Bhopal and NASA examples, it is clear that such systems require an epidemiological approach and perspective: whilst the factors and preconditions can be viewed chronologically, it is not that chronological ordering which caused the disaster. Whilst we could conduct multiple RCAs to identify multiple root elements, there is little consensus on how to aggregate such findings and implement appropriate solutions. This is a particular issue in the healthcare industry, which relies heavily upon RCA (Root cause analysis, 2012).
It is my belief that this over-reliance on a limited tool is due to the common need to assign liability in the wake of healthcare accidents, in order to assess potential legal action and compensation. Further issues are its inability to effectively tackle human errors such as sloppy management, and the tool's limitation to its users' bounded rationality and knowledge: RCA is not able to expand its users' understanding beyond their incumbent mental faculties.

The alignment of the disasters explored with Turner's (1994) model is evident. In the Challenger case particularly, we see how it was the interaction of mismanagement (not utilising critical information), high wind conditions and the technical O-ring deficiencies which combined to cause disaster. According to Chiles (Seconds from Disaster, 2007), had the high-altitude jet stream been absent, the aluminium seal might not have shaken loose and the shuttle might have continued unharmed. This supports our earlier axiom of pathogen interaction as necessary for disaster. Following the events at Hillsborough, the Archbishop of York, Dr John Habgood, speaking at the memorial service, said: "events of the magnitude of Hillsborough don't usually happen just for one single reason, nor is it usually possible to pin the blame on one single scapegoat... Disasters happen because a whole series of mistakes, misjudgments and mischances happen to come together in a deadly combination" (Taylor, 1989, p. 20). Indeed, the public inquiries that followed confirmed as much.

The examples explored illustrate both the broad provenance of preconditions, technical and human, and the existence of an incubation period. As such, they lend credibility to socio-technical theory as a descriptive tool. At the same time, I believe its strengths go beyond post-hoc analysis, making it ideal for use as a prescriptive form of risk assessment and systems design. Recent developments in the literature have focused on this area by learning from high-reliability organisations (HROs). Remedial measures include incident learning systems (illustrated in figure 3 overleaf), the promotion of organisational learning and the re-evaluation of institutional culture in order to realign and redefine norms (Carnes, 2011; Reiman, 2007), as done in the US nuclear power industry. Pidgeon (2012) similarly promotes organisational learning alongside senior management commitment to safety, shared care and concern for hazards and a recognition of their impact, as well as realistic and flexible norms and rules about hazards. Empirical evaluation by Rognin, Salembier and Zouinar (1998), utilising a complex systems approach, has also explored the aviation industry, recognising its use of mutual awareness, mutual monitoring and communication as preventative tools against a pathogenic environment. These prescriptions are all highly congruent with one another, adding weight to their applicability for research-based management.

(figure 3. Cooke and Rohleder, 2006)

This having been said, RCA does have its place amongst certain post-hoc analyses. Many different forms exist and have been employed with great effectiveness in non-complex systems which are isolated from human factors. A distinct example is Toyota's development of the iterative 'five-whys' method (Bodek, 1988), used in tandem with Ishikawa diagrams (Ishikawa, 1968). This combination is a staple tool in the Toyota production process and is required learning during employee induction. An example of its usage is illustrated in figure 4, and a simple sketch of the procedure is given below.
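To make the iterative structure of the five-whys method concrete, the following is a minimal sketch of a five-whys walk over a hypothetical causal chain, loosely modelled on Ohno's well-known machine-stoppage illustration; the function, data and depth limit are my own illustrative assumptions rather than any actual Toyota tooling.

# Minimal illustrative sketch of the iterative 'five-whys' method described above.
# The defect and the chain of answers are hypothetical, loosely based on Ohno's
# classic machine-stoppage example; they are not real Toyota data.

def five_whys(problem, ask_why, max_depth=5):
    """Walk a causal chain by repeatedly asking 'why?' until no further
    answer is available or the conventional depth of five is reached."""
    chain = [problem]
    current = problem
    for _ in range(max_depth):
        cause = ask_why(current)      # each answer becomes the next question
        if cause is None:             # no deeper cause found: treat as the root
            break
        chain.append(cause)
        current = cause
    return chain                      # the last element is the candidate root cause


# Hypothetical question-to-answer mapping standing in for an investigator's notes.
answers = {
    "The machine stopped": "The fuse blew because of an overload",
    "The fuse blew because of an overload": "The bearing was not sufficiently lubricated",
    "The bearing was not sufficiently lubricated": "The lubrication pump was not pumping enough",
    "The lubrication pump was not pumping enough": "The pump shaft was worn",
    "The pump shaft was worn": "No strainer was fitted, so metal swarf got in",
}

chain = five_whys("The machine stopped", answers.get)
for depth, step in enumerate(chain):
    print(f"{'why ' * depth}-> {step}")

The sketch makes plain both the appeal and the limitation discussed above: the procedure assumes a single linear chain of causes, which is exactly the sequential view that breaks down once pathogens interact rather than simply follow one another.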
However, I postulate that Toyota's success with RCA is due to the automated nature of its production system design and the lack of human autonomy found in its plants, rather than pointing to any great power in the framework.

(figure 4. Ohno, 2006)

To summarise, assessing the two models and addressing them as mutually exclusive concepts, RCA clearly has merits when applied to purely technical systems and failings. Whilst there is some literature which maintains that the two are not such exclusive methodologies, this has been beyond the scope of my analysis. Once we begin to look at increasingly complex systems, RCA clearly becomes unsuitable. Socio-technical theory, in contrast, has considerably greater scope as both a descriptive and a prescriptive tool. As such, it is my firm belief that in a world of increasing complexity, with firms heavily investing in the design of more complex systems facilitated by greater computing capabilities, root cause analysis is at best a foundation tool. Often it is likely to be insufficient to identify an appropriate cause, and even when it can, further investigation should be conducted using a systems view which accommodates complexity in order to prescribe suitable changes. As such, in the coming weeks I would expect the investigation into the Texas fertilizer plant explosion to go considerably beyond RCA in its analysis, though using it initially to satisfy the public need for a quick answer in the wake of such human loss.