[originally posted in June, 2009]
Scores of prophets have predicted the end of the world or large-scale destruction. According to the Skeptic’s Dictionary, Jehovah’s Witnesses “have been wrong so many times that they’ve quit making specific predictions, but they’re still warning us that the end is near.” Among others who missed the mark were Jeane Dixon, John Gribbin, and Stephen Plagemann. Obviously, the most important thing to remember when thinking critically about this is that
EVERY SINGLE DOOMSDAY PROPHECY–WHOSE PREDICTED DOOMSDAY HAS PASSED–HAS BEEN WRONG
Well, we survived the December 21, 2012 global catastrophe (or, I’m assuming we will, since I’m writing this with just over 4 hours to go on 12-21-12). I recall a student in a nontraditional college class in the spring of 1999 whose final exam was to occur in January of 2000. During the first few days of class he mentioned there was no need to study for the final exam, since Y2K was going to wreak such havoc on the world that going to college would be rendered meaningless. I tried to talk some sense into him, and on the day before the last class I asked him once and for all, “are you going to study for the final exam?” He nervously replied, “uhh, I think I should.” Do you think he made the right decision?
THE EFFECT OF TECHNOLOGY ON CLASSROOM LEARNING AND ATTENTION: WHAT ROLE SHOULD IT TAKE IN THE CLASSROOM?
Invited Post by Lisa Lawter, Ph.D. and Marshall Andrew Glenn, Ph.D.
Technology has invaded the American classroom, competing ever more aggressively against traditional pedagogical practices that remain tethered to the time-limited artifact of a rural-agrarian school year, all in an effort to increase the efficiency of learning outcomes. Furthermore, these expectations are now ensconced in “core curricula” standards. Educators are faced with the perennial challenge of how to provide a depth, breadth and appreciation of subject matter that makes for an educated and informed public.
While many educators extol the benefits technology brings to teaching, it also has its “distractors”: it is getting some bad press for its deleterious effects on the attention system, to the point that some teachers complain the “depth and breadth” of learning are being compromised.
In order to better understand the construct of attention, it is instructive to draw our attention to the neuropsychological components of attention and the underlying brain structures likely associated with each. According to Mirsky and Duncan (2001), attention is a multifaceted system implicating several brain structures with specialized roles. The encode element, with supporting brain structures of the amygdala and hippocampus, mediates the brain’s capacity to hold onto information briefly and perform some mental operation on it, i.e., working memory. The focus/execute element, implicating the inferior parietal lobule, superior temporal gyrus and some structures of the corpus striatum, allows for focusing on a stimulus in the midst of distracting stimuli while executing a quick response. The shift element, with supporting brain structures of the dorsolateral prefrontal cortex and anterior cingulate gyrus, allows for shifting attention from one stimulus to another. The sustain element, with supporting structures of the brainstem, reticular activating system (RAS) and the midline of the thalamus, mediates sustained focus, or vigilance. Finally, the stabilize element, supported by the other attention systems but with exact structures unknown, mediates the consistency of responding to a “target” stimulus. It is clear that our attentional system is complex, multifaceted and influenced by both biological and environmental factors.
Of course, it is important to keep in mind that the biological governance of our attention system is influenced, i.e., sculpted, by a rapidly changing post-modern world, the effects of which have not gone unnoticed. For example, Dimitri Christakis et al. (2004) studied the TV viewing time of 1,300 children, ages 1 to 3 years, as rated by their mothers on a behavior rating scale, and later evaluated their attention and behaviors at age 7. Children whose mothers rated them as frequent TV viewers tended to score in the highest 10% for problems in attention, concentration, impulsiveness and behavioral control. Moreover, for every additional hour of TV viewing, the chances of experiencing attention problems increased by 10%. This preliminary study suggested an association between early TV exposure and attention problems.
Computers were once huge machines in the basement; now they are in our pockets. School districts are stretching budgets thin to fill their schools with the latest technology and super software. The average classroom has at least one desktop computer and a class set of laptops, iPads are appearing, and it is now standard to have a SmartBoard in classrooms down to pre-kindergarten. New technology is hitting the market daily. The equipment is simple to learn and the possibilities are limitless. Answers to all questions are just a click away. This all sounds good, but what are teachers’ thoughts about educational technology, and what do parents think about it? And most important, what do students think about technology in the classroom?
Teachers say: According to a New York Times article entitled “Technology Changing How Students Learn, Teachers Say,” teachers report that students have spent more time with screens than they spend in school. In this article teachers were concerned with a decline in students’ ability to analyze and think deeply about a topic. Shortened attention spans were also listed as an issue teachers feel is caused by digital technologies. In the same article, Dr. Christakis remarked that overreliance on technology “makes reality by comparison uninteresting.” Schools often use computer programs to assist students who need remediation. Teachers are saying that what these students really need is feedback from an actual teacher and quality instruction, not a bell that goes off when you get the answer correct on the screen. Teachers do credit technology with improving research skills. At this juncture, results are equivocal and more research is needed. But one cannot help but sympathize with teachers frustrated by ever-rising expectations that must be met in a media-filled, sound-bite, YouTube world of attention-challenged students.
Parents say: Parents are worried about safety. The Internet has the potential to expose children and youth to inappropriate information. One click of the mouse and children are on websites that have questionable content and may be unsuitable for their age. Some parents grew up before the age of technology, so they must spend time learning how to use it; often children are more technologically savvy than their parents. Some parents of children with disabilities embrace technology: assistive technology devices have made it possible for their children to communicate and to participate in school and in the community. This would not have been possible before technology.
The U.S. Department of Education is concerned about how schools are using technology. The LEAD Commission, a public-private commission created by U.S. Secretary of Education Arne Duncan, is charged with the task of crafting a blueprint for better use of educational technology. Teachers and parents are being surveyed. Yet the crucial component of technology in classrooms is the students. What do students think about the way technology is used in classrooms? The field of education needs to hear from students on questions such as these:
1. What do students think is the best use of technology in schools?
2. What do students think the roles of computers should be in the classroom?
3. What do students think about computers being used as tutors?
4. Do students want more time with the teacher or is the computer instruction enough?
5. What is a good use of the internet in classrooms?
Christakis, D., Zimmerman, F. J., DiGiuseppe, D. L., & McCarty, C. A. (2004). Early television exposure and subsequent attentional problems in children. Pediatrics, 113(4). Retrieved from http://pediatrics.aappublications.org/content/113/4/708.full
Mirsky, A. F., & Duncan, C. C. (2001). A nosology of disorders of attention. Annals of the New York Academy of Sciences, 931, 17-32. doi: 10.1111/j.1749-6632.2001.tb05771.x
Richtel, M. (2012, November 1). Technology changing how students learn, teachers say. The New York Times.
Marshall Andrew Glenn, Ph.D. is an Assistant Professor in the Applied Behavioral Studies program at Oklahoma City University. He holds a Diplomate from the American Board of School Neuropsychology and also serves as an Examiner.
Lisa Lawter, Ph.D. is an Assistant Professor in the Department of Education at Oklahoma City University. Her focus area is special education.
Invited Post by Mark Griffin, Ph.D.
[Dr. Griffin's Personal Blog: markgriffinessays.wordpress.com]
A curious paradox greets the traveler to the American Southwest and “Heartland” –the paradox of a place that is growing more diverse yet less hospitable to outsiders. On the one hand, it’s a place where pick-up trucks are as apt to broadcast the polka-inflected “norteño” ballads of rural Mexico as the familiar twang of Alan Jackson. In small towns where it would have been unthinkable a generation ago, signs welcome the business of immigrant customers with the words Se habla español, and Mexican restaurants that would’ve seemed like surreal apparitions now spice up the semi-abandoned downtown areas.
On the other hand, though, the last few years have seen a spate of harsh laws against illegal immigration (across the Deep South and Southwest, from Alabama to Arizona) that seem designed to make immigrants as a whole feel unwelcome and under siege. For, like it or not, these laws are blunt weapons that fall on the heads of legal residents no less than illegal ones –breaking up extended families and making them feel like second-class citizens. Politicians who are now sweeping elections across the region seem to have forgotten the much sunnier example of Ronald Reagan: he of the avuncular, welcoming disposition; he who’d seemed so freshly minted from a Norman Rockwell painting.
Uncle Sam wears a scowl these days, but it’s not just a rural, red-state scowl. In the halls of academe (at Harvard, no less), Samuel Huntington has painted a dire picture of a “clash of civilizations” between East and West, North and South. Huntington would have us all believe that the Spanish-speaking workers toiling on America’s farms and staffing its restaurants are harbingers of an alien “civilization” that, absent great vigilance, will undermine America’s “Anglo-Protestant” one. The Berlin Wall has fallen, and now we’re called to reconstruct it on our southern border. “While Muslims pose the immediate problem to Europe,” he states, “Mexicans pose the problem for the United States.” He proceeds to warn readers that “the results of American military expansion in the nineteenth century could be threatened and possibly reversed by Mexican demographic expansion in the twenty-first century.”
I maintain that this “clash of civilizations” description of our southern border is wrong on more than one level. Our Latin American neighbors, in spite of the vast economic inequities that divide us, are not members of some “alien” non-Western civilization. As former colonies of Spain and Portugal, the culture of that region is as Western as ours –albeit a more Mediterranean, predominately-Catholic variant. But more to the point: even if Latin American culture were an “alien” civilization, this would be no justification for its liquidation in the American melting pot. For, like most nations in the world, the US is part of a global yin-and-yang of civilizations, not some “pure” and fortified monolith. In this essay I will examine the work of two important writers (Gloria Anzaldúa from the US/Mexico border region and Miguel de Unamuno from Spain) with a view to sketching an alternative picture to that of clashing, fenced-in monoliths: one in which the lines between rival cultures are blurred and one culture exists within the heart of the other.
Gloria Anzaldúa: Mestizaje, Aztlán and Societal Culture
“The US-Mexican border,” writes Gloria Anzaldúa, “es una herida abierta where the Third World grates against the first and bleeds. And before a scab forms it hemorrhages again, the lifeblood of two worlds merging to form a third country –a border culture.” (25) One might even read her entire work as the blurring and deconstruction of the boundaries erected by the likes of Huntington. And to do this she wields two strategies: the notion of mestizaje (racial and cultural hybridization), and an appeal to the most ancient historical sources.
Anzaldúa devotes an entire essay of her book Borderlands/La Frontera (1987) to the concept of mestizaje and a call for a “new mestiza consciousness.” (99) She cites José Vasconcelos, the first major theorist and advocate of the concept in his 1925 work La raza cósmica, but clearly has a much more radical understanding of the notion than he did. Where Vasconcelos celebrated and foresaw a racial/cultural synthesis that would produce a uniform Mexican (and perhaps even “cosmic”) consciousness, Anzaldúa writes about the embrace of difference and unresolved contradiction –both at the national and personal levels. Her mestiza is not just an inhabitant of the blurred boundaries of the literal borderlands (“where the Third World grates against the first and bleeds”) but also one who is able to live with and transcend all dualism and contradiction:
The new mestiza copes by developing a tolerance for contradictions, a tolerance for ambiguity. She learns to be an Indian in Mexican culture, to be a Mexican from an Anglo point of view…Not only does she sustain contradictions, she turns the ambivalence into something else. (101)
One path to this transcendence (one which the author herself describes undergoing) is a shamanistic embrace of the Coatlicue archetype, an Aztec figure that encompasses opposing forces: “Ella es el monstruo que se tragó todos los seres vivientes y los astros, es el monstruo que se traga el sol cada tarde y le da luz cada mañana.” (68)
In short, there is a figurative borderland which resides within all who embrace and transcend contradiction. It can exist anywhere and everywhere, even in the heart of a geographic culture or civilization.
In addition to blurring boundaries, Anzaldúa also turns our attention to the most ancient historical sources –in order to claim a space for minorities within the U.S. The pattern of north-south migration across what is now the US/Mexico border is, she reminds us, part of a historic pattern that goes back to ancient, pre-Aztec times. The ancestral homeland of the Aztecs was the mythical Aztlán, which she identifies with the modern US Southwest:
In 1000 B.C., descendants of the original Cochise people migrated into what is now Mexico and Central America and became the direct ancestors of many of the Mexican people…The Aztecs (the Nahuatl word for the people of Aztlán) left the southwest in 1168 A.D. (26)
Anzaldúa’s reference to these ancient historical migrations is not a rationale for the re-annexation of the American Southwest by Mexico –as some demagogues might fear. It is a simple reminder that Native American and mixed-race peoples have a presence at the historical and geographic heart of what is now the U.S. Southwest. A yin at the center of the yang –or vice-versa. It is a rationale for inclusion of the other, even when the other is not part of “Western” civilization.
We must grant, of course, that there are legitimate anxieties surrounding the question of national unity, and I am no advocate of national fragmentation, or “balkanization.” All nations benefit from a common public language (English in our case, Spanish in the case of Mexico and Spain) and some “societal culture” that links everyone together. But I don’t think that common language should be exclusive –lest we remain more monolingual than everyone else in the world; and whatever unifying culture we have as Americans should be much “thinner” than the terms “Anglo-Protestant” or “Catholic” would suggest. And we’d do best not to circumscribe our societal culture within Western civilization –lest we suggest that anyone who is not “Western” (or Christian) is not a full-fledged American.
To be a citizen of the US is to belong to a societal culture, not to a particular ethnic or religious group, or even to a civilization. As political scientist Will Kymlicka defines it, a societal culture is the culture associated with citizenship and with a national group: the culture that flows from the particular set of rights, duties and historical consciousness that citizenship confers. It’s to share the symbols and rituals grounded in that narrative: July-4th fireworks and parades, the images of Lincoln and Washington on our coins, the remembrance of veterans and slain civil rights leaders. It’s also: filing taxes before April 15, voting in November (or choosing not to and still being bombarded by campaign ads), and moving around on interstate highways.
To the charge that such a notion of American culture is thin and insubstantial, we must cite the advantages of such restraint and remind ourselves of the alternative. It’s this minimal consensus that unifies us without requiring us to impose some “pure” cultural norm on minorities of various stripes. And far from “dumbing down” our culture, it has, after all, made room for the peaceful flourishing of a whole range of American subcultures: regional, ethnic and religious. It has ample room for Gloria Anzaldúa’s notion of mestizaje. And it has helped us to avoid the vicious cycle of repression and balkanization that has plagued other countries, from Spain to the Balkans themselves. Where societal cultures are concerned, it is better to think broad and shallow rather than narrow and deep.
Miguel de Unamuno: Spain, Ancient Sources and the “Other”
As a Hispanist, I’ve found the example of Spain to be instructive and of special interest –one that parallels our own in revealing ways. In addition to its turbulent past, Spain can be said to inhabit one of the major “fault lines” of our modern world: the border between Europe and predominately-Islamic North Africa. It is a nation of glories and defeats on a grand scale, of dramatic highs and lows.
Miguel de Unamuno, writing in the “crisis” that was Spain’s loss of power and empire around 1898 (one of its dramatic lows), suggested that his country’s rich popular culture would be more vital to the extent that it was more open and porous, less politicized, and disengaged from official sponsorship. He argued in his work En torno al casticismo that the heart of Spanish tradition was fluid and dynamic, irreducible to the doings of its generals and politicians. Trying to head off the vicious cycles of repression and balkanization that he saw coming, he set forth in these essays a set of radical proposals (at least for his time): church-state separation, the canonization of Don Quixote as a national icon, and the re-claiming of the nation’s authentic popular and regional cultures.
It is interesting that these were not the words of a Jacobin trying to rid the modern world of religion, but of a passionate (if heterodox) Catholic and Spanish patriot –whose aim was cultural revitalization. He was putting to rest the notion that a more broad and minimal societal culture would lead to general cultural decline. On the contrary, he was suggesting that the “thinner” our nations’ societal cultures are, the more vital their popular cultures will be.
Like Don Quixote before him, Unamuno was up against a particular windmill: the idea, popular in his time and, alas, still popular in ours, that a nation can and should be “pure” and uncontaminated by foreign presences or civilizations. But he knew that Spanish culture itself had never been pure and that, if it had been, its grandeurs would have been impossible. The grandeur of the Alhambra, the Moorish splendor of Seville and Córdoba, and even the European Renaissance itself would have been unimaginable without Spain’s medieval Jewish and Islamic presences. For this reason he would later write, in a poem celebrating Córdoba, that “Rome chants through the mosque.” In places like Seville and Córdoba, the heritage of the Roman Empire and the Roman Catholic Church fuses with Spain’s Islamic heritage: Roma canta en la mezquita.
A common intuition, then, links the work of Gloria Anzaldúa and Miguel de Unamuno: the idea of returning to ancient sources in order to make a place for the “other.” There is an Islamic dot in the heart of Spain and its Catholic empire, just as there is an ancient Christian (Coptic) cathedral in the heart of Islamic Cairo. An ancient Native American heritage survives in the heart of the predominately-Protestant-and-Catholic nations of the Americas. Both writers blur boundaries and point to the existence of a Tao of civilizations, not their inevitable “clash.” From both we can infer that a minimal conception of a country’s societal culture (one that makes room for this yin-and-yang of civilizations) will make that nation not just more free and equal, but also more vital.
Glance at a newspaper. Listen to the radio. Watch TV. What you see is people creating news, stories, and yes, history. How accurate is it? When does something become history? Does a fact, an event, a person become history – or do we create history by creating meaning and significance for that fact, that event, that person?
Actually, in the modern field of history, we argue the latter. Not all facts become history, although they may very well be in the past. Not all past events or people become history –and therein lies the point. History is a creation of people, in all their messy glory. What differentiates history from rumor, innuendo, lies, spin and rhetoric are the methods used –and that is what historians are supposed to apply. We’re supposed to use our training and skills to gather, analyze, synthesize and interpret evidence in specific, historically appropriate ways. The evidence has to be examined carefully before being incorporated into the analysis, so source credibility is essential. Evidence has to be respected: just because we don’t like a particular fact, event or person doesn’t mean we get to ignore it or exclude it from our work. Being human, we have to recognize that we are going to prefer one thing (event, fact, person, period) more than another. Being professional, we have to acknowledge that those preferences may well bias our work and consciously take steps to mitigate that bias. We follow certain methodologies, document our process and sources, and argue our points with colleagues so that our own search for meaning is itself credible.
And that is what professional historians do. Many of us get into the field because we’re fascinated by the stories of the past: the life of a medieval miller, the arc of change brought about by a particular process, whatever. Personally, I love what I discovered once I got into the field of history: research and teaching. My research takes place in a government archive in Paris, France. Hours spent in those archives, reading other people’s letters, give me intellectual stimulation and historical insights I just don’t get anywhere else.
Teaching is a whole different rush. I teach World History, Ancient Egypt & Greece (and others), European history and various isms (nationalism, imperialism, decolonization, etc.). Some stories, but mostly I help students learn to use the tools of the professional historian: analysis, synthesis and interpretation of sources, and communication of that work. Students are challenged to be apprentice historians, and most love it. Most realize that history isn’t about dead men, wars, kings or treaties –that it is, indeed, a creative process.
History is not just what happened in the past; it is the interpretations created by generations of historians who seek out evidence and impute importance to that evidence in relation to other evidence, interpretations and contexts. All kinds of uses are made of the evidence by non-professionals in service of other goals, such as those of politicians, leaders of business and cultural groups, religious leaders and others. Some of those people may well be ‘professionally trained’ historians, but all too frequently they use and abuse history rather than do it. They are not acting as professional historians; they are acting in some other capacity. Recognizing the abuse of history is tricky, but necessary. Recognizing the creative aspect of historical study is vital, challenging practitioners to constantly assess their own preconceptions and limitations and how those liberate and limit their analyses and interpretations.
Invited Post by Dennis Jowaisas, Ph.D.
One way to solve this problem (the problem of explaining evil behavior) is to decide that evil behavior results from possession of a person by evil “spirits”, whatever that may mean in a particular culture’s belief system. Another way is to decide that the people who behave in an evil way, always culturally contextualized, are evil, or bad, in and of themselves. “They should know better.” Therefore, they are responsible. But even the possession theory of evil may hold the person responsible: the possession results from some moral flaw or from overt or covert misbehavior. Holding the person responsible makes it easy to deal with them: we punish evil. Exactly how we punish depends upon the culture.
At the outset I want to acknowledge that modern psychology and neuroscience have shown that some folks are possessed! They possess a nervous system that is dramatically different in function from the nervous system of the average citizen. Some persons with a history of violence and difficulty in complying with basic social and legal rules have damage to parts of the brain that govern or influence exactly those kinds of behavior. We know enough about brain-behavior interactions to be sure about this. Brain disorders that involve the limbic system, one of the primary systems regulating emotion and memory, reliably result in poorly regulated social behavior and violent behavior. This is particularly true of damage to the septal, hippocampal, and amygdala areas of the limbic system. Damage to or malfunctioning of prefrontal areas of the cortex, areas that exert executive control over emotional expression and decision-making, results in persons who make consistently poor judgments about how to behave in various situations. They also exhibit hasty or impulsive decisions that seem to ignore obvious outcomes or consequences. The orbitofrontal cortex seems particularly important in this behavior. All of these limbic and prefrontal brain parts are interconnected, and emotional expression requires an intact and “normally” active brain system.
Some studies show that persons diagnosed as having antisocial personality disorder (APD) because of their cruel conduct and/or criminal behavior have relatively unresponsive limbic systems and orbitofrontal cortex. Consequently they are less “fearful” of social consequences for aberrant behavior. In essence, they are difficult to “socialize” through the typical mechanisms of punishment, guilt and shame. To influence behavior in these ways requires a conventionally reactive limbic, hypothalamic and prefrontal cortex arousal system.
In the APD syndrome just described we see evidence of the “bad chemicals” that bedeviled Dwayne Hoover, Vonnegut’s protagonist in “Breakfast of Champions”. The brain depends on a very delicate balance of a dozen or more neurotransmitter chemicals for proper functioning. We now know that most of the so-called psychoses of 40 years ago are disorders of neurotransmitter balance, and the prevalent treatments are drugs that selectively increase or decrease the concentrations of these chemicals in the various subsystems of the brain.
So, in this limited, biological sense we could say that evil resides within the person. But I don’t think that is what most folks mean when they claim the inner person as the source of evil. I believe those same folks will not be comfortable with the next consideration: evil lies outside the person, not in the form of some malevolent devilish source, but in the environment’s impact upon the person. In psychology we traditionally label this a behaviorist thesis but over the last three decades many social psychologists have focused on societal and cultural variables in order to understand why we act as we do.
Our first hint that the environment can have such powerful effects on our behavior comes from some old studies of honesty. Basically, what those studies found was that most students will not cheat on a task unless the outcome is important and the chances of being caught are slim. When those two variables were manipulated so that the outcome really mattered and there was little chance of being discovered cheating, almost everyone cheated. OK, you say, but that is just psychology research in an artificial setting, not the “real” world. And you would have a point, except…
There are lots of other studies that show how easily people are influenced by circumstances. Two classic examples are the Solomon Asch line-judgment studies and the Stanley Milgram obedience investigations. You may know about them, and the details are readily available online and in introductory psychology texts. Here are the brief summaries. Asch showed that one third of students would conform to erroneous judgments of the length of a line, even when the comparison line was twice as long as the target line. All it took was three students, in cahoots with the investigator, making these wild judgments prior to the uninformed true participant. Now, seriously, how do we account for these results except by social influence, i.e., conformity? There is no way the target person could misperceive the length. In fact, some students claimed the lines “really were the same length” instead of accepting the explanation that they were conforming. How powerful is pressure to conform, eh? Remember, a third of students were so easily influenced they defied the evidence of their eyes, though not all rejected the conformity explanation when debriefed. Asch also showed the conditions under which participants would resist conformity. Social situations dramatically affect our most basic judgments.
OK, I agree that the situation was rather artificial and the judgment wasn’t that important. But how about this study? A group of college students were asked to help train another group of students from a nearby college by collectively shocking them when they made mistakes on a task that was administered via computer. Of course, no actual shocks were delivered. In fact, there were no other actual students; it was all computer simulated, a sham controlled by the investigator. There were three experimental conditions for the trainers, each consisting of a comment about the arriving students that they “overheard”:
Neutral: “The subjects from the other school are here”.
Humanized: “The subjects from the other school are here; they seem nice”.
Dehumanized: “The subjects from the other school are here; they seem like ‘animals’”.
As you probably anticipated by now, the most “shocks” in the subsequent “training” were delivered by the group hearing the dehumanizing statement, and the least by the group hearing the humanizing statement. Remember, the students were randomly assigned to groups, so the only difference was the single, simple comment they overheard.
So, is evil situational? Are good people made to do bad things by circumstances? Does the social environment really control so much of our behavior? Perhaps the most active and, consequently, the most famous recent psychologist to promote this view is Philip Zimbardo. His thesis is that most folks are at least morally average, generally doing the right thing most of the time. However, Zimbardo claims, decades of research show that our behavior is very malleable. His argument is readily available in two places. The first is available at http://www.psichi.org/Pubs/Articles/Article_72.aspx, an article based on Zimbardo’s address to the national psychology honor society and originally titled “The Psychology of Evil: Seducing Good People Into Evil Deeds”.
The second source is his TED presentation, where his thesis is that the abuse by American soldiers at Abu Ghraib was not the result of a few “bad apples” but was caused by the “bad barrel”, the social environment arranged by the military command and CIA. (http://www.ted.com/talks/philip_zimbardo_on_the_psychology_of_evil.html)
In his TED talk, Zimbardo summarizes the evidence of classic experiments like Asch’s, his own Stanford Prison study, and Stanley Milgram’s obedience-to-authority research. I leave it to you to investigate this argument now that I have introduced you to the dilemma of the source of evil.
Selective Bibliography and Videos
Bandura, A., Underwood, B., & Fromson, M. E. (1975). Disinhibition of aggression through diffusion of responsibility and dehumanization of victims. Journal of Research in Personality, 9, 253-269.
Zimbardo, P. The psychology of evil. Eye on Psi Chi, 5(1).
Editor’s Note: Major source for this presentation.
Zimbardo, P. G. (1970). The human choice: Individuation, reason, and order versus deindividuation, impulse, and chaos. In W. J. Arnold & D. Levine (Eds.), Nebraska Symposium on Motivation: Vol. 17 (pp. 237-307). Lincoln: University of Nebraska Press.
One result of Zimbardo’s commitment to “doing good”: a positive psychology model of shyness.
Details of the Stanford Prison experiment in slides
Zimbardo’s tour of the former “prison”
Asch’s experiment on conformity
NYT Science interview with Zimbardo
Zimbardo and Abu Ghraib
Podcast Part 2 of interview with PZ from Cardiff University
Podcast about Milgram Experiments on Obedience
Training soldiers to kill, but can they cope?
Watch closely as they shift–are there 12 or 13 men?
Dr. Richard Wiseman
On December 28, 2005, on CNN’s Larry King Live, I had this exchange with alleged psychic Sylvia Browne after another guest asked Sylvia the whereabouts of Osama bin Laden (from the CNN transcript):
KING: All right, Dr. Farha what’s your complaint? I know you took on Sylvia in an article but what’s your complaint about the overall concept here?
FARHA: Well, first of all, let’s put this in perspective here. Last year on the Montel Williams Show, Sylvia predicted that Osama bin Laden is dead. I don’t know if Sylvia still thinks that or not but I’d sure like to know.
BROWNE: Yes, I do.
FARHA: My whole take on this is — OK, very good, well we’ll find out sometime Sylvia. We’ll find out.
Well, Sylvia—we found out. Of course we now know that he was clearly alive at the time of her 2004 prediction and that he lived until early May of 2011. Add this to her recent idiotic failed prediction that aliens would visit Earth by 2010. Another in the growing list of failed predictions by Sylvia Browne…