Invited Post by John Nail, Ph.D.
One of the ongoing conflicts in our sharply divided nation is that between religion and science. Extremists on one side argue that science is attempting to supplant God and religion. Extremists on the other side argue that science ‘has proven that God doesn’t exist’. The battleground for this supposed science-vs.-religion conflict is Darwin’s Theory of Natural Selection (Evolution). The problem isn’t that science and religion inherently conflict; the problem is that the antagonists fail to understand both science and religion.
Science is knowledge of the natural – everything that behaves according to the ‘laws of nature’. We should note that the laws of nature are generalizations based upon observations of nature. Possibly the most famous law of nature is gravity – items fall when dropped. Note that the Law of Gravity is not a theory – it does not explain why items fall when dropped, only that they do. Einstein’s Theory of General Relativity was the first real theory (explanation) of gravity.
Religion deals with the supernatural, as in ‘outside the laws of nature’. Science cannot determine if God does or doesn’t exist as God wouldn’t be bound by the laws of nature. To illustrate the difference between the natural and the supernatural, we can answer the question ‘how many bacterial colonies are on the head of a pin’, but we cannot answer the question ‘how many angels can dance on the head of a pin’. We can scientifically study bacterial colonies; we can’t scientifically study angels. One obeys the laws of nature and the other doesn’t.
Pseudoscience consists of non-scientific beliefs that claim to be science. The most objective test for determining whether something is or is not science is Karl Popper’s concept of falsification: every scientific theory makes predictions that can be tested. If a prediction is verified, the theory survives; if the prediction is not verified – if nature does something different from what the theory predicts – the theory must be modified or discarded. Note that the falsification concept does not mean that a scientific theory (explanation) is wrong – only that the theory is shown to be wrong if it makes wrong predictions.
As an example of the falsification concept, the theory of Evolution predicts that bacteria that survive exposure to an antibiotic will become resistant to that antibiotic. If, instead, the bacteria became more susceptible (less resistant) to the antibiotic, this result would falsify (disprove) Evolution. As most of us know, bacteria do become more resistant (not less resistant) to antibiotics; the problem of antibiotic-resistant bacteria is a serious issue in medicine.
Clearly, theories involving the supernatural do not make testable (verifiable) predictions. The inability to test supernatural explanations makes them not falsifiable and thus, not scientific.
Magisterium is a term for ‘teaching authority’. Almost two decades ago, Stephen Jay Gould argued that there is no inherent conflict between science and religion: science’s magisterium is the natural world, religion’s magisterium is the supernatural world, there is no overlap between them, and therefore there is no conflict between the two. To put this in different terms, everything that follows the laws of nature can be studied by science; everything that does not follow the laws of nature cannot be studied by science. Science cannot study the supernatural, and the supernatural has no place in science.
Creationism (even when it calls itself ‘scientific creationism’) is clearly not science, as it 1) invokes a supernatural creator, 2) is not falsifiable because it makes no testable predictions, and 3) according to the writings of creationism’s supporters, holds that the act of creation cannot be studied. Clearly, this is religion, not science.
The theory of Natural Selection does not inherently conflict with religion unless one truly believes that the Earth is 6,000 years old, as calculated by Bishop Ussher via a literal interpretation of the Old Testament. Note that this ‘young Earth creationism’ argument is an example of religion improperly stepping outside its magisterium.
While the following is not a scientific statement, one can view Evolution (Natural Selection) as ‘the invisible hand of God working through nature’. As a scientist, I don’t find this objectionable, as long as it isn’t confused with science.
DR. NAIL is Chair of the Chemistry Department at Oklahoma City University
[originally posted in June, 2009]
Scores of prophets have predicted the end of the world or large-scale destruction. According to the Skeptic’s Dictionary, Jehovah’s Witnesses “have been wrong so many times that they’ve quit making specific predictions, but they’re still warning us that the end is near.” Among others who missed the mark were Jeane Dixon, and John Gribbin and Stephen Plagemann. Obviously, the most important thing to remember when thinking critically about this is that
EVERY SINGLE DOOMSDAY PROPHECY–WHOSE PREDICTED DOOMSDAY HAS PASSED–HAS BEEN WRONG
Well, we survived the December 21, 2012 global catastrophe (or, I’m assuming we will, since I’m writing this with just over 4 hours to go on 12-21-12). I recall a student in a nontraditional college class in the spring of 1999 whose final exam was to occur in January of 2000. During the first few days of class he mentioned there was no need to study for the final exam, since Y2K was going to wreak such havoc on the world that going to college would be rendered meaningless. I tried to talk some sense into him, and on the day before the last class I asked him once and for all, “are you going to study for the final exam?” He nervously replied, “uhh, I think I should.” Do you think he made the right decision?
THE EFFECT OF TECHNOLOGY ON CLASSROOM LEARNING AND ATTENTION: WHAT ROLE SHOULD IT TAKE IN THE CLASSROOM?
Invited Post by Lisa Lawter, Ph.D. and Marshall Andrew Glenn, Ph.D.
Technology has invaded the American classroom, competing ever more aggressively against traditional pedagogical practices (practices largely tethered to the time-limited artifact of a rural-agrarian school year) in an effort to increase the efficiency of learning outcomes. Furthermore, these expectations are now ensconced in “core curricula” standards. Educators are faced with the perennial challenge of how to provide the depth, breadth and appreciation of subject matter that make for an educated and informed public.
While many educators extol the benefits that technology brings to teaching, it also has its “distractors”: technology is getting bad press about its deleterious effects on the attention system, and some teachers complain that the “depth and breadth” of learning are being compromised.
In order to better understand the construct of attention, it is instructive to draw our attention to the neuropsychological components of attention and the underlying brain structures likely associated with each. According to Mirsky and Duncan (2001), attention is a multifaceted system implicating several specialized brain structures. The encode element, with supporting brain structures of the amygdala and hippocampus, mediates the brain’s capacity to hold onto information briefly and perform some mental operation on it, i.e., working memory. The focus/execute element, implicating the inferior parietal lobule, superior temporal gyrus and some structures of the corpus striatum, allows for focusing on a stimulus in the midst of distracting stimuli while executing a quick response. The shift element, with supporting brain structures of the dorsolateral prefrontal cortex and anterior cingulate gyrus, allows for shifting attention from one stimulus to another. The sustain element, with supporting structures of the brainstem, reticular activating system (RAS) and the midline of the thalamus, mediates sustained focus, or vigilance. Finally, the stabilize element, supported by the other attention systems but with exact structures unknown, mediates the consistency of responding to a “target” stimulus. It is clear that our attentional system is complex, multifaceted and influenced by both biological and environmental factors.
Of course, it is important to keep in mind that the biological governance of our attention system is influenced, i.e., sculpted, by a rapidly changing post-modern world, the effects of which have not gone unnoticed. For example, Dimitri Christakis et al. (2004) studied the TV viewing time of 1,300 children, ages 1 to 3 years, as rated by their mothers on a behavior rating scale, and later evaluated the children’s attention and behaviors at age 7. Children whose mothers rated them as frequent TV viewers tended to score in the highest 10% for problems in attention, concentration, impulsiveness and behavioral control. Moreover, for every additional hour of TV viewing, the chances of experiencing attention problems increased by 10%. This preliminary study suggested an association between early TV exposure and attention problems.
Computers were once huge machines in the basement; now they are in our pockets. School districts are stretching budgets thin to fill their schools with the latest technology and super software. The average classroom has at least one desktop computer and a class set of laptops, iPads are appearing, and a SmartBoard is now standard in classes from Pre-Kindergarten on. New technology hits the market daily. The equipment is simple to learn and the possibilities are limitless. Answers to all questions are just a click away. This all sounds good, but what do teachers think about educational technology? What do parents think? And most important, what do students think about technology in the classroom?
Teachers say: According to a New York Times article entitled “Technology Changing How Students Learn, Teachers Say”, teachers report that students have spent more time with screens than they have spent in school. In the article, teachers were concerned about a decline in students’ abilities to analyze and think deeply about a topic. Shortened attention spans were also listed as an issue teachers feel is caused by digital technologies. In this article, Dr. Christakis remarked that overreliance on technology “makes reality by comparison uninteresting.” Schools often use computer programs to assist students who need remediation; teachers are saying that what these students really need is feedback from an actual teacher and quality instruction, not a bell that goes off when you get the answer correct on the screen. Teachers do credit technology with improving research skills. At this juncture, results are equivocal and more research is needed. But one cannot help but sympathize with teachers frustrated by ever-rising expectations that must be met in a media-filled, sound-bite, YouTube world of attention-challenged students.
Parents say: Parents are worried about safety. The Internet has the potential to expose children and youth to inappropriate information. One click of the mouse and children are on websites that have questionable content and may be unsuitable for their age. Some parents grew up before the age of this technology, so they must spend time learning how to use it; often, children are more technologically savvy than their parents. Some parents of children with disabilities embrace technology: assistive technology devices have made it possible for their children to communicate and to participate in school and in the community, which would not have been possible before.
The U.S. Department of Education is concerned about how schools are using technology. The LEAD Commission, a public-private commission created by U.S. Secretary of Education Arne Duncan, is charged with the task of crafting a blueprint for better use of educational technology. Teachers and parents are being surveyed. But it seems the crucial component of technology in classrooms is the students themselves. What do students think about the way technology is used in classrooms? The field of education needs to hear from students. For example:
1. What do students think is the best use of technology in schools?
2. What do students think the roles of computers should be in the classroom?
3. What do students think about computers being used as tutors?
4. Do students want more time with the teacher or is the computer instruction enough?
5. What is a good use of the internet in classrooms?
Christakis, D. A., Zimmerman, F. J., DiGiuseppe, D. L., & McCarty, C. A. (2004). Early television exposure and subsequent attentional problems in children. Pediatrics, 113(4), 708-713. Retrieved from http://pediatrics.aappublications.org/content/113/4/708.full
Mirsky, A. F., & Duncan, C. C. (2001). A nosology of disorders of attention. Annals of the New York Academy of Sciences, 931, 17-32. doi:10.1111/j.1749-6632.2001.tb05771.x
Richtel, M. (2012, November 1). Technology changing how students learn, teachers say. The New York Times.
Marshall Andrew Glenn, Ph.D. is an Assistant Professor in the Applied Behavioral Studies program at Oklahoma City University. He holds a Diplomate from the American Board of School Neuropsychology and also serves as an Examiner.
Lisa Lawter, Ph.D. is an Assistant Professor in the Department of Education at Oklahoma City University. Her focus area is special education.
Invited Post by Dennis Jowaisas, Ph.D.
One way to solve this problem is to decide that evil behavior results from possession of a person by evil “spirits”, whatever that may mean in a particular culture’s belief system. Another way is to decide that the people who behave in an evil way, always culturally contextualized, are evil, or bad, in and of themselves. “They should know better.” Therefore, they are responsible. But even the possession theory of evil may hold the person responsible: the possession results from some moral flaw or overt or covert misbehavior. Holding the person responsible makes it easy to deal with them: we punish evil. Exactly how we punish depends upon the culture.
At the outset I want to acknowledge that modern psychology and neuroscience have shown that some folks are possessed! They possess a nervous system that is dramatically different in function from the nervous system of the average citizen. Some persons with a history of violence and difficulty in complying with basic social and legal rules have damage to parts of the brain that govern or influence exactly those kinds of behavior. We know enough about brain-behavior interactions to be sure about this. Brain disorders that involve the limbic system, one of the primary systems regulating emotion and memory, reliably result in poorly regulated social behavior and violent behavior. This is particularly true of damage to the septal, hippocampal, and amygdala areas of the limbic system. Damage to, or malfunctioning of, prefrontal areas of the cortex – areas that exert executive control over emotional expression and decision-making – results in persons who make consistently poor judgments about how to behave in various situations. They also exhibit hasty or impulsive decisions that seem to ignore obvious outcomes or consequences. The orbitofrontal cortex seems particularly important in this behavior. All of these limbic and prefrontal brain parts are interconnected, and emotional expression requires an intact and “normally” active brain system.
Some studies show that persons diagnosed as having antisocial personality disorder (APD) because of their cruel conduct and/or criminal behavior have relatively unresponsive limbic systems and orbitofrontal cortex. Consequently they are less “fearful” of social consequences for aberrant behavior. In essence, they are difficult to “socialize” through the typical mechanisms of punishment, guilt and shame. To influence behavior in these ways requires a conventionally reactive limbic, hypothalamic and prefrontal cortex arousal system.
In the previous syndrome of APD we see evidence of the “bad chemicals” that bedeviled Dwayne Hoover, Vonnegut’s protagonist in “Breakfast of Champions”. The brain depends on a very delicate balance of a dozen or more neurotransmitter chemicals for proper functioning. We know now that most of the so-called psychoses of 40 years ago are disorders of neurotransmitter balance and the prevalent treatments are drugs that selectively increase or decrease the concentrations of these chemicals in the various subsystems of the brain.
So, in this limited, biological sense we could say that evil resides within the person. But I don’t think that is what most folks mean when they claim the inner person as the source of evil. I believe those same folks will not be comfortable with the next consideration: evil lies outside the person, not in the form of some malevolent devilish source, but in the environment’s impact upon the person. In psychology we traditionally label this a behaviorist thesis but over the last three decades many social psychologists have focused on societal and cultural variables in order to understand why we act as we do.
Our first hint that the environment can have such powerful effects on our behavior comes from some old studies of honesty. Basically, what we found was that most students will not cheat on a task unless the outcome is important and the chances of being caught are slim. When those two variables were manipulated so that the outcome really mattered and there was little chance of being discovered cheating, almost everyone cheated. OK, you say, but that is just psychology research in an artificial setting, not the “real” world. And you would have a point, except ….
There are lots of other studies that show how easily people are influenced by circumstances. Two classic examples are the Solomon Asch line-judgment studies and the Stanley Milgram obedience investigations. You may know about them, and the details are readily available online and in introductory psychology texts. Here are the brief summaries. Asch showed that one third of all students would conform to erroneous judgments of the length of a line, even when the comparison line was twice as long as the target line. All it took was three students, in cahoots with the investigator, to make these wild judgments prior to the uninformed true participant. Now, seriously, how do we account for these results except by social influence, i.e., conformity? There is no way the target person could misperceive the length. In fact, some students claimed the lines “really were the same length” instead of accepting the explanation that they were conforming. How powerful is pressure to conform, eh? Remember, a third of students were so easily influenced they defied the evidence of their eyes, though not all rejected the conformity explanation when debriefed. Asch also showed the conditions under which participants would resist conformity. Social situations dramatically affect our most basic judgments.
OK, I agree that the situation was rather artificial and the judgment wasn’t that important. But how about this study? A group of college students was asked to help train another group of students from a nearby college by collectively shocking them when they made mistakes on a task administered via computer. Of course, no actual shocks were delivered. In fact, there were no other actual students; it was all computer simulated, a sham controlled by the investigator. There were three experimental conditions for the trainers, all consisting of a comment about the arriving students that they “overheard”.
Neutral: “The subjects from the other school are here”.
Humanized: “The subjects from the other school are here; they seem nice”.
Dehumanized: “The subjects from the other school are here; they seem like ‘animals’”.
As you probably anticipated by now, the most “shocks” in the subsequent “training” were delivered by the group hearing the dehumanizing statement, and the least by the humanized-statement group. Remember, the students were randomly assigned to groups, so the only difference was the single, simple comment they overheard.
So, is evil situational? Are good people made to do bad things by circumstances? Does the social environment really control so much of our behavior? Perhaps the most active and, consequently, most famous recent psychologist to promote this view is Philip Zimbardo. His thesis is that most folks are at least morally average, generally doing the right thing most of the time. However, Zimbardo claims, decades of research show that our behavior is very malleable. His argument is readily available in two places. The first is available at http://www.psichi.org/Pubs/Articles/Article_72.aspx, an article based on Zimbardo’s address to the national psychology honor society and originally titled “The Psychology of Evil: Seducing Good People Into Evil Deeds”.
The second source is his TED presentation, where his thesis is that the abuse by American soldiers at Abu Ghraib was not the result of a few “bad apples” but was caused by the “bad barrel”, the social environment arranged by the military command and CIA. (http://www.ted.com/talks/philip_zimbardo_on_the_psychology_of_evil.html )
In his TED talk, Zimbardo summarizes the evidence of classic experiments like Asch’s, his own Stanford Prison study, and Stanley Milgram’s obedience-to-authority research. I leave it to you to investigate this argument now that I have introduced you to the dilemma of the source of evil.
Selective Bibliography and Videos
Bandura, A., Underwood, B., & Fromson, M. E. (1975). Disinhibition of aggression through diffusion of responsibility and dehumanization of victims. Journal of Research in Personality, 9, 253-269.
Zimbardo, P. The psychology of evil. Eye on Psi Chi, Vol. 5, No. 1.
Editor’s Note: Major source for this presentation
Zimbardo, P. G. (1970). The human choice: Individuation, reason, and order versus deindividuation, impulse, and chaos. In W. J. Arnold & D. Levine (Eds.), Nebraska Symposium on Motivation: Vol. 17 (pp. 237-307). Lincoln: University of Nebraska Press
One result of Zimbardo’s commitment to “doing good”: a positive psychology model of shyness.
Details of the Prison experiment in slides
Zimbardo’s tour of the former “prison”
Asch’s experiment on conformity
NYT Science interview with Zimbardo
Zimbardo and Abu Ghraib
Podcast Part 2 of interview with PZ from Cardiff University
Podcast about Milgram Experiments on Obedience
Training soldiers to kill, but can they cope?
Invited Post by John Nail, Ph.D.
We need to first start with some definitions, as people often confuse the following.
The Greenhouse Effect is a well-established physical effect. Technically, the Greenhouse Effect is the absorption (and eventual conversion to heat) of infrared (IR) radiation by an atmospheric gas. Nobody with any scientific credibility denies the Greenhouse Effect.
Global Warming refers to any long-term increase in average Earth temperature. When discussing any possible Global Warming, we must distinguish between weather and climate.
Weather refers to the conditions (air temperature, atmospheric humidity, atmospheric pressure, wind direction and speed) present at a particular time.
Climate refers to long-term general weather conditions – several years, if not decades. Low air temperatures in January and high air temperatures in August are examples of weather. Years of low (or high) rainfall are an example of climate. Global warming is an example of a possible climate (long-term) change; people often confuse it with weather (day-to-day) changes.
Climate Change is a broad category that incorporates Global Warming and any other possible long-term changes, such as changes in precipitation or in the frequency of severe weather events (tornadoes, hurricanes, blizzards, etc.). Global Warming is a subset of Climate Change.
As mentioned above, the Greenhouse Effect is well-established scientific knowledge; it is a fact that a Greenhouse Effect operates on Earth. Earth’s average surface temperature is generally accepted to be somewhere close to 59 °F (15 °C). If there were no Greenhouse Effect on Earth, it is believed that the average surface temperature would be somewhere close to 0 °F (−18 °C).
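For readers who want to see where the −18 °C figure comes from, here is a minimal back-of-the-envelope sketch in Python (added for this write-up, not part of the original post). It uses the standard Stefan-Boltzmann energy-balance argument; the solar constant and albedo are commonly cited textbook values, assumed here for illustration.

```python
# A back-of-the-envelope sketch (added for this write-up, not part of the
# original post) of the "no Greenhouse Effect" temperature, using the
# standard Stefan-Boltzmann energy-balance argument. The solar constant and
# albedo below are commonly cited textbook values, assumed for illustration.

SOLAR_CONSTANT = 1370.0  # W/m^2 of sunlight arriving at Earth's distance
ALBEDO = 0.30            # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, averaged over the whole spherical surface (factor of 4)
absorbed_w_per_m2 = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# The temperature at which Earth radiates away exactly what it absorbs
temp_kelvin = (absorbed_w_per_m2 / SIGMA) ** 0.25
temp_celsius = temp_kelvin - 273.15

print(f"Without a Greenhouse Effect: {temp_celsius:.0f} degrees C")  # about -18
```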
The Enhanced Greenhouse Hypothesis:
Premise 1) Fossil fuels produce carbon dioxide (CO2) when they are used.
Premise 2) Carbon dioxide is a Greenhouse Gas (GHG).
Premise 3) Greenhouse Gases increase atmospheric temperature.
Conclusion: The atmospheric carbon dioxide from our use of fossil fuels would be expected to cause an increase in average global Earth temperature, also known as Global Warming.
The Enhanced Greenhouse Hypothesis is relatively simple and logical. The problem is that the reality is much more complex.
Question 1: Is carbon dioxide the only greenhouse gas?
Answer: No. Methane, nitrous oxide, and water vapor are other greenhouse gases.
Question 2: How much of the greenhouse effect is due to carbon dioxide?
Answer: Carbon dioxide has been estimated to produce 26% of the Greenhouse Effect under clear skies (source: Kiehl, J. T., & Trenberth, K. E. (1997). Earth’s annual global mean energy budget. Bulletin of the American Meteorological Society, 78(2), 197-208).
Question 3: Which greenhouse gas is the largest contributor to the Greenhouse Effect?
Answer: In clear skies, water vapor produces 60% of the Greenhouse Effect (source: see above).
Question 4: Why do the answers to Questions 2 and 3 specify clear skies?
Answer: Clouds complicate the Greenhouse Effect. During the day, clouds reflect sunlight from Earth’s atmosphere back into space. Anyone who has ever seen a planet (Venus, Mars, etc.) in the night sky has seen sunlight that was reflected back into space by either clouds in the planet’s atmosphere or the planet’s surface. Daytime clouds reflect sunlight back into space and thus have a cooling effect. Nighttime clouds have a warming effect on the surface and a cooling effect in the upper atmosphere due to the Greenhouse Effect of the water vapor in the cloud. As we will see, much of the argument between the various scientific camps (‘warmists’, ‘luke warmers’ and ‘flat liners’) involves the role of clouds in atmospheric temperature.
Question 5: Nature releases carbon dioxide into the atmosphere. Humans release carbon dioxide into the atmosphere. How much of atmospheric carbon dioxide release is due to humans? How much is due to nature?
Answer: The following are estimates and are not accurately measured numbers:
Nature: 210 billion tons per year (96% of total)
Humans: 9 billion tons per year (4% of total)
Source: Wikipedia – Carbon Cycle
Question 6: Atmospheric carbon dioxide levels are believed to have risen from 280 ppm before the Industrial Revolution to the current level of 395 ppm. If this rises to 500 ppm (roughly a 27% increase from 395 ppm), wouldn’t this increase carbon dioxide’s contribution to the greenhouse effect by roughly 27%?
Answer: No; however, this is another point of contention between the various camps.
Remember that the Greenhouse Effect involves the absorption of infrared energy. The Beer-Lambert equation (also called Beer’s law) tells us that the amount of electromagnetic energy (including infrared) absorbed increases logarithmically with the amount of the absorbing substance, not linearly. Some calculations:
Log (280) = 2.45 (Note: 280 ppm is the carbon dioxide level before the Industrial Revolution)
Log (395) = 2.60 (Notes: 1) 395 ppm is the current level; 2) 395 is a 41% increase over 280; 3) this 41% increase in atmospheric CO2 levels increases CO2’s absorption of IR radiation by about 6%)
Log (560) = 2.75 (Notes: 1) 560 ppm is double the pre-Industrial Revolution atmospheric carbon dioxide level; 2) the increase in CO2’s Greenhouse Effect is about 12.2% over the pre-Industrial Revolution amount)
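To make the arithmetic above easy to check, here is a short Python sketch (added for this write-up, not part of the original post). Note that it simply reproduces the post’s logarithmic model, in which the Greenhouse contribution is taken to scale with the logarithm of the CO2 level; it is not a radiative-transfer calculation.

```python
from math import log10

# Reproduces the post's logarithmic model of IR absorption (added for this
# write-up); it is not a full radiative-transfer calculation.

PRE_INDUSTRIAL = 280.0  # ppm, before the Industrial Revolution
CURRENT = 395.0         # ppm, the level cited in the post
DOUBLED = 560.0         # ppm, double the pre-industrial level

def absorption_increase_pct(new_ppm, base_ppm=PRE_INDUSTRIAL):
    """Percent increase in IR absorption under the logarithmic model."""
    return (log10(new_ppm) / log10(base_ppm) - 1) * 100

print(f"log(280) = {log10(PRE_INDUSTRIAL):.2f}")  # 2.45
print(f"log(395) = {log10(CURRENT):.2f}")         # 2.60
print(f"log(560) = {log10(DOUBLED):.2f}")         # 2.75

print(f"280 -> 395 ppm: +{absorption_increase_pct(CURRENT):.0f}%")  # ~6%
# ~12.3% with unrounded logs; the post's 12.2% comes from the rounded values
print(f"280 -> 560 ppm: +{absorption_increase_pct(DOUBLED):.1f}%")
```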
However, the ‘warming’ model assumes that the increase in the Greenhouse Effect due to increased atmospheric CO2 is 3 times higher than is calculated by the Beer-Lambert equation. The reasoning is as follows:
1) As atmospheric CO2 levels increase, the amount of the Greenhouse Effect due to CO2 increases. (Note: nobody disputes this).
2) As the Greenhouse Effect increases (due to increases in atmospheric levels of CO2 and other greenhouse gases), global temperature will rise. (Note: this is reasonable, but not necessarily universally accepted).
3) As global temperatures increase, the amount of water that evaporates from surface water (lakes, oceans, etc.) increases. (Note: nobody disputes this).
4) The extra water vapor produced by the increase in temperature from the increased greenhouse effect due to the increased atmospheric carbon dioxide levels will increase the overall greenhouse effect, which will lead to higher global temperatures. (Note: this is the key issue in the scientific dispute).
The skeptics’ argument as to why this model isn’t realistic:
Now we are back to the cloud issue. Skeptics argue that the higher levels of water vapor will produce more clouds, which will reduce the amount of sunlight that reaches Earth’s surface (due to the increased reflection of sunlight by the clouds), thus offsetting the enhanced greenhouse effect due to the increased water vapor.
Question 7: We know that the glaciers are melting; doesn’t this fact prove that climate change has occurred?
Answer: We know that a major climate change occurred about 10,000 to 12,000 years ago, when the last ice age ended. Glaciers have been melting since this global warming event began. We know that during the last ice age, virtually all non-tropical land masses were covered in glaciers.
One of the arguments between the ‘warmers’ and the ‘skeptics’ is whether or not climate change events such as the Medieval Climate Optimum and the Little Ice Age actually occurred. If the Little Ice Age did in fact occur, as historical records indicate, then presumably natural warming occurred during the 18th century that ended it. There is some evidence that glaciers advanced during the Little Ice Age and that this advance was reversed when the Little Ice Age ended.
Question 8: AARGH! At least we know how temperature changed during the 20th century, don’t we?
Answer: This is another point of contention. The problem is how ground-based temperatures were (and still are) measured. The issues are where the temperatures are measured, how they are measured, and how much the measurements are influenced by nearby heat sources such as urbanization, pavement and possibly even cooking grills.
Skeptics argue that the only reliable temperature record is the one that uses data from weather satellites, as these avoid the problems discussed above. Warmists tend to discount the satellite temperature record. The latest satellite-based temperature record is shown below; note that the data begin in 1979, when the first such satellite was launched. The ‘zero’ line is the average of the data from 1981 to 2010; when the line is above zero, the temperature is warmer than this 30-year average; when it is below, the temperature is cooler than this 30-year average.
Note that although July 2012 may have been the warmest July in the US during the period of modern temperature records, globally, July 2012 was only 0.28 °C warmer than the average global temperature during 1981 to 2010.
Models, Temperature Predictions and Realities
Warmists argue that their Global Climate (computer) models accurately describe Earth’s atmosphere and that the models’ predictions of future temperatures are accurate. Skeptics argue that the models do not accurately describe Earth’s atmosphere, particularly with respect to clouds, and that past predictions were not accurate.
DR. NAIL is Chair of the Chemistry Department at Oklahoma City University
A Tale of Technology, Risks, and Unintended Consequences
Invited Post by John Nail, Ph.D.
Modern pharmaceutical drugs, along with other medical technologies, have doubled life expectancies during the past 100 years. There is a real possibility that the news media, politicians, lawyers and gullible jurors will put an end to these advances. The news media appears to enjoy reports of ‘dangerous products’, which politicians embrace because it gives them a way of diverting attention from their own lack of competence. Trial lawyers earn their living by convincing gullible jurors that the ‘big companies’ are to blame for anything adverse that occurs when someone uses their product. Often, the result of the media and political orgy is that useful products are removed from the market. This is the story of a drug that was removed for safety reasons and the possible unintended consequences of its removal. Before we proceed, we should note that 1) ALL technologies have unintended adverse consequences, 2) it is not possible to know what these consequences will be until large numbers of people start using (and abusing) the product, and 3) decisions have unintended consequences, too, often making a situation worse when trying to make it better.
Vioxx (generic name rofecoxib) was a prescription drug for the treatment of pain from osteoarthritis (the type of arthritis that occurs when bones rub against each other). Vioxx was approved for use by the US Food and Drug Administration on May 20, 1999; it was discontinued on September 30, 2004, due to concerns regarding cardiovascular problems (heart attacks and strokes) in patients who used the drug.
A considerable amount of testing is required before the FDA will approve the use of a new prescription drug. One of the clinical trials involving Vioxx was the ‘VIGOR’ study – a double-blind test in which one group of participants (the study group) was given Vioxx and the other (the control group) was given naproxen, a non-prescription pain reliever. This trial found that Vioxx-group members who had pre-existing heart attack risk factors were four times more likely to have a heart attack than were naproxen-group participants with pre-existing heart attack risk factors. One interpretation of this result is that Vioxx causes heart attacks in people with heart disease risk factors; the other interpretation is that naproxen prevents heart attacks in people with heart disease risk factors. We know that aspirin prevents heart attacks, so it is reasonable to assume that naproxen does as well. Interestingly, there was no difference between the two groups in deaths from heart attacks, nor was there a difference in heart attack rates between the two groups among people who did not have heart attack risk factors. Thus, if a person has heart attack risk factors and should be taking a ‘baby’ aspirin each day, that person’s chance of a heart attack is four times higher if the person is taking Vioxx instead of naproxen.
APPROVe was another Vioxx clinical study, in which the ‘control’ group was given a placebo (sugar pill). This study found that, after 18 months, participants who were given Vioxx were almost twice as likely to have a heart attack as participants who were given the placebo. However, heart attack deaths (mortalities) were equivalent between the two groups. Thus, as in the VIGOR study, the rate of heart attacks was greater in the Vioxx group, but the rate of heart attack deaths was the same in both groups. The APPROVe results, and the resulting media uproar, led to Vioxx being withdrawn from sale.
Bextra and Celebrex were Vioxx’s competitors. Bextra was withdrawn due to increased heart attack and stroke risks in patients who were taking it while recovering from heart surgery. Celebrex is still being marketed; however, it carries a ‘black box warning’ that it should be used only as a last resort in patients who have heart disease or a risk of developing heart disease.
One of the arguments for why Vioxx and Bextra should no longer be sold was that “osteoarthritis patients have other options for relieving their pain” – these other options being over-the-counter pain relievers (aspirin, naproxen, acetaminophen, ibuprofen, etc.) and opioids such as codeine, Demerol and morphine. While opioids are among the most effective of all pain relievers, they have unintended effects such as narcosis (sleep inducement) and physical and psychological addiction. Rush Limbaugh allegedly became psychologically addicted to opioids.
Recently, a study determined that switching elderly osteoarthritis patients from Vioxx or Bextra to opioids resulted in a fourfold increase in falls and broken bones. Here is a quote from an article in The Journal of Higher Education that discussed this increase in falls and broken bones:
“Somebody should have thought more carefully about the elderly before making these recommendations” (i.e., switching elderly osteoarthritis patients from drugs such as Vioxx to drugs such as codeine to control their pain), says Bruce N. Cronstein, a professor of medicine at New York University’s Langone Medical Center. “Falls in an elderly population can be very dangerous, leading to long hospitalizations and even death.” … “We don’t have wonderful alternatives for treating chronic pain.” Long treatment with aspirin or ibuprofen, for instance, often irritates and damages the digestive system. … Falling down seems like one of the most obvious adverse effects, says Dr. Cronstein. “That’s not rocket science,” he notes. “The elderly are more frail. They have a host of factors that could lead to falls. If you add something that makes you a little unsteady, it increases the risk.”
Thus, the choices for people with osteoarthritis are (and were): over-the-counter drugs such as aspirin and ibuprofen, which can cause digestive damage with long-term use; prescription narcotics, which increase a person’s risk of falling fourfold; or prescription drugs such as Vioxx, which, in people with heart attack risk factors, increased the risk of heart attacks (though not the risk of dying from one). The media and political storm that removed Vioxx and Bextra from the market appears to have resulted in some combination of more pain for osteoarthritis sufferers who no longer treat their pain, digestive system damage from the over-the-counter alternatives, and increased use of opioid drugs, which has produced an increase in falls and broken bones.
Elderly patients also don’t have the best memories. This sometimes leads to accidental overdoses of opioids (and other drugs) when a patient forgets that they have already taken their pain pill for the day. It is also known that people often develop tolerances to opioid drugs, making them increasingly less effective.
I leave you with a comment posted in the discussion section of The Journal of Higher Education article:
“In addition to the danger of increased falls, the removal of Vioxx and Bextra from the market meant increased arthritis pain for millions of people (including me). The possibility of heart problems for some won out over the reality of pain for millions. I hope that not all medical decisions are made this way.” – John C.
Unfortunately, this is how medical decisions are made during a media-generated crisis. The next time there is a media uproar about ‘dangerous medical products’, remember that the unintended consequences of the product’s alternatives may be worse than the product’s own unintended consequences.
DR. NAIL is Chair of the Chemistry Department at Oklahoma City University
[*sing to the tune of The Witch is Dead]
Invited Post by John Nail, Ph.D.
On his September 14 show, Dr. Oz apparently claimed that apple juice contains large amounts of arsenic, making it unsafe for consumption, particularly for children. Previously, Dr. Oz has made claims about ‘unsafe’ amounts of contaminants in drinking water. While claims of ‘____ (fill in the blank) is dangerous; we must protect the children from ____ (fill in the blank)’ may make for good daytime television, these claims are lousy science. Unfortunately, the typical American combination of poor thinking skills, pathetically poor science understanding and the ‘evil big business’ news-story template makes many people susceptible to accepting the scientifically unsupported, sensational claims made by entertainers such as Dr. Oz. Physicians such as Dr. Oz, a heart surgeon, are rarely scientists.
Yes, chemists can detect arsenic in apple juice; we can also detect arsenic in drinking water (both tap and bottled) and in virtually every food, both ‘organic’ and conventional, in any supermarket, including Whole Foods. This ability to detect arsenic isn’t due to widespread arsenic contamination; it is due to the incredible technology that allows analytical chemists to detect increasingly minute levels of chemical substances. Nowadays, we can almost always find any of the ‘bad’ substances in almost every food. The real issue is that minute amounts of bad substances are not necessarily harmful.
Decades ago, (real) scientists argued over the competing ‘Linear’ vs. ‘Threshold’ hypotheses regarding exposure to toxic substances and radiation. Today, now that we can find almost every naturally occurring toxic substance in everything, we’ve learned that the body has natural mechanisms for removing toxic substances and repairing the damage they cause. The ‘Threshold’ model – the assumption that permanent damage occurs only when the exposure happens faster than the body can fix things – is now widely supported. The ‘Linear’ model, which assumes that any amount of exposure to a bad substance causes some permanent damage, is supported only by government regulatory agencies, activist scientists and scientists whose careers have been built on researching the effects of very low exposures. We’ve known for five centuries that ‘the dose makes the poison’, which means that everything (including water) is toxic in high enough amounts and nontoxic in low enough amounts.
Small amounts of arsenic (and other ‘bad’ substances, including radiation) are in our food and water because nature put them there. As an example, much of the soil in central Oklahoma contains naturally high amounts of arsenic.
Consider someone who lives in central Oklahoma and has high arsenic soil in their backyard. If an apple tree grows in the high arsenic soil, the apples produced by the tree will contain high amounts of arsenic, even if the tree is grown using ‘organic’ techniques (only chemicals produced by nature are used for fertilizer or pest control). The person who consumes the apples from the tree will be exposed to arsenic.
The Federal Government (EPA) has set a limit of no more than 10 ppb (parts per billion) of arsenic for drinking water and mandates that the water in every municipal water distribution system be tested at least once every calendar quarter for ‘bad’ substances such as arsenic and chloroform. Operationally, no water distribution system would let water with an arsenic level above 9 ppb into the system; this ensures that the water stays below the 10 ppb limit. One part per billion of arsenic in water (or apple juice) is 0.000001 gram of arsenic per liter (effectively one quart) of water or apple juice. Water at the federally allowable limit of 10 ppb of arsenic contains 0.00001 gram of arsenic in each liter.
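The ppb-to-grams conversion is simple enough to spell out in a few lines of Python; this sketch is added for illustration (it is not from the original post) and assumes one liter of water or juice weighs about 1000 grams.

```python
# A small sketch (added for illustration, not from the original post) of the
# ppb-to-grams conversion used above. It assumes one liter of water or apple
# juice weighs about 1000 grams.

def grams_per_liter(ppb):
    """Grams of a substance in one liter of water at a given ppb level."""
    liter_mass_g = 1000.0            # assumed mass of 1 L of water or juice
    return liter_mass_g * ppb / 1e9  # 1 ppb = 1 part per billion by mass

print(grams_per_liter(1))   # 1e-06 g (one microgram) per liter
print(grams_per_liter(10))  # 1e-05 g per liter, the EPA drinking-water limit
print(grams_per_liter(23))  # 2.3e-05 g per liter, the level Dr. Oz reported
```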
Dr. Oz claimed to have found as much as 23 ppb of arsenic in apple juice. The Federal Government (USDA and FDA) also monitors the amounts of ‘bad’ substances such as arsenic in food. While the USDA and FDA don’t set absolute limits as the EPA does, they would be ‘concerned’ about apple juice containing 23 ppb of arsenic; when the FDA tests food products, it rarely finds arsenic levels higher than 13 ppb (http://www.medicinenet.com/script/main/art.asp?articlekey=149396 ).
The USDA classifies a ‘serving’ of apple juice as one cup. Consider a child who drinks four cups of apple juice every day for ten years, and suppose that all of the apple juice she drinks contains 23 ppb of arsenic. During those ten years, she would ingest a total of 0.083 grams of arsenic and 1,708,200 calories from the apple juice; 14,600 juice boxes would go to the landfill.
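Those totals can be re-derived with a few lines of arithmetic. The sketch below is added for this write-up; the cup size and calories-per-cup figures are assumptions back-solved to land near the post’s numbers, not values given in the original.

```python
# A sketch (added for this write-up) re-deriving the ten-year totals above.
# The cup size and calories-per-cup are assumptions back-solved to land near
# the post's figures; they are not values given in the original.

CUPS_PER_DAY = 4
DAYS = 10 * 365          # ten years, ignoring leap days
LITERS_PER_CUP = 0.25    # assumed metric cup; an 8-oz cup is ~0.237 L
ARSENIC_PPB = 23
CALORIES_PER_CUP = 117   # assumed calorie count for a cup of apple juice

liters = CUPS_PER_DAY * DAYS * LITERS_PER_CUP
arsenic_g = liters * 1000 * ARSENIC_PPB / 1e9  # 1 L of juice ~ 1000 g
calories = CUPS_PER_DAY * DAYS * CALORIES_PER_CUP
juice_boxes = CUPS_PER_DAY * DAYS              # one box per cup

print(f"Arsenic ingested: {arsenic_g:.3f} g")  # 0.084 g (the post says 0.083)
print(f"Calories: {calories:,}")               # 1,708,200
print(f"Juice boxes: {juice_boxes:,}")         # 14,600
```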
Some of our ancestors had to hunt wild animals and gather wild plants to keep from starving. Occasionally, instead of eating the wild animals, the wild animals ate them. Other ancestors had the relative comfort of being able to grow some of their own food; if the crop failed, they often didn’t eat. Today, food is more abundant, safer and less expensive than it has ever been in the history of mankind.
It appears that some people need something to worry about, and Dr. Oz, along with other entertainers, makes a living by making mountains out of anthills.
DR. NAIL is Chair of the Chemistry Department at Oklahoma City University
May 23, 2010
Scientific American Columnist Martin Gardner, Prolific Math And Science Writer, Dies At 95
NORMAN, Okla. (AP) – Prolific mathematics and science writer Martin Gardner, known for popularizing recreational mathematics and debunking paranormal claims, died Saturday. He was 95.
Gardner died Saturday after a brief illness at Norman Regional Hospital, said his son James Gardner. He had been living at an assisted living facility in Norman.
Martin Gardner was born in 1914 in Tulsa, Okla., and earned a bachelor’s degree in philosophy at the University of Chicago.
He became a freelance writer, and in the 1950s wrote features and stories for several children’s magazines. His creation of paper-folding puzzles led to his publication in Scientific American magazine, where he wrote his “Mathematical Games” column for 25 years.
The column introduced the public to puzzles and concepts such as fractals and Chinese tangram puzzles, as well as the work of artist M.C. Escher.
Allyn Jackson, deputy editor of Notices, a journal of the American Mathematical Society, wrote in 2005 that Gardner “opened the eyes of the general public to the beauty and fascination of mathematics and inspired many to go on to make the subject their life’s work.”
Jackson said Gardner’s “crystalline prose, always enlightening, never pedantic, set a new standard for high quality mathematical popularization.”
The mathematics society awarded him its Steele Prize for Mathematical Exposition in 1987 for his work on math, particularly his Scientific American column.
“He was a renaissance man who built new ideas through words, numbers and puzzles,” his son, a professor of special education at the University of Oklahoma, told The Associated Press.
Gardner also became known as a skeptic of the paranormal and wrote columns for Skeptical Inquirer magazine. He wrote works debunking public figures such as psychic Uri Geller, who gained fame for claiming to bend spoons with his mind.
Most recently he wrote a feature, published in Skeptical Inquirer’s March/April issue, on Oprah Winfrey’s New Age interests.
Former magician James Randi, now a writer and investigator of paranormal claims, paid tribute to Gardner on his website Saturday, calling his colleague and longtime friend “a very bright spot in my firmament.”
He ended his Scientific American column in 1981 and retired to Hendersonville, N.C. Gardner continued to write, and in 2002 moved to Norman, where his son lives.
Gardner wrote more than 50 books.
Gardner was preceded in death by his wife, Charlotte. Besides James Gardner, he is survived by another son, Tom, of Asheville, N.C.
Copyright 2010 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
[I'm glad my nephew, Cole, was able to meet this amazing person] – thank you, Amy…..
On Christmas day, a terrorist attempted to explode 80 grams of the explosive PETN on an airliner while it was landing at the Detroit airport; the explosive had been hidden in the terrorist’s underwear – specifically, the crotch of his underpants. SEE ABC NEWS STORY.
Not surprisingly, this incident has generated a considerable amount of political debate during this holiday week, along with new rules for passenger behavior during airline flights. One aspect that has not been discussed is ‘could 80 grams of PETN in a person’s crotch have caused an airliner crash?’
Before answering this question, I should mention that while I am not an explosives expert, I do teach Chemistry and a Weapons of Mass Destruction course that includes a very basic discussion of explosives.
One issue is that 80 grams (about 45 cc, or 1.6 ounces by volume) of PETN isn’t a lot of explosive. As demonstrated by the show Mythbusters, considerably more than 1.6 (volume) ounces of a powerful explosive (such as PETN) is required for a highly damaging explosion (http://www.youtube.com/watch?v=VKZZLw5kTJk); the 3 cc of high explosive (possibly PETN) in the video blew a grapefruit-sized hole in a foam board. The terrorist had 45 cc of PETN, which would have blown a larger hole in a foam board.
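For the curious, the mass-to-volume conversion is straightforward. The sketch below is added for illustration; the density used is the commonly cited value for PETN, not a figure from the original post.

```python
# A quick check (added for illustration) of the mass-to-volume conversion
# above. The density is the commonly cited value for PETN; it is not a figure
# from the original post.

PETN_DENSITY_G_PER_CC = 1.77  # commonly cited density of solid PETN
MASS_G = 80.0

volume_cc = MASS_G / PETN_DENSITY_G_PER_CC
volume_fl_oz = volume_cc / 29.57  # one US fluid ounce is about 29.57 cc

print(f"{volume_cc:.0f} cc")        # ~45 cc
print(f"{volume_fl_oz:.1f} fl oz")  # ~1.5 fl oz (the post rounds to 1.6)
```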
Another issue is the location of the PETN – specifically, that it was inside the crotch area of the terrorist’s clothing. Had the PETN exploded, it would have seriously damaged his legs and crotch area; the only possible fatality would have been the terrorist himself, as his body would have absorbed the blast and, ironically, protected the other passengers.
Could the terrorist have caused the plane to crash? The answer is: highly unlikely, as 80 grams of PETN does not have enough explosive energy to seriously damage the plane’s fuselage or controls.
At worst, the terrorist could have blown a hole in the fuselage, which would have caused the plane to depressurize. It is highly unlikely that this would have led to any fatalities, as the passengers would have had the emergency breathing masks. However, for 80 grams of PETN in a person’s crotch to blow a hole in the fuselage, the terrorist would have needed to be in a window seat with his crotch pressed against the fuselage during the explosion. Even then, the explosive shockwave would have taken the ‘path of least resistance’, which is through the terrorist’s body, not the fuselage wall. Once again, the terrorist would have fared much worse than the passengers, crew or plane.
While we often want absolute safety, we need to admit that this is impossible. It is unreasonable and impractical to find every hidden small packet of a substance that may be incorporated into a person’s clothing. The only way to ensure that another underwear bomb incident cannot occur would be to have all airline passengers remove all of their clothing prior to boarding; presumably, everyone would either fly naked or would be issued secure clothing for the flight. This wouldn’t ensure absolute safety either as a terrorist could always have an explosive surgically inserted in his or her body.
One problem with terrorism is that while we can try to guess at every method that might be used to attack us, we can never know what the terrorists have conceived that we haven’t. Consequently, they act and we react. As with all risk-management issues, we must keep things in perspective; time and time again, we become overly concerned with unlikely risks and ignore those that are more likely to harm us.
JOHN NAIL, Ph.D., is Chair of the Chemistry Department at Oklahoma City University
The NASA Lunar Reconnaissance Orbiter (LRO) was launched on June 18, 2009 to address landing-site certification and polar illumination, among other things. Its camera images reveal the Apollo 14 lander tracks as well as the astronauts’ footprints – damaging a longstanding belief (by some) that this lunar mission was a hoax. These images are not crystal clear, but they are clear enough for experts to conclude that these markings are from Apollo 14. So my question for conspiracy theorists is this: do you still maintain the Apollo 14 mission was a hoax? Do you still maintain the craft didn’t land on the Moon’s surface? Do you still maintain that the astronauts didn’t set foot on the lunar soil?