Psychology
Remember, reactions are the amount of material that will fit on a one-page, double-spaced, typed document. Part of the exercise is for you to get right to your point and justify it briefly. Sources are open, but I’d prefer some element of empirical research. So, if you see something in the newspaper and want to react to it, try to track down the original research. Or find some research that supports or refutes the information in the newspaper and discuss that. Show me that you’re thinking, include some cognitive stuff, and read some of the primary literature, and I will be pleased.
What will get me excited about a reaction paper:
· React based on something else you’ve learned in the class (“when we discussed language, you said…but this article said…” or “here’s another example of…”). Bring things together in a new and interesting way.
· React based on something you know about your area of psychology that relates to Cognitive Psychology.
· How does this idea lead to new research questions?
· Make me say “this person is insane, but that’s a really cool idea.” Explore absurd places to take the research.
What won’t get me excited about a reaction paper:
· “This article was really easy/hard to read/understand.”
· A personal anecdote; overturning data with an anecdote
· “There were only five participants in the study, which seems like too few.” I don’t want a showboating critique; talk to me about ideas.
· Two pages of summary followed by “I really liked this article.”
· A “reflection.” In fact, calling it a reflection report will piss me off.
Cog. Psych WWR #1
In class last week the Chinese Room argument was brought up in reference to how symbols are grounded. I had never heard of the Chinese Room or John Searle, so I wrote it down to investigate after class. Further into the discussion, the analogy of trying to define symbols with more symbols struck an immediate chord with me. My six-year-old daughter had recently finished reading a beginner’s chapter book on Helen Keller, and the story tells of the momentous moment when Helen finally understood that the hand movements she was making were a symbol for the water she was feeling. In the book Helen poignantly describes this as her soul’s birthday. My little one had a lot of questions about what that meant, and I fumblingly tried to convey the meaning. Finally I asked her to close her eyes and cover her ears tightly while I made movements in her hand. I think in some small way she was able to appreciate what it would be like to have to communicate that way.
In an effort to learn more about that moment, I pursued more information about the Chinese Room and Helen Keller. Indeed, I found an article published in Minds & Machines in 2006 titled “How Helen Keller used syntactic semantics to escape from a Chinese Room” by William J. Rapaport. He posits that computers can learn natural language through syntactic semantics, which he says is how Helen Keller came to know language. In a dictionary analogy, Rapaport clearly identifies the circular process of defining one word with another: at some point you must already have the meaning of some word to get out of this loop, which he calls a closed loop. Our minds work in much the same way, per Rapaport: “More significantly, our brain is just such a closed system: all information that we get from the external world, along with all of our thoughts, concepts, etc., is represented in a single system of neuron firings. Any description of that real neural network, by itself, is a purely syntactic one” (Rapaport, 2006).
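To make that closed loop concrete for myself, I put together a tiny toy dictionary in Python (my own illustration, nothing from Rapaport’s paper). Every word is defined only by other words in the same dictionary, so following definitions just produces more symbols and never reaches the water itself:

```python
# Toy illustration (mine, not Rapaport's): a dictionary that defines every
# word only in terms of other words in the same dictionary. Purely
# syntactic lookup never "bottoms out" in meaning; it just cycles.
toy_dictionary = {
    "water": ["clear", "liquid"],
    "clear": ["transparent"],
    "transparent": ["clear"],
    "liquid": ["flowing", "substance"],
    "flowing": ["moving"],
    "moving": ["flowing"],
    "substance": ["material"],
    "material": ["substance"],
}

def chase_definitions(word, steps=8):
    """Follow definitions symbol to symbol; we never leave the dictionary."""
    trail = [word]
    for _ in range(steps):
        word = toy_dictionary[word][0]  # take the first defining symbol
        trail.append(word)
    return trail

print(chase_definitions("water"))
# ['water', 'clear', 'transparent', 'clear', 'transparent', ...]
```

However long the lookup runs, it only ever lands on another symbol; the grounding has to come from outside the system, which is exactly where Helen’s hand under the water pump comes in.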
Ultimately, Rapaport explains that Helen had some semantic correspondence because she had her own rudimentary signs and ways of communicating before her teacher Anne Sullivan arrived. While this article certainly helped me understand how Helen was able to make such an extraordinary leap in understanding, I was not able to see how this could be applied to computers. That could be due to my own lack of understanding rather than a fault in the author’s logic. There is another article out there titled “Helen Keller was not in a Chinese Room.” I intend to read that as well to see if I can get anything further on the subject.
Research
Why Can’t We Be More Idiographic in Our Research?
David H. Barlow (Center for Anxiety and Related Disorders, Boston University) and Matthew K. Nock (Harvard University)
Perspectives on Psychological Science, Volume 4, Number 1 (2009)
Address correspondence to David H. Barlow, Center for Anxiety and Related Disorders, Boston University, 648 Beacon Street, 6th Floor, Boston, MA 02215; e-mail: dhbarlow@bu.edu.
ABSTRACT—Most psychological scientists make inferences about the relations among variables of interest by comparing aggregated data from groups of individuals. Although this method is unarguably a useful one that will continue to yield scientific advances, important limitations exist regarding the efficiency and flexibility of such designs, as well as with the generality of obtained results. Idiographic research strategies, which focus on the intensive study of individual organisms over time, offer a proficient and flexible alternative to group comparison designs; however, they are rarely taught in graduate training programs and are seldom used by psychological scientists. We highlight some of the unique strengths of idiographic methods, such as single case experimental designs, and suggest that psychological science will progress most efficiently with an increased use of such methods in both laboratory and clinical settings.
Edward Tolman said to Gordon Allport, "I know I should be more idiographic in my research, but I just don't know how to be," to which Allport replied, "Let's learn!" (Allport, 1962, p. 414). This sentiment was based on the fact that, whether it's a laboratory rat or a patient in the clinic with a psychological disorder, it is the individual organism that is the principal unit of analysis in the science of psychology. The intensive study of the individual is associated with a hallowed tradition in scientific psychology. Indeed, the founders of experimental psychology, including Fechner, Wundt, Ebbinghaus, and Pavlov, studied individual organisms with scientific approaches that would be considered internally valid, and they strengthened these findings (and began to establish generality) through replication in other organisms (see Barlow, Nock, & Hersen, 2008).
This scientific strategy, which is fully capable of establishing causal relations among variables, came to be known as the idiographic approach. Gordon Allport, in his area of social psychology, argued eloquently that the science of psychology should attend to the uniqueness of the individual organism (Allport, 1962). Rooted deep in the structural school of psychology, this approach also was popular in more applied branches of psychology in the middle of the last century. Perhaps the biggest champion of an idiographic approach in clinical settings was Shapiro, who was advocating a scientific approach to the study of individuals with psychopathology as early as 1951 (e.g., Shapiro, 1961, 1966). The idiographic approach perhaps reached its zenith in psychological science with the work of B.F. Skinner. In a famous quote, Skinner (1966) noted: ". . . instead of studying a thousand rats for one hour each, or a hundred rats for ten hours each, the investigator is more likely to study one rat for a thousand hours" (p. 21). Thus, Skinner and his colleagues in the animal laboratories are largely credited with developing and refining an experimental idiographic approach that came to be known as the experimental analysis of behavior.
This idiographic approach represents a true scientific undertaking, as independent variables are manipulated in the context of carefully measured and repeatedly assessed dependent variables. This is in contrast to the alternative nomothetic experimental strategy, in which the researcher looks to assemble relatively large groups of individual organisms and, in the most straightforward application, examines the average response of the group to the introduction of some manipulation compared with the response to well-construed control conditions. The major differences between the idiographic and nomothetic traditions are, of course, approaches to intersubject variability and the generality of findings. As variability is often considerable among organisms, the task of any psychological scientist is to discover functional relations among independent variables over and above the welter of environmental and biological variables influencing the organism at any given point in time. A nomothetic approach makes an implicit assumption that much of this variability is intrinsic to the organism and uses sophisticated data analytic procedures to look for reliable effects over and above this "error." Significant effects are then assumed to be more or less generalizable based on the number of individuals included in the experimental group and the representativeness of the population of such individuals (i.e., the use of random sampling).
Of course, random sampling is seldom achieved in psychological research where, indeed, the goal is more often to strive for homogeneous samples in which the generality of findings can be very limited. Sidman (1960) made the following point a number of years ago when discussing approaches to variability:

The rationale for statistical immobilization of unwanted variables is based on the assumed random nature of such variables. In a large group of subjects, the reasoning goes, the uncontrolled factor will change the behavior of some subjects in one direction and will affect the remaining subjects in the opposite way. When the data are averaged over all the subjects, the effects of the uncontrolled variables are presumed to add algebraically to zero. The composite data are then regarded as though they were representative of one ideal subject who had never been exposed to the uncontrolled variables at all. (p. 162)
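As a minimal numerical sketch of the assumption Sidman describes (hypothetical numbers, not data from any study discussed here), suppose an uncontrolled subject variable pushes half of the subjects' responses up and the other half down by the same amount:

```python
import statistics

# Hypothetical responses of ten subjects to the same manipulation
# (arbitrary units). An uncontrolled subject variable makes half the
# subjects respond with +5 and half with -5; nobody responds with ~0.
responses = [+5, +5, +5, +5, +5, -5, -5, -5, -5, -5]

group_mean = statistics.mean(responses)
print(f"Group mean effect: {group_mean}")  # 0 -- looks like "no effect"

# An idiographic look at the same data tells a different story:
for subject, effect in enumerate(responses, start=1):
    print(f"Subject {subject}: effect = {effect:+d}")
# Every individual shows a sizable effect of the manipulation; the
# composite "ideal subject" with a zero effect does not exist.
```

The aggregated result describes an "ideal subject" that no actual subject resembles, which is the situation an idiographic analysis is designed to expose rather than average away.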
Addressing the issue of the generality of findings, Sidman wrote the following:

Tracking down sources of variability is then a primary technique for establishing generality. Generality and variability are basically antithetical concepts. If there are major undiscovered sources of variability in a given set of data, any attempt to achieve subject or principle generality is likely to fail. Every time we discover and achieve control of a factor that contributes to variability, we increase the likelihood that our data will be reproducible with new subjects and in different situations. Experience has taught us that precision of control leads to more extensive generalization of data. (p. 152)
Although the use of the idiographic approach led to significant advances in the earliest days of laboratory-based experimental psychology, as well as during early translations of findings from psychological science to clinical applications in the middle of the last century, it is clear that the nomothetic strategy has become a dominant method to establish both internal and external validity over the past few decades (Kazdin, 2003; Nock, Janis, & Wedig, 2008). One reason for this development in applied settings was the beginning of funding of large randomized clinical trials (RCTs) by the National Institutes of Health.
Many such studies require 10 or more years and many millions of dollars to perform one treatment trial. For instance, the National Institute of Mental Health (NIMH) funded the Treatment of Depression Collaborative Research Program (Elkin et al., 1989). This study, which took 13 years to finish (1977–1990), was reminiscent of earlier efforts such as the Cambridge-Somerville Youth Study conducted from 1935 through 1951, which divided delinquent boys into two groups: one treatment group and one group that received "treatment as usual" (McCord, 1978). The fact that there were no effects at 5, 10, 20, or 30 years did much to discourage efforts of this type for at least the next 30 years. In fact, results from the NIMH depression collaborative trial were not particularly revealing either, as no significant differences existed among treatment and comparison groups at any point in time. Nevertheless, this trial provoked useful comment and a great deal of controversy about strategic issues and the potential for improvement in the methodology of RCTs. These trials have improved to the point where they have become "the gold standard" for establishing causal relations between independent and dependent variables more generally, and data emanating from these trials have a deep influence on health care practices (Barlow, 2004).
But is something still lacking? Scientifically, relying on a relatively small group of researchers requiring enormous amounts of time and resources to perform a single treatment trial can be seen as an inefficient method of advancing knowledge. In applied clinical settings, clinicians often question the applicability of findings from RCTs to the individuals seen in typical clinical settings. In other words, there is a strong perception that problems exist in generalizing a nomothetic result to an idiographic situation. The various forms that these arguments take are often cast as specific objections to RCT methodology, and these arguments have been detailed numerous times in the past decade (e.g., Persons & Silberschatz, 1998; Westen, Novotny, & Thompson-Brenner, 2004).
Rather than simply critiquing nomothetic methodologies, can we enrich these methodologies with a complementary focus on the individual? The fact is that we have a good idea of how to be more idiographic in our research. Although most psychological researchers have been trained in group comparison designs and have relied primarily on them, exciting advances have been made in the use of idiographic methodologies, such as the single-case experimental design (see Barlow et al., 2008). The flexibility and efficiency of these designs make them ideally suited for use by psychological scientists, clinicians, and students alike, given that they require relatively little time, few resources, and few subjects, and yet they can provide strong evidence of causal relations between variables.
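As a rough sketch of what such a design can look like (a hypothetical A-B-A-B reversal design with invented observations, not an example drawn from Barlow et al., 2008), consider repeated measurements of one subject's target behavior under alternating baseline and intervention phases:

```python
import statistics

# Hypothetical single-case A-B-A-B (reversal) design: session-by-session
# counts of a target behavior for one subject, alternating baseline (A)
# and intervention (B) phases. All numbers are invented for illustration.
phases = {
    "A1 (baseline)":     [12, 11, 13, 12, 12],
    "B1 (intervention)": [7, 6, 5, 6, 5],
    "A2 (baseline)":     [11, 12, 12, 13, 11],
    "B2 (intervention)": [5, 6, 5, 4, 5],
}

for phase, observations in phases.items():
    print(f"{phase}: mean = {statistics.mean(observations):.1f}")

# The behavior drops each time the intervention is introduced and
# recovers each time it is withdrawn; that within-subject replication
# is what supports a causal interpretation without a comparison group.
```

Replicating the same pattern in a few additional subjects would then begin to establish generality, in the spirit of the early experimenters described above.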
The time now seems right to put more emphasis on idiographic strategies that can be integrated in a healthy way into existing nomothetic research approaches in both clinical and basic science settings. In clinical science, having established the effectiveness of a particular independent variable (e.g., an intervention for a specific form of psychopathology), one could then carry on with more idiographic efforts tracking down sources of intersubject variability and isolating factors responsible for this variability (Kazdin & Nock, 2003; Nock, 2007). Necessary alterations in the intervention protocols to effectively address variability could then be tested, once again idiographically, and incorporated into these treatments. Researchers in basic science laboratories could undertake similar strategies and avoid tolerating large error terms. Thus, all of psychological science, both basic and applied, would benefit.
REFERENCES
Allport, G.W. (1962). The general and the unique in psychological science. Journal of Personality, 30, 405–422.
Barlow, D.H. (2004). Psychological treatments. American Psychologist, 59, 869–878.
Barlow, D.H., Nock, M.K., & Hersen, M. (2008). Single case experimental designs: Strategies for studying behavior change (3rd ed.). Boston: Allyn & Bacon.
Elkin, I., Shea, M.T., Watkins, J.T., Imber, S.D., Sotsky, S.M., Collins, J.F., et al. (1989). National Institute of Mental Health Treatment of Depression Collaborative Research Program: General effectiveness of treatments. Archives of General Psychiatry, 46, 971–982, 983.
Kazdin, A.E. (2003). Research design in clinical psychology (4th ed.). Boston: Allyn & Bacon.
Kazdin, A.E., & Nock, M.K. (2003). Delineating mechanisms of change in child and adolescent therapy: Methodological issues and research recommendations. Journal of Child Psychology and Psychiatry, 44, 1116–1129.
McCord, J. (1978). A thirty-year follow-up of treatment effects. American Psychologist, 33, 284–289.
Nock, M.K. (2007). Conceptual and design essentials for evaluating mechanisms of change. Alcoholism: Clinical and Experimental Research, 31, 4S–12S.
Nock, M.K., Janis, I.B., & Wedig, M.M. (2008). Research design. In A.M. Nezu & M. Nezu (Eds.), Evidence-based outcome research: A practical guide to conducting randomized controlled trials for psychosocial interventions (pp. 201–218). New York: Oxford University Press.
Persons, J.B., & Silberschatz, G. (1998). Are results of randomized controlled trials useful to psychotherapists? Journal of Consulting and Clinical Psychology, 66, 126–135.
Shapiro, M.B. (1961). The single case in fundamental clinical psychological research. British Journal of Medical Psychology, 34, 255–263.
Shapiro, M.B. (1966). The single case in clinical psychological research. Journal of General Psychology, 74, 3–23.
Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books.
Skinner, B.F. (1966). Operant behavior. In W.K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 12–32). New York: Appleton-Century-Crofts.
Westen, D., Novotny, C.M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631–663.