Activity

Bad Press, Bad Chart


Although terms like “fake news” and “alternative facts” are now commonplace, their recognition has not resulted in substantially less bullshit in the popular press. To be sure, not all problematic products of journalism or company PR (e.g., stories, charts, infographics, tweets) are intentionally deceptive – sometimes the cause is simply a lack of competence. This activity requires you to find 3-5 examples of “bad press.”

For each of your 3-5 examples, include a link to the story or an image-capture of the chosen chart, tweet, and so forth. Also, answer the questions below for each example.

How did you come across the example? Was it pushed to you, shared with you, etc.?

What makes the example bullshit?


Does the example exemplify any specific concepts from the class ppt? 

Making the Sausage: The Recipe Matters

Our Journey So Far…

Defining BS

Detecting BS
(logically)

Describing where
and why BS thrives

Deciphering BS
(statistically)

Detecting BS
(methodologically)

Constructs

A hypothetical construct is a concept that:
§ Does not have a single observable referent
§ Cannot be directly observed
§ Has multiple referents, but none are all-inclusive

Cronbach & Meehl (1955)

Examples:
§ Center of mass
§ Company performance
§ Service quality
§ Intelligence
§ Team mental models

Observations

Observations are:
§ Collected information about a phenomenon
§ Can be sensed or measured with instruments
§ Can be qualitative or quantitative
§ Often referred to simply as ‘variables’ (imprecisely)

Examples:
§ Height, weight, physical attributes
§ Sales volume, productivity, scrap rate
§ Direct/indirect costs
§ Employee or consumer behavior
§ Survey responses, test scores, etc.

From Definitions to Positions…

For our purposes, a variable is a measurement that we
use as input into a model to be analyzed or tested…

IV (independent variable): X – also called the predictor, antecedent, or intervention

DV (dependent variable): Y – also called the criterion, consequent, or outcome

From Definitions to Positions…

Moderator – partitions the IV into subgroups that establish domains of maximal effectiveness in regard to a given DV

[Diagram: Mod (moderator) acting on the X → Y path between the IV (independent variable) and the DV (dependent variable)]

A moderator changes the relationship between variables (amplifies or attenuates it). It reveals “for whom” or “under what conditions” a relationship may change.
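For concreteness, a moderation effect like this is commonly tested as an interaction term in a regression model. Below is a minimal sketch using simulated (hypothetical) data and the Python statsmodels library; the variable names and coefficients are illustrative assumptions, not material from the slides.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate hypothetical data in which the X -> Y slope depends on the moderator
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n), "mod": rng.normal(size=n)})
df["y"] = 0.3 * df["x"] + 0.2 * df["mod"] + 0.4 * df["x"] * df["mod"] + rng.normal(size=n)

# "y ~ x * mod" expands to x + mod + x:mod; a significant x:mod term indicates moderation
model = smf.ols("y ~ x * mod", data=df).fit()
print(model.params[["x", "mod", "x:mod"]])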

From Definitions to Positions…

Mediator – a generative mechanism through which the IV is able to influence the DV (i.e., it explains the relationship)

[Diagram: IV → Med (mediator) → DV, with path a from the IV to the mediator, path b from the mediator to the DV, and path c for the direct IV → DV effect]

A mediator conveys the influence of the IV onto the DV. It is an intervening force. It reveals the “how” and “why” a relationship exists.
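Similarly, a simple mediation test estimates the a and b paths and multiplies them to obtain the indirect effect. The sketch below uses simulated (hypothetical) data and statsmodels; in practice the indirect effect would also be given a bootstrap confidence interval.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
med = 0.5 * x + rng.normal(size=n)             # path a: IV -> mediator
y = 0.4 * med + 0.1 * x + rng.normal(size=n)   # path b (Med -> DV) plus a small direct effect
df = pd.DataFrame({"x": x, "med": med, "y": y})

a = smf.ols("med ~ x", data=df).fit().params["x"]         # IV -> Med
b = smf.ols("y ~ med + x", data=df).fit().params["med"]   # Med -> DV, controlling for the IV
print("indirect effect (a*b):", round(a * b, 3))          # influence of X carried through Med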

Research Settings and Strategies

True experiment: directly manipulates one or more IVs ✓; randomly assigns units to conditions ✓; tests effects on DVs & mod/med variables ✓; holds confounds constant by design ✓

Quasi-experiment: directly manipulates one or more IVs ✓; randomly assigns units to conditions ✗; tests effects on DVs & mod/med variables ✓; holds confounds constant by design ✓

Non-experiment: directly manipulates one or more IVs ✗; randomly assigns units to conditions ✗; tests effects on DVs & mod/med variables ✓; holds confounds constant by design ✗
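To see why random assignment earns its check mark above, the short sketch below simulates a pre-existing difference (a potential confound) and shows that random assignment leaves it roughly balanced across conditions. All numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
confound = rng.normal(100, 15, size=1000)                # e.g., prior ability, never measured
assignment = rng.permutation(np.repeat([0, 1], 500))     # random assignment to two conditions

print("treatment group mean:", round(confound[assignment == 1].mean(), 1))
print("control group mean:  ", round(confound[assignment == 0].mean(), 1))
# The two means are nearly identical, so the confound cannot explain a treatment effect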

General Research Purposes

Exploratory – generally conducted to learn about some phenomenon, especially when there is very little prior research

Descriptive – generally conducted to ascertain & describe relationships among specified variables of interest in a given situation

Causal – generally conducted to determine & establish cause-and-effect relationships among variables

It All Starts with Design…
“Though there are numerous techniques of data
analysis, no technique, regardless of its elegance,
sophistication, and power can save the research
when the design is poor, improper, confounded, or
misguided. As we have stated, and will state again,
sound inferences and generalizations from a piece
of research are a function of design and not
statistical analysis.”

~Keppel & Zedeck (1989, p. 12)

General Research Designs

Types of Strategies

Experiment – researcher manipulates 1 or more variables to determine a causal relationship

Observational – researcher observes (does not intervene) to find correlations among the collected data

Longitudinal – observational study that involves repeated measures over long periods of time

Qualitative – researcher explores beliefs, experiences, & perceptions through non-numerical data

Mixed method – researcher blends numerical and non-numerical data to “triangulate” findings

General Research Designs

Types of Studies

§ Randomized controlled study
§ Controlled study
§ Before-after study
§ Cohort/panel study
§ Cross-sectional study
§ Case study

Controlled Studies

[Diagram: intervention and comparison groups, each measured at baseline and on outcomes, tracked over time]

Controlled Studies (posttest only)

[Diagram: intervention and comparison groups compared on outcomes over time]

Before-after Studies

[Diagram: baseline measure, then intervention, then outcome measure over time]

Cohort/panel Studies

[Diagram: the same cohort/panel is measured repeatedly, from a baseline through Time 1 to Time 4]

Cross-sectional Studies

[Diagram: a sample is drawn from the population, and Variables 1 through n are all measured at one time point]

Case Studies


Methods differ with respect to
Control & Fidelity

Computer Simulation
Laboratory Experiment

Field Experiment
Interview/Survey

Observation
Archival Study

Methods are Always Imperfect

[Figure: the methods above arrayed along two axes, Degree of Control and Degree of Fidelity]

Methods are Always Imperfect

Potential Benefits and Potential Costs

Computer Simulation
Benefits: Very precise manipulations; can model dangerous/harmful situations
Costs: Results only as good as the model; cannot model all relevant variables

Laboratory Experiment
Benefits: Provides causal evidence; random assignment removes confounds
Costs: Contrived setting; may lack the complexities of the “real world”

Field Experiment
Benefits: Provides causal evidence; takes place in a “real” context
Costs: Differential treatments may be prohibited; confounds can occur

Interview/Survey
Benefits: Captures in-depth data; provides insight on experiences/attitudes
Costs: Difficult to show causal effects; subject to biases in poor designs

Observation
Benefits: Within the real or natural context; allows researcher participation
Costs: Researcher interference; can have misinterpretations; time-consuming

Archival Study
Benefits: Often large scale and/or broad scope; cost effective
Costs: Typically cannot explain “why”; has omissions, “unmeasured” variables

Measures are Always Imperfect…

“True Score” – what should be measured
“Actual Score” – what is really measured

[Diagram: the overlap between the two is Relevance; what the actual score captures beyond the true score is Contamination; what it misses is Deficiency]

Examined the “Epistemic Stroop Effect,” which refers to the finding that people
involuntarily reject factual propositions that conflict with their knowledge of
the world. The authors asked whether opinions have a similar effect and
conducted four separate experiments.

§ Showed 88 opinion statements on politics, social issues, personal tastes, etc.
§ E.g., “The Internet has made people more sociable [or isolated]”
§ For each statement, they made a grammatically incorrect version
§ We’re faster to verify grammatically correct statements (vs. non-grammatical)

§ Assessed extent that individuals agreed with statements

Key Findings:

§ Participants were quicker to identify statements as
grammatically correct when they agreed with the
opinion in the statement, compared with when they
disagreed

§ There was no difference in time for identifying
ungrammatical statements as ungrammatical

§ Results held even though agreement with the
opinion was irrelevant to the grammatical task

“The results demonstrate that agreement with a
stated opinion can have a rapid and involuntary
effect on its cognitive processing”

Break into small groups (3-4 people) and
address the following question:
In chapter 4, Seethaler discusses 10 specific
“context connections.”
1. Compare technologies to other technologies
2. Put findings in a geographical context
3. Consider the historical context
4. Express figures on a comprehensible scale
5. Qualify figures by circumstances where they hold true
6. Ask how the numbers being cited compare to “normal”
7. Be careful not to be misled by averages
8. For percentages, ask “percentage of what?”
9. Reframe losses as gains or gains as losses
10. Determine if there is a context that explains an observation

Describe 2 connections and her examples
Think of one other example of each (from
life, business, prior lectures, etc.)

Lies, Damned Lies, & Science

Four “Types” of Validity

Statistical conclusion – Can we infer a relationship between study variables based on statistical results?

Internal – Can we infer an observed relationship is a causal connection? (ruling out confounds)

Construct – Can we infer measures effectively reflect the underlying constructs and relations among these constructs?

External – Can we infer the observed effects will generalize to other persons, places, measures, or times? (generalizations)

Statistical Conclusion Validity
§ Low statistical power
§ Individual heterogeneity (i.e., subject differences)
§ Context effects (i.e., extraneous environment)
§ Range restriction (i.e., artificially truncated data)
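To make the first threat on this list concrete, statistical power can be computed directly. A minimal sketch using the Python statsmodels library; the effect size and sample sizes are illustrative assumptions.

from statsmodels.stats.power import TTestIndPower

# Power to detect a modest standardized group difference (d = .40) at alpha = .05
analysis = TTestIndPower()
for n_per_group in (20, 64, 200):
    power = analysis.power(effect_size=0.4, nobs1=n_per_group, alpha=0.05)
    print(f"n per group = {n_per_group:>3}: power = {power:.2f}")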

Threats to Valid Inferences

Threats to Statistical Conclusion Validity

A workforce analyst is interested in examining the
key predictors of employee turnover. Using data
collected during 2007-2009 from a large sample of
employees at over 50 firms, spanning multiple
industries, he finds several significant effects for
variables not found in previous turnover research
and is excited to write-up and publish the findings.

Context effects
Individual heterogeneity
Range restriction
Low power

Threats to Statistical Conclusion Validity
An educational researcher is testing the effects of
using a new technology platform on learning a
foreign language. To maximize the scope of the
study, she recruits students from middle school,
high school, and college. She is disappointed in
the results, which fail to show any consistent
significant effects associated with using the
technology.

Context effects
Individual heterogeneity
Range restriction
Low power

Threats to Statistical Conclusion Validity

A researcher designs a study to examine the effects
of using a homeopathic drug to reduce cholesterol
among high-risk individuals. He therefore
specifically recruits individuals who have above
average cholesterol levels to participate in the
study. Unfortunately, he does not find evidence for
the efficacy of the treatment.

Context effects
Individual heterogeneity
Range restriction
Low power

Internal Validity
§ Regression to the mean
§ Maturation (i.e., natural change over time)
§ Mortality (i.e., subject attrition)
§ Instrumentation (i.e., measurement issues)
§ Subject selection (i.e., who/what is chosen)
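The first threat above, regression to the mean, is easy to see in a simulation: units selected for extreme scores at one measurement look “improved” at the next measurement even with no intervention. All numbers below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)
true_score = rng.normal(50, 10, size=10_000)
t1 = true_score + rng.normal(0, 5, size=10_000)   # time-1 measurement with error
t2 = true_score + rng.normal(0, 5, size=10_000)   # time-2 measurement with fresh error

low_at_t1 = t1 < np.percentile(t1, 10)            # select the "worst" 10% at time 1
print("selected group, mean at t1:", round(t1[low_at_t1].mean(), 1))
print("selected group, mean at t2:", round(t2[low_at_t1].mean(), 1))  # drifts back toward 50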

Threats to Valid Inferences

Threats to Internal Validity
A best-selling business book focuses on what makes
companies great. The research on which the book is based
includes in-depth analysis of 11 companies that went from
so-so performance to top-notch performance, as defined by
a sustained period of stock value dramatically beating
market and competitor values. One reason the book is
popular is because it offers straightforward company
characteristics for leaders to imitate in their own firms.

Regression to mean
Maturation
Mortality
Instrumentation
Subject selection

Threats to Internal Validity
An educational researcher wants to examine the effectiveness of
Massively Open Online Courses (MOOCs). She conducts an
experiment where students are randomly assigned to either a
MOOC course or a traditional course. The content and duration
for both courses are identical. The sample included 100 students
(50 per condition). At the conclusion of the study, 37 students
complete the MOOC and 46 students complete the traditional
course. She compares scores on a knowledge test, finding that
students in the MOOC format, on average, scored 36% higher.

Regression to mean
Maturation
Mortality
Instrumentation
Subject selection

Threats to Internal Validity
A firm discovers that direct reports of new managers (less
than 1 yr. in position) have substantially lower engagement
levels than the company average. To remedy these issues,
an onboarding initiative is launched, requiring attendance in
the first month of becoming a manager. A follow-up study
18 months after the initiative shows that engagement for
participating managers’ direct reports is at the level of the
company’s average.

Regression to mean
Maturation
Mortality
Instrumentation
Subject selection

Threats to Internal Validity
An energy engineer wants to assess the effectiveness of an
energy conservation program. This program included a
conservation campaign as well as an improved method for
monitoring the firm’s energy usage. The amount of energy
used was based on archival sources for the 2 years prior to
the program and 2 years following the end of the program.
The engineer found a significant decrease in energy use at
about the time when the program was initiated.

Regression to mean
Maturation
Mortality
Instrumentation
Subject selection

Construct Validity
§ Reactivity/expectancies (i.e., ‘guessing’ the importance)
§ Novelty/disruption effects (i.e., too new or dramatic)
§ Compensatory rivalry (i.e., competition, not intervention)
§ Treatment diffusion (i.e., intervention ‘leaks’)

Threats to Valid Inferences

Threats to Construct Validity
A Fortune 10 firm places “high potential” leaders in an intensive
two-week, off-site program aimed at increasing self-awareness,
learning agility, and leadership. The firm regularly collects data on
the program’s effectiveness (e.g., simulations, 360 data, etc.). The
VP of HR recently integrated an assessment that brings to bear
the “latest brain science” purported to underlie effective
leadership. The follow-up results show a significant gain in
effectiveness of the program after just the first use of the
assessment. Subsequent programs show much lower gains.

Reactivity and expectancies
Novelty and disruption
Compensatory rivalry
Treatment diffusion

Threats to Construct Validity
A finance professor embarks on an investigation of the effects
of providing up-front information describing the common
decision-making errors people make when investing. His hope
is that exposing people to such information will lessen the
likelihood of poor decision making (i.e., avoid the errors). He
recruits financial advisors from 6 top investment firms to
participate. He tracks participants’ views of the online
information module and the effectiveness of their investment
decisions for a 2-week duration after the module.

Reactivity and expectancies
Novelty and disruption
Compensatory rivalry
Treatment diffusion

Threats to Construct Validity
A commercial construction firm decides to run a test to
examine if it’s worth it to purchase new robotic bricklaying
machines. The lead engineer chooses two projects that involve
the same type of building (i.e., size, shape, materials, etc.).
One project uses the bricklaying robot and the other uses only
human bricklayers. After two weeks, the data show that the
robot is outperforming the human bricklayers by only about
5%. The firm decides the robotic bricklaying machine is not a
worthy investment.

Reactivity and expectancies
Novelty and disruption
Compensatory rivalry
Treatment diffusion

Threats to Construct Validity
The Chief Research Officer at a large software firm wants to
investigate the effects of “open office” layouts to see if this
design facilitates cooperation among employees in their
development teams. Timing is perfect as 6 teams are about to
move into a new building. She randomly assigns 3 of the 6
teams to a floor with the “open office” layout and the other teams to
regular layouts (i.e., cubicles). She tracks levels of cooperation for
several months in all 6 teams. The results show that cooperation
has significantly increased for all 6 teams compared to historical
benchmarks.

Reactivity and expectancies
Novelty and disruption
Compensatory rivalry
Treatment diffusion

External Validity
§ Setting specificity
§ Outcome specificity
§ Respondent specificity
§ Mediation dependency (i.e., missing ‘mechanisms’)

Threats to Valid Inferences

Threats to External Validity
A professor specializing in technology and innovation
research has found strong support over multiple studies for
the positive effects of using “design thinking” principles on
the effectiveness and efficiency of software development
teams. He decides to apply these principles in other types
of teams, including marketing teams, production teams, and
sales teams. Unlike his original research, his latest findings
are quite “mixed” in their support of the benefits of design
thinking.

Setting specificity
Outcome specificity
Respondent specificity
Mediation specificity

Threats to External Validity
A national retail store wants to understand how satisfied
customers are with in-store experiences. They hire a retail
consulting firm that posts a link to a customer satisfaction
survey on the store’s website and shares the link across
social media platforms. The results show that the vast
majority of customers have negative in-store experiences.
The store is now contemplating several potential
interventions, all of which require substantial resources.

Setting specificity
Outcome specificity
Respondent specificity
Mediation specificity

Threats to External Validity
A substantial amount of evidence shows a strong relationship
between scores from standardized tests (SAT, ACT, GMAT, etc.)
and first-year GPA. A new large-scale study examines the impact
of “test optional” policies on graduation rates and cumulative
GPA. A major finding of the study is that being “test optional”
does not have negative effects on these outcomes. The study
also finds that high school GPA is a good predictor of college
GPA. The authors claim their study suggests that standardized
tests have very little value in higher education admissions.

Setting specificity
Outcome specificity
Respondent specificity
Mediation specificity

Threats to External Validity
A public health researcher is testing the effects of a
community-based crime prevention program in economically
depressed areas. The program uses neighborhood
associations to solicit interest in organizing blocks of
“neighborhood watches.” Association members tend to be the
first “block captains,” which greatly reduces program start-up
times. So far, the results have been very promising with quick
increases in neighborhood watch participation and subsequent
reductions in overall crime incidents.

Setting specificity
Outcome specificity
Respondent specificity
Mediation specificity

Evidence-based Management:
It’s Not an Oxymoron!

Our Journey So Far…

Defining BS

Detecting BS
(logically)

Describing where
and why BS thrives

Deciphering BS
(statistically)

Detecting BS
(methodologically)

Distributions
P-values
Power

Confidence Intervals
Effect Sizes

Proxies & Ecological fallacy

Research Designs
Strategies & Studies

“The Lingo”
Threats to Validity

Toward Evidence-based Practice

Evidence-based practice is about making decisions through the conscientious, explicit, and judicious use of the best available evidence from multiple sources by:

1. Asking: translating practical issues/problems into answerable questions

2. Acquiring: systematically searching for and retrieving the evidence

3. Appraising: critically judging the quality of the evidence

4. Aggregating: weighing and pulling together the evidence

5. Applying: incorporating evidence into decision-making processes

6. Assessing: evaluating decisions to increase success probabilities

The Challenge of Practicing EBM

False beliefs about EBM:

§ Evidence-based practice ignores practitioners’ professional experience
§ Evidence-based practice is all about numbers, analytics, & statistics
§ Managers need to make decisions quickly & don’t have time for EBM
§ Each company is unique, so usefulness of scientific evidence is limited
§ If you do not have high-quality evidence, then you cannot do anything
§ Good-quality evidence gives you the direct or perfect answer

Sources of Evidence

Scientific Literature
(empirical studies)

Organization
(internal data)

Practitioners
(professional expertise)

Stakeholders
(values and concerns)


Organizational Facts (and Errors)

Small Numbers – Example: using last year’s average to predict/forecast next year’s numbers (e.g., sales)

Range Restriction – Example: profiling “top performers” to develop criteria to use in personnel selection

Measurement Error – Example: using difference scores to indicate financial performance (e.g., profitability)

Confounds – Example: examining the impact of training but not controlling for trainee attributes (e.g., motivation, cognitive ability)

Organizational Facts (and Errors)

Small Numbers – Solutions: aggregate data across units or time; use confidence intervals; form collaborations (industry/regional groups)

Range Restriction – Solutions: examine more than just extreme cases; apply statistical correction for range restriction

Measurement Error – Solutions: use multiple indicators; split differences into component parts for analysis; correct for unreliability

Confounds – Solutions: use control variables; aggregate data sets across units; study the variables over time
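Two of the statistical corrections mentioned above can be written in a few lines. The sketch below implements the standard Thorndike Case II range-restriction correction and the classic correction for attenuation; the input values are illustrative assumptions.

import math

def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    # Thorndike Case II correction for direct range restriction on the predictor
    u = sd_unrestricted / sd_restricted
    return (r * u) / math.sqrt(1 + r**2 * (u**2 - 1))

def correct_attenuation(r, rel_x, rel_y):
    # Disattenuate an observed correlation for unreliability in both measures
    return r / math.sqrt(rel_x * rel_y)

print(round(correct_range_restriction(0.20, sd_restricted=5, sd_unrestricted=10), 2))  # ~0.38
print(round(correct_attenuation(0.20, rel_x=0.80, rel_y=0.70), 2))                     # ~0.27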

The Pitfall of Patterns (and Intuition)

Trust me…20 years of
management experience

What Good Theory Does

✓ Organizes (parsimoniously) and communicates (clearly)
✓ Helps identify and define key problems
✓ Provides a basis for making predictions
✓ Gives meaning to observations (i.e., aids interpretation)
✓ Scopes empirical investigations (i.e., important variables)
✓ Permits generalization beyond the focal sample

Ultimate goal of theory is to explain… Why?!?
(how & when too)

Why Are Fads So Compelling?
§ Promise effectiveness and efficiency (fast!)
§ Appear simple and readily implementable
§ Help assuage anxieties about problems
§ Make users feel ‘cutting edge’
§ Often offer ‘half-truths’ (things that might work sometimes, in some contexts)

What tells you that a model is probably a fad?

Some Telltale Signs of a Fad
§ Too simple, straightforward – easy to communicate, reduces to a very small number of factors
§ Promise (even guarantee) results – fad auteurs are confidently didactic, no false humility or hedging
§ Universal applicability – solutions for everyone, everywhere
§ Aligned with zeitgeist – resonates with trends or problems du jour
§ Novel, but not radical – questions assumptions, but rediscovers/repackages older ideas
§ Lively, entertaining – articulate, bold, memorable; buzzwords, lists, acronyms, etc.
§ Legitimacy via gurus and star examples – supported by stories of “best companies” and the status of gurus

*Miller, Hartwick, LeBreton-Miller (2004)


EBM in Action: The CAT

Hunting for Buried Treasure:
Well, buried evidence at least…

Research is a Process

1. Scoping the domain or literature
2. Forming the questions
3. Conducting the search
4. Evaluating the sources
5. Summarizing the literature

Notes: What is “the literature”? What domains are pertinent? There is a feedback loop – questions are reframed as the literature is learned. Look for existing summaries (narrative reviews, systematic reviews, meta-analyses, etc.).

Research is a Process
2. Forming the questions

§ Define the problem in a clear and concise statement
§ Little is known about the factors associated with Y, and Y is expensive, important, relevant, etc. to business
§ People (consultants, leaders, employees, etc.) think that X is related to Y, but this seems to be based on a hunch

§ Start broad, but don’t finish until you have specifics
§ Does team-building work?
§ Does working virtually improve performance?
§ Do technology acquisitions speed innovation?


§ Does team-building work?
§ What is a ‘team’?
§ In what contexts/settings?
§ What counts as ‘team-building’?
§ What does ‘work’ mean (outcomes, time period, etc.)?

Research is a Process
2. Forming the questions

Some tips for asking good questions

§ Write answerable questions
§ Use PICOC:

Population – Who? Type of employee, group, etc.
Intervention – What or how? Technique, factor, treatment
Comparison – Compared to what? Other techniques, factors, treatments
Outcome – Desired consequence? Purpose or criteria
Context – Under what circumstances? Organization, sector, industry, etc.

Forming an Answerable Question
Imagine you are a consultant. Your client is the
board of directors of a large health-care
organization. The board has plans for a merger with
a smaller healthcare organization in a nearby town.
However, it has been said that the cultures differ
widely between the two organizations. The board
asks you if this organizational culture difference will
impede the success of the merger. Most of them
sense that cultural differences matter, but they want
evidence-based advice…

What else would you like to know?

P: What kind of Population are we talking about? Middle managers, back-office employees, medical staff, clerical staff?

O: What kind of Outcome are we aiming for? Employee productivity, return on investment, profit margin, competitive position, innovation power, market share, customer satisfaction?

C: And how is the assumed cultural difference assessed? Is it the personal view of some managers or is it measured by a validated instrument?

More information…
According to the board, the primary objective of
the merger is to integrate the back-office of the
two organizations (IT, finance, purchasing, facilities,
personnel administration, etc.) in order to create an
economy of scale. The front offices and primary
processes of the two organizations will remain
separate. The cultural difference is not objectively
assessed (it is the perception of the senior
managers of both organizations).

PICOC
P: Back-office employees in a healthcare organization
I: Merger, integration of back office
C: Status quo (current state)
O: Economy of scale (efficiency, cost reduction, etc.)
Context: Different organizational cultures

Forming an Answerable Question
“I lead the primary development team for a medium-
sized software firm. For years, we’ve used
conventional project management (PM) techniques
with great success (e.g., waterfall, critical path,
rational unified process). However, I’ve been reading
more and more articles about the PM approach
referred to as Agile or Scrum. Everything I read extols
the value of Agile, especially its ability to allow
nimbler development. Before I implement this new
approach with my team, which would be a rather
dramatic shift in process, I’d like to know the chances
it will work.”

PICOC
P: Programmers on a software development team
I: New PM approach (Agile/Scrum)
C: Current PM approach (traditional methods)
O: On-time delivery, bug rate, iterations, individual attitudes, team cohesion/conflict/coordination, etc.
Context: Mature team with a history of success

Forming an Answerable Question
“I work for a national retail organization and the SVP of
sales and service recently asked for my opinion on a
potentially new management program for the
company. She knows that I just finished my graduate
degree and during our last meeting, she remarked,
“The leadership team has been discussing the fact
that we don’t really use non-financial rewards for our
salespeople and they wondered if the company
should be doing so. What I’d like to know is whether
or not these kinds of rewards engage employees.”

PICOC
P: Salespeople at a national retail organization
I: Non-monetary rewards
C: Current approach (baseline)
O: Employee engagement
Context: No prior use of non-financial rewards

Research is a Process
3. Conducting the search

§ Popular press/journalistic sources
§ Wall Street Journal, Financial Times, Businessweek, Harvard Business Review, MIT Sloan Review

§ Government/industry/company sources
§ FRED, Census, BLS, Mayflower Group, McKinsey Reports

§ Academic sources
§ ABI/INFORM, Business Source Complete, PsychInfo, ERIC, Web of Science, Google Scholar

Research is a Process
3. Conducting the search

Some tips for searching

§ Two general types of searching
§ “Building blocks” method (searching with terms)
§ Use PICOC terms; synonyms; related subjects; narrower or broader terms
§ “Snowball” method (searching from a known source)
§ Ancestry – sources cited by the found source itself
§ Descendent – sources that cite the found source

Research is a Process
3. Conducting the search

Some tips for searching

§ Follow a concentric search pattern
1. Subject/Thesaurus (SU)
2. Title (TI)
3. Abstract (AB)
4. Anywhere
§ Try synonyms, alternate spellings, etc. at each level

Research is a Process
4. Evaluating the sources

Some tips for evaluating located sources

§ Look for “seminal” articles
§ A publication that summarizes and/or extends scientific thought on a given topic
§ E.g., systematic reviews, meta-analyses, theoretical papers, even some single empirical studies
§ Citation count is a rough-cut indicator of seminal articles

Research is a Process
4. Evaluating the sources

Some tips for evaluating located sources

§ Consider journal quality
§ General indicators of journal quality (there’s some debate here)
§ Double-blind, peer-reviewed
§ Acceptance rates (5-10% “best”; <20% “very good”)
§ Impact factors (JCR, SJR, WoS, SNIP)
§ Reputational lists (ABS list, FT top 50, UT-Dallas)

Research is a Process
5. Summarizing the literature

§ Reading academic journal articles
§ Common components: Abstract, Literature Review, Hypotheses, Methods, Results, Discussion, References

Research is a Process
5. Summarizing the literature

§ Critically appraised topics
§ Rapid evidence assessments
§ Systematic reviews
§ Narrative reviews
§ Meta-analysis

Underlying Logic of Meta-analysis
• Meta-analysis cumulates effects across studies, whereas a single study cumulates scores across participants

Single study – scores across participants (ID, Variable 1):
Person 1: 1.25 | Person 2: 1.75 | Person 3: 3.25 | Person 4: 4.00 | Person 5: 4.15 | … | Person i: 4.58

Meta-analysis – effects across studies (Study, Effect r):
Study 1: .23 | Study 2: .55 | Study 3: .34 | Study 4: .27 | Study 5: .61 | … | Study i: .42
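The cumulating step is simple arithmetic. Below is a minimal “bare-bones” sketch that weights each study’s correlation by its sample size; the r values come from the table above, while the sample sizes are hypothetical.

import numpy as np

r = np.array([0.23, 0.55, 0.34, 0.27, 0.61, 0.42])   # study effect sizes (from the table)
n = np.array([120, 85, 200, 150, 60, 95])             # hypothetical sample sizes

r_bar = np.sum(n * r) / np.sum(n)                      # sample-size-weighted mean effect
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)     # observed variance of the effects
print(f"weighted mean r = {r_bar:.2f}, observed variance = {var_obs:.3f}")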

Underlying Logic of Meta-analysis
• Meta-analysis focuses on the

direction and magnitude of
the effects across studies,
not statistical significance

*Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for
training of researchers. Psychological Methods, 1, 115-129.

Example:

21 studies that examined the relationship between a given employment test and job performance

§ 38% statistically significant
§ avg. effect size (r) = .33


§ All CIs overlap, even for the
smallest and largest r

§ avg. (corrected) r = .22
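A short simulation shows why counting “significant” studies is misleading: with a true correlation of .33 and modest samples (n = 30 per study is an assumption here, not a figure from the slides), well under half of the studies cross p < .05 even though the effect is real in every one of them.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_r, n, n_studies = 0.33, 30, 1000
significant = 0
for _ in range(n_studies):
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)  # data with true r = .33
    _, p = stats.pearsonr(x, y)
    significant += p < 0.05
print("share of studies reaching p < .05:", significant / n_studies)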

Narrative Review Example

• Does downsizing promote a firm’s competitiveness and/or performance?

Datta et al. (2010): reviewed 91 studies (1984-2008) examining the antecedents and consequences of employee downsizing, defined as “a planned set of organizational policies & practices aimed at workforce reduction with the goal of improving firm performance”

Individual Outcomes (k=29):
– Less job involvement, org. commitment, & job satisfaction
– More workplace conflict, less autonomy & support
– More voluntary turnover & absenteeism
– Declines in job performance, creativity, & quality

Organizational Outcomes (k=36):
– Announcements have negative effects on stock price
– Most research shows negative effect on firm profitability
– Negative effects on reputation
– No consistent evidence of positive impacts on sales growth, labor productivity, or R&D investments

• How substantially different are women
compared to men?

Gender “Differences Hypothesis”

Since 1992, sold over 50 million
copies, with multiple reprints…

And, now it’s an Off-Broadway
play!

Gender “Similarities Hypothesis”

Examined sex differences on a host of factors
§ E.g., cognitive abilities, communication, personality traits, psychological well-being, work behavior, motor skills, moral reasoning

Findings:
§ 78% of studies showed no meaningful differences
§ In a handful of cases, major differences were found (e.g., motor skills, sexuality, physical & verbal aggression)

The data simply do not support the widely held belief
that men and women are polar opposites

Hyde (2005): Reviewed 45 meta-analyses

• Are 10,000 hours of deliberate practice
necessary for becoming an expert?

The “10,000 Hour Rule”


• Is “grit” the most important predictor
of success in today’s world?

“While intelligence matters, a high
IQ or talent or any other factor, there
is no greater predictor of success.
The number one predictor of a
person’s success is their unflagging
commitment to a long-term goal.”

“Grit matters more than any other
talent or trait. The key to success is
grit.”

• Is EQ more important to job success
than IQ?

Personal Attribute (criterion: job performance)   b       Relative Weight (% of R²)
Cognitive ability                                 .66**   69.0%
Emotional intelligence                            .33**   13.2%
Neuroticism                                       .18     1.5%
Extraversion                                      .01     1.0%
Openness                                         -.27     4.3%
Agreeableness                                    -.01     0.8%
Conscientiousness                                 .28*    10.2%

R² = .49**

The Lure (and Lore) of Big Data:
Beware of Big Data Bigfoot!

“This is a world where massive amounts of data and applied mathematics
replace every other tool that might be brought to bear. Out with every theory of
human behavior…Who knows why people do what they do? The point is they do
it, and we can track and measure it with unprecedented fidelity. With enough
data, the numbers speak for themselves.

There is now a better way. Petabytes allow us to say: ‘Correlation is enough.’ We
can analyze the data without hypotheses about what it might show.”

The Lore…

Types of Business Data

1. Transaction data – transactions, purchases, business records, etc.
2. Reference data – categorization, classification, lookup data, etc.
3. Master data – subject-specific, links all company-specific data
4. Metadata – data about data (what, where, why, when, how)
5. ‘Big Data’ data – it’s just data…it becomes “big” when it gains:
   Volume – massive amounts
   Velocity – speed collected
   Variety – extreme heterogeneity

Big Data, Big Impact: How and Where?

What Does Big Data Promise?

Living Up to The Promise?

What Does Big Data Do?

Common “Algorithm” Tasks
1. Classification
2. Regression (aka predictive modeling)
3. Similarity matching
4. Cluster analysis
5. Co-occurrence analysis (aka association rule learning)
6. Profiling (aka neural networks)
7. Link prediction (aka network analysis)
8. Data reduction
9. Causal modeling
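As a quick illustration of two of the tasks above (classification and cluster analysis), here is a minimal scikit-learn sketch on synthetic data; the dataset, model choices, and parameters are all assumptions for illustration, not part of the slides.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, e.g., customer records with a binary label
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)            # classification
print("holdout accuracy:", round(clf.score(X_test, y_test), 2))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # cluster analysis
print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])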

Machine Learning and Algorithms

[Diagram: multiple data sources feed an algorithm, which produces output and, ultimately, monetary value]

All AI, All of the Time?


“You cannot legitimately test a hypothesis on
the same data that first suggested that
hypothesis. The remedy is clear. Once you
have a hypothesis, design a study to search
specifically for the effect you now think is
there. If the result of this test is statistically
significant, you have real evidence at last.”

“Big Data Hubris”

“Assumption of Infallibility”

“It’s troubling enough that British teenager
Molly Russell sought out images of suicide
and self-harm online before she took her
own life in 2017. But it was later
discovered that these images were also
being delivered to her, recommended by
her favorite social media platforms. Her
Instagram feed was full of them. Even in
the months after her death, Pinterest
continued to send her automated emails,
its algorithms automatically
recommending graphic images of self-
harm, including a slashed thigh and
cartoon of a young girl hanging. Her father
has accused Instagram and Pinterest of
helping to kill his 14-year-old daughter by
allowing these graphic images on their
platforms and pushing them into Molly’s
feed.”

“Unlike a human examiner/judge, a computer vision algorithm or
classifier has absolutely no subjective baggages [sic], having no
emotions, no biases whatsoever due to past experience, race,
religion, political doctrine, gender, age, etc., no mental fatigue, no
preconditioning of a bad sleep or meal. The automated inference
on criminality eliminates the variable of meta-accuracy (the
competence of the human judge/examiner) all together.” (p. 2)

“…are a perfect match, and their agenda appears to be to create a political movement where
Soros and his political machine and Clinton are two of the only major players. This is the first
time Soros and Clinton have been caught on tape directly colluding in promoting the same
false narrative. One of the key revelations in the leaked audio was Clinton’s admission to a
Russian banker that she knew about the Uranium One deal before it was approved by
Congress. Clinton was shown sharing the same talking points that were originally drafted by a
Fusion GPS contractor hired by an anti-Trump Republican donor. The leaked audio is the
clearest evidence yet that the Clinton campaign and the Hillary Foundation colluded with
Fusion GPS to manufacture propaganda against President Trump.”

Elon Musk and Sam Altman cofounded a
research institute called OpenAI to make new
AI discoveries and give them away for the
common good.

AI system designed to learn the patterns of
language (very accurate). But when researchers
configured the system to generate text…

Type in the phrase:
“Hillary Clinton and George Soros”…

“Tom Simonite does not keep it simple. He doesn’t give you enough info on a subject to
make the reading of the book enjoyable. He has over 400 pages of footnotes, so that is a way
of getting your work for a subject out of the way. And of course, you never really feel like the
author has a clear vision of his subject. He does not give you enough details on how a group
of people is going to come together to solve a problem or come about a solution to a
problem. This book was so depressing to me, I can’t even talk about it without feeling like I
want to punch the kindle.”


Prompted to write a 1-star review:
“I hate Tom Simonite’s book”…

Proceedings of the 33rd Annual ACM Conference (2015)

Population = 27% female; Images = 11%

Population = 34% female; Images = 30%

Population = 91% female; Images = 97%

A Cautionary Tale…
“The Enlightenment sought to submit
traditional verities to a liberated, analytic
human reason. The internet’s purpose is to
ratify knowledge through the accumulation
and manipulation of ever expanding data.
Human cognition loses its personal
character. Individuals turn into data, and
data become regnant.”

1. AI may achieve unintended results
2. In achieving intended goals, AI may change human thought processes and human values
3. AI may reach intended goals, but be unable to explain the rationale for its conclusions

More Uplifting Quotes…

“Big data is the idea that a sufficiently large pile of
horseshit will (with a probability of one) somehow
contain a pony…”

~ Carl Bergstrom

“Big data is like teenage sex: everyone talks about it,
nobody really knows how to do it, and everyone
thinks everyone else is doing it, so everyone claims
they are doing it…”

~ Dan Ariely
