Question 1 (10 pts)

Describe the various problems afflicting the Uniform Crime Reports.

Question 2 (10 pts)

Explain why it is possible to generate categorical variables from continuous data but not possible to obtain continuous data from categorical variables.
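To see the first half of this asymmetry concretely, a continuous variable such as age can always be collapsed into ordered categories, but once binned, the exact values are gone. A minimal Python sketch (the cut points here are arbitrary, chosen only for illustration):

```python
# Collapse continuous ages into ordinal categories.
# The mapping cannot be reversed: "young adult" could be any age from 18 to 39.
ages = [17, 23, 35, 41, 68]

def age_group(age):
    if age < 18:
        return "juvenile"
    elif age < 40:
        return "young adult"
    else:
        return "older adult"

groups = [age_group(a) for a in ages]
print(groups)  # ['juvenile', 'young adult', 'young adult', 'older adult', 'older adult']
```

Going the other direction is impossible because each category stands for a whole range of values, so the original measurements cannot be recovered.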

Question 3 (10 pts)

Briefly explain the essential differences between bar charts and histograms.

Question 4 (10 pts)

A professor has recently completed her grading for the final exam.  The scores can be seen in the data set below.  Unfortunately, the professor has noticed the mean is extremely low.  She is perplexed because she was certain the class had performed extraordinarily well as there were several scores in the 90s and two perfect exams.  Take a look at the grade distribution below, calculate the mean, examine the scores, and figure out why the mean was so low.

99, 97, 100, 100, 64, 55, 40, 52, 63, 96, 50, 65, 60, 52
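The hand calculation can be checked with a few lines of Python (a sketch for verification, not part of the original exam):

```python
import statistics

# Final exam scores as listed in the question.
scores = [99, 97, 100, 100, 64, 55, 40, 52, 63, 96, 50, 65, 60, 52]
mean = statistics.mean(scores)
print(round(mean, 2))  # 70.93
```

The sum of the scores is 993 across 14 students, giving a mean of about 70.93.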

Question 5 (10 pts)

What is the purpose of inferential statistics?

Question 6 (10 pts)

Explain what is meant by ‘sampling error’.

Question 7 (10 pts)

A researcher has a data set of homicides occurring in large Southern metropolitan areas consisting of 304 cases with a mean of 25.68 and a standard deviation of 11.26.  The researcher has set α = .05.  Calculate the resulting confidence interval for this set of data.
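Because n = 304 is large, the z-based interval applies; with α = .05 (two-tailed), the critical value is 1.96. A sketch of the computation:

```python
import math

n, mean, sd = 304, 25.68, 11.26
z = 1.96                         # critical z for a 95% confidence level
se = sd / math.sqrt(n)           # standard error of the mean
lower = mean - z * se
upper = mean + z * se
print(round(lower, 2), round(upper, 2))  # 24.41 26.95
```

The resulting 95% confidence interval is approximately 24.41 to 26.95.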

Question 8 (10 pts)

Briefly explain the difference between a Type I and Type II error.

Question 9 (10 pts)

Very briefly explain what is meant by the term ‘non-directional test.’

Question 10 (10 pts)

Explain why a researcher would opt for an ANOVA instead of a series of t-tests.
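One way to frame the answer is the inflation of the familywise Type I error rate: if each of k tests is run at α = .05, the chance of at least one false rejection is approximately 1 − (1 − .05)^k (an approximation that treats the tests as independent, which pairwise t tests are not exactly). A quick sketch:

```python
# Approximate familywise Type I error rate for k tests each run at alpha = .05.
# Three groups require 3 pairwise t tests; ANOVA replaces them with one test.
alpha = 0.05
for k in [1, 3, 6, 10]:
    familywise = 1 - (1 - alpha) ** k
    print(k, round(familywise, 3))
```

With three groups (three pairwise comparisons), the familywise rate is already about .14, nearly triple the nominal .05, which is why a single ANOVA is preferred.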

Statistics for Criminology and Criminal Justice

Third Edition




Jacinta M. Gau
University of Central Florida


FOR INFORMATION:

SAGE Publications, Inc.

Thousand Oaks, California 91320

E-mail: order@sagepub.com

SAGE Publications Ltd.

1 Oliver’s Yard

London EC1Y 1SP

United Kingdom

SAGE Publications India Pvt. Ltd.

B 1/I 1 Mohan Cooperative Industrial Area

Mathura Road, New Delhi 110 044

India

SAGE Publications Asia-Pacific Pte. Ltd.

3 Church Street

#10–04 Samsung Hub

Singapore 049483

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying, recording, or by any information storage and retrieval
system, without permission in writing from the publisher.

Printed in the United States of America

Names: Gau, Jacinta M., author.

Title: Statistics for criminology and criminal justice / Jacinta M. Gau, University of Central Florida.

Description: Third edition. | Los Angeles : SAGE, [2019] | Includes bibliographical references and index.

Identifiers: LCCN 2017045048 | ISBN 9781506391786 (pbk. : alk. paper)

Subjects: LCSH: Criminal statistics. | Statistical methods.

Classification: LCC HV7415 .G38 2019 | DDC 519.5—dc23 LC record available at https://lccn.loc.gov/2017045048

All trademarks depicted within this book, including trademarks appearing as part of a screenshot, figure, or other image are included solely for
the purpose of illustration and are the property of their respective holders. The use of the trademarks in no way indicates any relationship with,
or endorsement by, the holders of said trademarks. SPSS is a registered trademark of International Business Machines Corporation.

This book is printed on acid-free paper.

Acquisitions Editor: Jessica Miller

Editorial Assistant: Rebecca Lee

e-Learning Editor: Laura Kirkhuff

Production Editor: Karen Wiley

Copy Editor: Alison Hope


Typesetter: C&M Digitals (P) Ltd.

Indexer: Beth Nauman-Montana

Cover Designer: Janet Kiesel

Marketing Manager: Jillian Oelsen


Brief Contents

Preface to the Third Edition
Acknowledgments
Part I Descriptive Statistics

Chapter 1 Introduction to the Use of Statistics in Criminal Justice and Criminology
Chapter 2 Types of Variables and Levels of Measurement
Chapter 3 Organizing, Displaying, and Presenting Data
Chapter 4 Measures of Central Tendency
Chapter 5 Measures of Dispersion

Part II Probability and Distributions
Chapter 6 Probability
Chapter 7 Population, Sample, and Sampling Distributions
Chapter 8 Point Estimates and Confidence Intervals

Part III Hypothesis Testing
Chapter 9 Hypothesis Testing: A Conceptual Introduction
Chapter 10 Hypothesis Testing With Two Categorical Variables: Chi-Square
Chapter 11 Hypothesis Testing With Two Population Means or Proportions
Chapter 12 Hypothesis Testing With Three or More Population Means: Analysis of Variance
Chapter 13 Hypothesis Testing With Two Continuous Variables: Correlation
Chapter 14 Introduction to Regression Analysis

Appendix A Review of Basic Mathematical Techniques
Appendix B Standard Normal (z) Distribution
Appendix C t Distribution
Appendix D Chi-Square (χ²) Distribution
Appendix E F Distribution
Glossary
References
Index


Detailed Contents

Preface to the Third Edition
Acknowledgments
Part I Descriptive Statistics

Chapter 1 Introduction to the Use of Statistics in Criminal Justice and Criminology
▶ Research Example 1.1: What Do Criminal Justice and Criminology Researchers Study?
▶ Data Sources 1.1: The Uniform Crime Reports
▶ Data Sources 1.2: The National Crime Victimization Survey
Science: Basic Terms and Concepts
Types of Scientific Research in Criminal Justice and Criminology
Software Packages for Statistical Analysis
Organization of the Book
Review Problems

Chapter 2 Types of Variables and Levels of Measurement
Units of Analysis
Independent Variables and Dependent Variables
▶ Research Example 2.1: Choosing Variables for a Study on Police Use of Conductive
Energy Devices
▶ Research Example 2.2: Units of Analysis
Relationships Between Variables: A Cautionary Note
▶ Research Example 2.3: The Problem of Omitted Variables
Levels of Measurement

The Categorical Level of Measurement: Nominal and Ordinal Variables
▶ Data Sources 2.1: The Police–Public Contact Survey
▶ Data Sources 2.2: The General Social Survey

The Continuous Level of Measurement: Interval and Ratio Variables
▶ Data Sources 2.3: The Bureau of Justice Statistics
Chapter Summary
Review Problems

Chapter 3 Organizing, Displaying, and Presenting Data
Data Distributions

Univariate Displays: Frequencies, Proportions, and Percentages
Univariate Displays: Rates
Bivariate Displays: Contingency Tables

▶ Data Sources 3.1: The Census of Jails
▶ Research Example 3.1: Does Sexual-Assault Victimization Differ Between Female and
Male Jail Inmates? Do Victim Impact Statements Influence Jurors’ Likelihood of


Sentencing Murder Defendants to Death?
Graphs and Charts

Categorical Variables: Pie Charts
▶ Data Sources 3.2: The Law Enforcement Management and Administrative Statistics
Survey

Categorical Variables: Bar Graphs
Continuous Variables: Histograms

▶ Research Example 3.2: Are Women’s Violent-Crime Commission Rates Rising?
Continuous Variables: Frequency Polygons
Longitudinal Variables: Line Charts

Grouped Data
▶ Data Sources 3.3: CQ Press’s State Factfinder Series
SPSS
Chapter Summary
Review Problems

Chapter 4 Measures of Central Tendency
The Mode
▶ Research Example 4.1: Are People Convicted of Homicide More Violent in Prison Than
People Convicted of Other Types of Offenses? Do Latino Drug Traffickers’ National
Origin and Immigration Status Affect the Sentences They Receive?
The Median
The Mean
▶ Research Example 4.2: How Do Offenders’ Criminal Trajectories Impact the
Effectiveness of Incarceration? Can Good Parenting Practices Reduce the Criminogenic
Impact of Youths’ Time Spent in Unstructured Activities?
Using the Mean and Median to Determine Distribution Shape
Deviation Scores and the Mean as the Midpoint of the Magnitudes
SPSS
Chapter Summary
Review Problems

Chapter 5 Measures of Dispersion
The Variation Ratio
The Range
The Variance
The Standard Deviation
The Standard Deviation and the Normal Curve
▶ Research Example 5.1: Does the South Have a Culture of Honor That Increases Gun
Violence? Do Neighborhoods With Higher Immigrant Concentrations Experience More
Crime?
▶ Research Example 5.2: Why Does Punishment Often Increase—Rather Than Reduce—


Criminal Offending?
SPSS
Chapter Summary
Review Problems

Part II Probability and Distributions
Chapter 6 Probability

Discrete Probability: The Binomial Probability Distribution
▶ Research Example 6.1: Are Police Officers Less Likely to Arrest an Assault Suspect When
the Suspect and the Alleged Victim Are Intimate Partners?

Successes and Sample Size: N and r
The Number of Ways r Can Occur, Given N: The Combination
The Probability of Success and the Probability of Failure: p and q
Putting It All Together: Using the Binomial Coefficient to Construct the Binomial
Probability Distribution

Continuous Probability: The Standard Normal Curve
▶ Research Example 6.2: What Predicts Correctional Officers’ Job Stress and Job
Satisfaction?

The z Table and Area Under the Standard Normal Curve
Chapter Summary
Review Problems

Chapter 7 Population, Sample, and Sampling Distributions
Empirical Distributions: Population and Sample Distributions
Theoretical Distributions: Sampling Distributions
Sample Size and the Sampling Distribution: The z and t Distributions
Chapter Summary
Review Problems

Chapter 8 Point Estimates and Confidence Intervals
The Level of Confidence: The Probability of Being Correct
Confidence Intervals for Means With Large Samples
Confidence Intervals for Means With Small Samples
▶ Research Example 8.1: Do Criminal Trials Retraumatize Victims of Violent Crimes?
▶ Data Sources 8.1: The Firearm Injury Surveillance Study, 1993–2013
Confidence Intervals With Proportions and Percentages
▶ Research Example 8.2: What Factors Influence Repeat Offenders’ Completion of a
“Driving Under the Influence” Court Program? How Extensively Do News Media Stories
Distort Public Perceptions About Racial Minorities’ Criminal Involvement?
▶ Research Example 8.3: Is There a Relationship Between Unintended Pregnancy and
Intimate Partner Violence?
Why Do Suspects Confess to Police?
Chapter Summary


Review Problems
Part III Hypothesis Testing

Chapter 9 Hypothesis Testing: A Conceptual Introduction
Sample Statistics and Population Parameters: Sampling Error or True Difference?
Null and Alternative Hypotheses
Chapter Summary
Review Problems

Chapter 10 Hypothesis Testing With Two Categorical Variables: Chi-Square
▶ Research Example 10.1: How Do Criminologists’ and Criminal Justice Researchers’
Attitudes About the Criminal Justice System Compare to the Public’s Attitudes?
Conceptual Basis of the Chi-Square Test: Statistical Dependence and Independence
The Chi-Square Test of Independence
▶ Research Example 10.2: Do Victim or Offender Race Influence the Probability That a
Homicide Will Be Cleared and That a Case Will Be Tried as Death-Eligible?
Measures of Association
SPSS
Chapter Summary
Review Problems

Chapter 11 Hypothesis Testing With Two Population Means or Proportions
▶ Research Example 11.1: Do Multiple Homicide Offenders Specialize in Killing?
Two-Population Tests for Differences Between Means: t Tests

Independent-Samples t Tests
▶ Data Sources 11.1: Juvenile Defendants in Criminal Courts

Dependent-Samples t Tests
▶ Research Example 11.2: Do Mentally Ill Offenders’ Crimes Cost More?
▶ Research Example 11.3: Do Targeted Interventions Reduce Crime?
Two-Population Tests for Differences Between Proportions
▶ Research Example 11.4: Does the Gender Gap in Offending Rates Differ Between Male
and Female Drug Abusers?
SPSS
Chapter Summary
Review Problems

Chapter 12 Hypothesis Testing With Three or More Population Means: Analysis of Variance
ANOVA: Different Types of Variances
▶ Research Example 12.1: Do Asian Defendants Benefit From a “Model Minority”
Stereotype?
▶ Research Example 12.2: Are Juveniles Who Are Transferred to Adult Courts Seen as
More Threatening?
When the Null Is Rejected: A Measure of Association and Post Hoc Tests
▶ Research Example 12.3: Does Crime Vary Spatially and Temporally in Accordance With


Routine Activities Theory?
SPSS
Chapter Summary
Review Problems

Chapter 13 Hypothesis Testing With Two Continuous Variables: Correlation
▶ Research Example 13.1: Part 1: Is Perceived Risk of Internet Fraud Victimization Related
to Online Purchases?
▶ Research Example 13.2: Do Prisoners’ Criminal Thinking Patterns Predict Misconduct?
Do Good Recruits Make Good Cops?
Beyond Statistical Significance: Sign, Magnitude, and Coefficient of Determination
SPSS
▶ Research Example 13.1, Continued: Part 2: Is Perceived Risk of Internet Fraud
Victimization Related to Online Purchases?
Chapter Summary
Review Problems

Chapter 14 Introduction to Regression Analysis
One Independent Variable and One Dependent Variable: Bivariate Regression

Inferential Regression Analysis: Testing for the Significance of b
Beyond Statistical Significance: How Well Does the Independent Variable Perform as
a Predictor of the Dependent Variable?
Standardized Slope Coefficients: Beta Weights
The Quality of Prediction: The Coefficient of Determination

Adding More Independent Variables: Multiple Regression
▶ Research Example 14.1: Does Childhood Intelligence Predict the Emergence of Self-
Control?
Ordinary Least Squares Regression in SPSS
▶ Research Example 14.2: Does Having a Close Black Friend Reduce Whites’ Concerns
▶ Research Example 14.3: Do Multiple Homicide Offenders Specialize in Killing?
Alternatives to Ordinary Least Squares Regression
▶ Research Example 14.4: Is Police Academy Performance a Predictor of Effectiveness on
the Job?
Chapter Summary
Review Problems

Appendix A Review of Basic Mathematical Techniques
Appendix B Standard Normal (z) Distribution
Appendix C t Distribution
Appendix D Chi-Square (χ²) Distribution
Appendix E F Distribution
Glossary


References
Index


Preface to the Third Edition

In 2002, James Comey, the newly appointed U.S. attorney for the Southern District of New York who would
later become the director of the Federal Bureau of Investigation, entered a room filled with high-powered
criminal prosecutors. He asked the members of the group to raise their hands if they had never lost a case.
Proud, eager prosecutors across the room threw their hands into the air, expecting a pat on the back. Comey’s
response befuddled them. Instead of praising them, he called them chickens (that is not quite the term he
used, but close enough) and told them the only reason they had never lost is that the cases they selected to prosecute were too easy.1 The group was startled at the rebuke, but they really should not have been. Numbers
can take on various meanings and interpretations and are sometimes used in ways that conceal useful
information rather than revealing it.

1. Eisinger, J. (2017). The chickens**t club: Why the Justice Department fails to prosecute executives. New York:
Simon & Schuster.

This book enters its third edition at a time when the demand for an educated, knowledgeable workforce has
never been greater. This is as true in criminal justice and criminology as in any other university major and
occupational field. Education is the hallmark of a professional. Education is not just about knowing facts,
though—it is about thinking critically and treating incoming information with a healthy dose of skepticism.
All information must pass certain tests before being treated as true. Even if it passes those tests, the possibility
remains that additional information exists that, if discovered, would alter our understanding of the world.
People who critically examine the trustworthiness of information and are open to new knowledge that
challenges their preexisting notions about what is true and false are actively using their education, rather than
merely possessing it.

At first glance, statistics seems like a topic of dubious relevance to everyday life. Convincing criminology and
criminal justice students that they should care about statistics is no small task. Most students approach the
class with apprehension because math is daunting, but many also express frustration and impatience. The
thought, “But I’m going to be a [police officer, lawyer, federal agent, etc.], so what do I need this class for?” is
on many students’ minds as they walk through the door or log in to the learning management system on the
first day. The answer is surprisingly simple: Statistics form a fundamental part of what we know about the
world. Practitioners in the criminal justice field rely on statistics. A police chief who alters a department’s
deployment plan so as to allocate resources to crime hot spots trusts that the researchers who analyzed the
spatial distribution of crime did so correctly. A prison warden seeking to classify inmates according to the risk
they pose to staff and other inmates needs assessment instruments that accurately predict each person’s
likelihood of engaging in behavior that threatens institutional security. A chief prosecutor must recognize that
a high conviction rate might not be testament to assistant prosecutors’ skill level but, rather, evidence that they
only try simple cases and never take on challenges.

Statistics matter because what unites all practitioners in the criminology and criminal justice occupations and professions is the need for valid, reliable data and the ability to critically examine numbers that are set before
them. Students with aspirations for graduate school have to understand statistical concepts because they will be expected to produce knowledge using these techniques. Those planning to enter the workforce as
practitioners must be equipped with the background necessary to appraise incoming information and evaluate
its accuracy and usefulness. Statistics, therefore, is just as important to information consumers as it is to
producers.

The third edition of Statistics for Criminology and Criminal Justice, like its two predecessors, balances quantity
and complexity with user-friendliness. A book that skimps on information can be as confusing as one
overloaded with it. The sacrificed details frequently pertain to the underlying theory and logic that drive
statistical analyses. The pedagogical techniques employed in this text draw from the scholarship of teaching
and learning, wherein researchers have demonstrated that students learn best when they understand logical
connections within and across concepts, rather than merely memorizing key terms or steps to solving
equations. In statistics, students are at an advantage if they first understand the overarching goal of the
techniques they are learning before they begin working with formulas and numbers.

This book also emphasizes the application of new knowledge. Students can follow along in the step-by-step
instructions that illustrate plugging numbers into formulas and solving them. Additional practice examples are
embedded within the chapters, and chapter review problems allow students to test themselves (the answers to
the odd-numbered problems are located in the back of the book), as well as offering instructors convenient
homework templates using the even-numbered questions.

Real data and research also further the goal of encouraging students to apply concepts and showing them the
relevance of statistics to practical problems in the criminal justice and criminology fields. Chapters contain
Data Sources boxes that describe some common, publicly available data sets such as the Uniform Crime
Reports, National Crime Victimization Survey, General Social Survey, and others. Most in-text examples and
end-of-chapter review problems use data drawn from the sources highlighted in the book. The goal is to lend
a practical, tangible bent to this often-abstract topic. Students get to work with the data their professors use.
They get to see how elegant statistics can be at times and how messy they can be at others, how analyses can
sometimes lead to clear conclusions and other times to ambiguity.

The Research Example boxes embedded throughout the chapters illustrate criminal justice and criminology
research in action and are meant to stimulate students’ interest. They highlight that even though the math
might not be exciting, the act of scientific inquiry most definitely is, and the results have important
implications for policy and practice. In the third edition, the examples have been expanded to include
additional contemporary criminal justice and criminology studies. Most of the examples contained in the first
and second editions were retained in order to enhance diversity and allow students to see firsthand the rich
variety of research that has been taking place over time. The full texts of all articles are available on the SAGE companion site (http://www.sagepub.com/gau) and can be downloaded by users with institutional access.

This edition retains the Learning Check boxes. These are scattered throughout the text and function as mini-quizzes that test students’ comprehension of certain concepts. They are short so that students can complete
them without disrupting their learning process. Students can use each Learning Check to make sure they are on
the right track in their understanding of the material, and instructors can use them for in-class discussion. The
answer key is in the back of the book.

Where relevant to the subject matter, chapters end with a section on IBM® SPSS® Statistics2 and come with
one or more shortened versions of a major data set in SPSS file format. Students can download these data sets
to answer the review questions presented at the end of the chapter. The full data sets are all available from the
Inter-University Consortium for Political and Social Research at www.icpsr.umich.edu/icpsrweb/ICPSR/ and
other websites as reported in the text. If desired, instructors can download the original data sets to create
supplementary examples and practice problems for hand calculations or SPSS analyses.

The third edition features the debut of Thinking Critically sections. These two-question sections appear at the
end of each chapter. The questions are open-ended and designed to inspire students to think about the
nuances of science and statistics. Instructors can assign them as homework problems or use them to initiate
class debates.

The book is presented in three parts. Part I covers descriptive statistics. It starts with the basics of levels of
measurement and moves on to frequency distributions, graphs and charts, and proportions and percentages.
Students learn how to select the correct type(s) of data display based on a variable’s level of measurement and
then construct that diagram or table. They then learn about measures of central tendency and measures of
dispersion and variability. These chapters also introduce the normal curve.

Part II focuses on probability theory and sampling distributions. This part lays out the logic that forms the
basis of hypothesis testing. It emphasizes the variability in sample statistics that precludes direct inference to
population parameters. Part II ends with confidence intervals, which is students’ first foray into inferential
statistics.

Part III begins with an introduction to bivariate hypothesis testing. The intention is to ease students into
inferential tests by explaining what these tests do and what they are for. This helps transition students from
the theoretical concepts covered in Part II to the application of those logical principles. The remaining
chapters include chi-square tests, t tests and tests for differences between proportions, analysis of variance
(ANOVA), correlation, and ordinary least squares (OLS) regression. The sequence is designed such that
some topics flow logically into others. Chi-square tests are presented first because they are the only
nonparametric test type covered here. Two-population t tests then segue into ANOVA. Correlation, likewise,
supplies the groundwork for regression. Bivariate regression advances from correlation and transitions into the
multivariate framework. The book ends with the fundamentals of interpreting OLS regression models.

This book provides the foundation for a successful statistics course that combines theory, research, and
practical application for a holistic, effective approach to teaching and learning. Students will exit the course
ready to put their education into action as they prepare to enter their chosen occupation, be that in academia, law, or the field. Learning statistics is not a painless process, but the hardest classes are the ones with the greatest potential to leave lasting impressions. Students will meet obstacles, struggle with them, and ultimately
greatest potential to leave lasting impressions. Students will meet obstacles, struggle with them, and ultimately
surmount them so that in the end, they will look back and say that the challenge was worth it.


Acknowledgments

The third edition of this book came about with input and assistance from multiple people. With regard to the
development and preparation of this manuscript, I wish to thank Jessica Miller and the staff at SAGE for
their support and encouragement, as well as Alison Hope for her excellent copyediting assistance. You guys
are the best! I owe gratitude to my family and friends who graciously tolerate me when I am in “stats mode”
and a tad antisocial. Numerous reviewers supplied advice, recommendations, and critiques that helped shape
this book. Reviewers for the third edition are listed in alphabetical order here. Of course, any errors contained
in this text are mine alone.

Calli M. Cain, University of Nebraska at Omaha
Kyleigh Clark, University of Massachusetts, Lowell
Jane C. Daquin, Georgia State University
Courtney Feldscher, University of Massachusetts Boston
Albert M. Kopak, Western Carolina University
Bonny Mhlanga, Western Illinois University
Elias Nader, University of Massachusetts Lowell
Tyler J. Vaughan, Texas State University
Egbert Zavala, The University of Texas at El Paso

Reviewers for the second edition:

Jeb A. Booth, Salem State University
Ayana Conway, Virginia State University
Matthew D. Fetzer, Shippensburg University
Anthony W. Hoskin, University of Texas of the Permian Basin
Shelly A. McGrath, University of Alabama at Birmingham
Bonny Mhlanga, Western Illinois University
Carlos E. Posadas, New Mexico State University
Scott Senjo, Weber State University
Nicole L. Smolter, California State University, Los Angeles
Brian Stults, Florida State University
George Thomas, Albany State University


Jacinta M. Gau, Ph.D.,
is an associate professor in the Department of Criminal Justice at the University of Central Florida. She
received her Ph.D. from Washington State University. Her primary areas of research are policing and
criminal justice policy, and she has a strong quantitative background. Dr. Gau’s work has appeared in
journals such as Justice Quarterly, British Journal of Criminology, Criminal Justice and Behavior, Crime &
Delinquency, Criminology & Public Policy, Police Quarterly, Policing: An International Journal of Police
Strategies & Management, and the Journal of Criminal Justice Education. In addition to Statistics for
Criminology and Criminal Justice, she is author of Criminal Justice Policy: Origins and Effectiveness (Oxford
University Press) and coauthor of Key Ideas in Criminology and Criminal Justice (SAGE). Additionally,


Part I Descriptive Statistics

Chapter 1 Introduction to the Use of Statistics in Criminal Justice and Criminology
Chapter 2 Types of Variables and Levels of Measurement
Chapter 3 Organizing, Displaying, and Presenting Data
Chapter 4 Measures of Central Tendency
Chapter 5 Measures of Dispersion


Chapter 1 Introduction to the Use of Statistics in Criminal Justice
and Criminology


Learning Objectives
Explain how data collected using scientific methods are different from anecdotes and other nonscientific information.
List and describe the types of research in criminal justice and criminology.
Explain the difference between the research methods and statistical analysis.
Define samples and populations.
Describe probability sampling.
List and describe the three major statistics software packages.

You might be thinking, “What do statistics have to do with criminal justice or criminology?” It is reasonable
for you to question the requirement that you spend an entire term poring over a book about statistics instead
of one about policing, courts, corrections, or criminological theory. Many criminology and criminal justice
undergraduates wonder, “Why am I here?” In this context, the question is not so much existential as it is
practical. Luckily, the answer is equally practical.

You are “here” (in a statistics course) because the answer to the question of what statistics have to do with
criminal justice and criminology is “Everything!” Statistical methods are the backbone of criminal justice and
criminology as fields of scientific inquiry. Statistics enable the construction and expansion of knowledge about
criminality and the criminal justice system. Research that tests theories or examines criminal justice
phenomena and is published in academic journals and books is the basis for most of what we know about
criminal offending and the system that has been designed to deal with it. The majority of this research would
not be possible without statistics.

Statistics can be abstract, so this book uses two techniques to add a realistic, pragmatic dimension to the
subject. The first technique is the use of examples of statistics in criminal justice and criminology research.
These summaries are contained in the Research Example boxes embedded in each chapter. They are meant to
give you a glimpse into the types of questions that are asked in this field of research and the ways in which
specific statistical techniques are used to answer those questions. You will see firsthand how lively and diverse
criminal justice and criminology research is. Research Example 1.1 summarizes seven studies. Take a moment to read through them.

The second technique to add a realistic, pragmatic dimension to the subject of this book is the use of real data
from reputable and widely used sources such as the Bureau of Justice Statistics (BJS). The BJS is housed
within the U.S. Department of Justice and is responsible for gathering, maintaining, and analyzing data on
various criminal justice topics at the county, state, and national levels. Visit http://bjs.ojp.usdoj.gov/ to
familiarize yourself with the BJS. The purpose behind the use of real data is to give you the type of hands-on
experience that you cannot get from fictional numbers. You will come away from this book having worked
with some of the same data that criminal justice and criminology researchers use. Two sources of data that will
be used in upcoming chapters are the Uniform Crime Reports (UCR) and the National Crime Victimization
Survey (NCVS). See Data Sources 1.1 and 1.2 for information about these commonly used measures of
criminal incidents and victimization, respectively. All the data sets used in this book are publicly available and were downloaded from governmental websites and the archive maintained by the Inter-University
Consortium for Political and Social Research at www.icpsr.umich.edu.

Research Example 1.1 What Do Criminal Justice and Criminology Researchers Study?

Researchers in the field of criminology and criminal justice examine a wide variety of issues pertaining to the criminal justice system
and theories of offending. Included are topics such as prosecutorial charging decisions, racial and gender disparities in sentencing,
police use of force, drug and domestic violence courts, and recidivism. The following are examples of studies that have been
conducted and published. You can find the full text of each of these articles and of all those presented in the following chapters at
(www.sagepub.com/gau).

1. Can an anticrime strategy that has been effective at reducing certain types of violence also be used to combat open-air drug markets?
The “pulling levers” approach involves deterring repeat offenders from crime by targeting them for enhanced prosecution
while also encouraging them to change their behavior by offering them access to social services. This strategy has been shown
to hold promise with gang members and others at risk for committing violence. The Rockford (Illinois) Police Department
(RPD) decided to find out if they could use a pulling levers approach to tackle open-air drug markets and the crime problems
caused by these nuisance areas. After the RPD implemented the pulling levers intervention, Corsaro, Brunson, and
McGarrell (2013) used official crime data from before and after the intervention to determine whether this approach had
been effective. They found that although there was no reduction in violent crime, nonviolent crime (e.g., drug offenses,
vandalism, and disorderly conduct) declined noticeably after the intervention. This indicated that the RPD’s efforts had
worked, because drug and disorder offenses were exactly what the police were trying to reduce.

2. Are prisoners with low self-control at heightened risk of victimizing, or being victimized by, other inmates? Research has
consistently shown that low self-control is related to criminal offending. Some studies have also indicated that this trait is a
risk factor for victimization, in that people with low self-control might place themselves in dangerous situations. One of the
central tenets of this theory is that self-control is stable and acts in a uniform manner regardless of context. Kerley,
Hochstetler, and Copes (2009) tested this theory by examining whether the link between self-control and both offending and
victimization held true within the prison environment. Using data gathered from surveys of prison inmates, the researchers
discovered that low self-control was only slightly related to in-prison offending and victimization. This result could challenge
the assumption that low self-control operates uniformly in all contexts. To the contrary, something about prisoners
themselves, the prison environment, or the interaction between the two might change the dynamics of low self-control.

3. Does school racial composition affect how severely schools punish black and Latino students relative to white ones? Debates about the
so-called school-to-prison pipeline emphasize the long-term effects of school disciplinary actions such as suspension,
expulsion, and arrest or court referral. Youth who experience these negative outcomes are at elevated risk for dropping out of
school and getting involved in delinquency and, eventually, crime. Hughes, Warren, Stewart, Tomaskovic-Devey, and Mears
(2017) set out to discover whether schools’ and school boards’ racial composition affects the treatment of black, Latino, and
white students. The researchers drew from two theoretical perspectives: The racial threat perspective argues that minorities
are at higher risk for punitive sanctions when minority populations are higher, because whites could perceive minority groups
as a threat to their place in society. On the other hand, the intergroup contact perspective suggests that racial and ethnic
diversity reduces the harshness of sanctions for minorities, because having contact with members of other racial and ethnic
groups diminishes prejudice. Hughes and colleagues used data from the Florida Department of Education, the U.S. Census
Bureau, and the Uniform Crime Reports. Statistical results provided support for both perspectives. Increases in the size of
the black and Hispanic student populations led to higher rates of suspension for students of these groups. On the other hand,
intergroup contact among school board members of different races reduced suspensions for all students. The researchers
concluded that interracial contact among school board members equalized disciplinary practices and reduced discriminatory
disciplinary practices.

4. What factors influence police agencies’ ability to identify and investigate human trafficking? Human trafficking has been
recognized as a transnational crisis. Frequently, local police are the first ones who encounter victims or notice signs
suggesting the presence of trafficking. In the United States, however, many local police agencies do not devote systematic
attention to methods that would enable them to detect and investigate suspected traffickers. Farrell (2014) sought to learn
more about U.S. police agencies’ antitrafficking efforts. Using data from two national surveys of medium-to-large municipal
police departments, Farrell found that 40% of departments trained their personnel on human trafficking, 17% had written
policies pertaining to this crime, and 13% dedicated personnel to it. Twenty-eight percent had investigated at least one trafficking incident in the previous six years. Larger departments were more likely to have formalized responses (training,
policies, and dedicated personnel), and departments that instituted these responses were more likely to have engaged in
trafficking investigations. These results show a need for departments to continue improving their antitrafficking efforts.
Departments that are more responsive to local problems and more open to change will be more effective at combating this
crime.

5. How safe and effective are conducted energy devices as used by police officers? Conducted energy devices (CEDs) have proliferated
in recent years. Their widespread use and the occasional high-profile instances of misuse have generated controversy over
whether these devices are safe for suspects and officers alike. Paoline, Terrill, and Ingram (2012) collected use-of-force data
from six police agencies nationwide and attempted to determine whether officers who deployed CEDs against suspects were
more or less likely to sustain injuries themselves. The authors’ statistical analysis suggested a lower probability of officer
injury when only CEDs were used. When CEDs were used in combination with other forms of force, however, the
probability of officer injury increased. The results suggest that CEDs can enhance officer safety, but they are not a panacea
that uniformly protects officers in all situations.

6. How prevalent is victim precipitation in intimate partner violence? A substantial number of violent crimes are initiated by the
person who ultimately becomes the victim in an incident. Muftić, Bouffard, and Bouffard (2007) explored the role of victim
precipitation in instances of intimate partner violence (IPV). They gleaned data from IPV arrest reports and found that
victim precipitation was present in cases of both male and female arrestees but that it was slightly more common in instances
where the woman was the one arrested. This suggests that some women (and, indeed, some men) arrested for IPV might be
responding to violence initiated by their partners rather than themselves being the original aggressors. The researchers also
discovered that victim precipitation was a large driving force behind dual arrests (cases in which both parties are arrested),
because police either could see clearly that both parties were at fault or were unable to determine which party
was the primary aggressor. Victim precipitation and the use of dual arrests, then, could be contributing factors behind the
recent rise in the number of women arrested for IPV against male partners.

7. What are the risk factors in a confrontational arrest that are most commonly associated with the death of the suspect? There have been
several high-profile instances of suspects dying during physical confrontations with police wherein the officers deployed
CEDs against these suspects. White and colleagues (2013) collected data on arrest-related deaths (ARDs) that involved
CEDs and gained media attention. The researchers triangulated the data using information from medical-examiner reports.
They found that in ARDs, suspects were often intoxicated and extremely physically combative with police. Officers, for their
part, had used several other types of force before or after trying to solve the situation using CEDs. Medical examiners most
frequently attributed these deaths to drugs, heart problems, and excited delirium. These results suggest that police
departments should craft policies to guide officers’ use of CEDs against suspects who are physically and mentally
incapacitated.

In this book, emphasis is placed on both the production and interpretation of statistics. Every statistical
analysis has a producer (someone who runs the analysis) and a consumer (someone to whom an analysis is
being presented). Regardless of which role you play in any given situation, it is vital for you to be sufficiently
versed in quantitative methods that you can identify the proper statistical technique and correctly interpret the
results. When you are in the consumer role, you must also be ready to question the methods used by the
producer so that you can determine for yourself how trustworthy the results are. Critical thinking skills are an
enormous component of statistics. You are not a blank slate standing idly by, waiting to be written on—you
are an active agent in your acquisition of knowledge about criminal justice, criminology, and the world in general.

Data Sources 1.1 The Uniform Crime Reports

The Federal Bureau of Investigation (FBI) collects annual data on crimes reported to police agencies nationwide and maintains the
UCR. Crimes are sorted into eight index offenses: homicide, rape, robbery, aggravated assault, burglary, larceny-theft, motor vehicle
theft, and arson. An important aspect of this data set is that it includes only those crimes that come to the attention of police—
crimes that are not reported or otherwise detected by police are not counted. The UCR also conforms to the hierarchy rule, which mandates that in multiple-crime incidents only the most serious offense ends up in the UCR. If, for example, someone breaks into a
residence with intent to commit a crime inside the dwelling and while there, kills the homeowner and then sets fire to the structure
to hide the crime, he has committed burglary, murder, and arson. Because of the hierarchy rule, though, only the murder would be
reported to the FBI—it would be as if the burglary and arson had never occurred. Because of underreporting by victims and the
hierarchy rule, the UCR undercounts the amount of crime in the United States. It nonetheless offers valuable information and is
widely used. You can explore this data source at www.fbi.gov/about-us/cjis/ucr/ucr.
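The logic of the hierarchy rule can be sketched in a few lines of code. The following is a hypothetical Python illustration with a simplified severity ordering; it is not the FBI's actual system, and the `ucr_reported_offense` function is invented for this example.

```python
# Simplified ranking of UCR index offenses, most serious first.
# Both this ordering and the function below are illustrative only.
HIERARCHY = [
    "homicide", "rape", "robbery", "aggravated assault",
    "burglary", "larceny-theft", "motor vehicle theft", "arson",
]
RANK = {offense: i for i, offense in enumerate(HIERARCHY)}

def ucr_reported_offense(incident_offenses):
    """Apply the hierarchy rule: of all index offenses committed in
    a single incident, only the most serious one is reported."""
    return min(incident_offenses, key=lambda offense: RANK[offense])

# The text's example: a burglary, murder, and arson in one incident
# would appear in the UCR as a single homicide.
print(ucr_reported_offense(["burglary", "homicide", "arson"]))  # homicide
```

Under this rule, the burglary and arson vanish from the official count, which is one reason the UCR undercounts crime.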

Data Sources 1.2 The National Crime Victimization Survey

The U.S. Census Bureau conducts the periodic NCVS under the auspices of the BJS to estimate the number of criminal incidents
that transpire each year and to collect information about crime victims. Multistage cluster sampling is used to select a random sample
of households, and each member of that household who is 12 years or older is asked to participate in an interview. Those who agree
to be interviewed are asked over the phone or in person about any and all criminal victimizations that transpired in the 6 months
prior to the interview. The survey employs a rotating panel design, so respondents are called at 6-month intervals for a total of 3
years, and then new respondents are selected (BJS, 2006). The benefit of the NCVS over the UCR is that NCVS respondents might
disclose victimizations to interviewers that they did not report to police, thus making the NCVS a better estimation of the total
volume of crime in the United States. The NCVS, though, suffers from the weakness of being based entirely on victims’ memory
and honesty about the timing and circumstances surrounding criminal incidents. The NCVS also excludes children younger than 12
years, institutionalized populations (e.g., persons in prisons, nursing homes, and hospitals), and the homeless. Despite these
problems, the NCVS is useful because it facilitates research into the characteristics of crime victims. The 2015 wave of the NCVS is available for public download.

Science: Basic Terms and Concepts

There are a few terms and concepts that you must know before you get into the substance of the book.
Statistics are a tool in the larger enterprise of scientific inquiry. Science is the process of systematically
collecting reliable information and developing knowledge using techniques and procedures that are accepted
by other scientists in a discipline. Science is grounded in methods—research results are trustworthy only when
the procedures used to arrive at them are considered correct by experts in the scientific community.
Nonscientific information is that which is collected informally or without regard for correct methods.
Anecdotes are a form of nonscientific information. If you ask one person why he or she committed a crime,
that person’s response will be an anecdote; it cannot be assumed to be broadly true of other offenders. If you
use scientific methods to gather a large group of offenders and you survey all of them about their motivations,
you will have data that you can analyze using statistics and that can be used to draw general conclusions.

Science: The process of gathering and analyzing data in a systematic and controlled way using procedures that are generally accepted
by others in the discipline.

Methods: The procedures used to gather and analyze scientific data.

In scientific research, samples are drawn from populations using scientific techniques designed to ensure that
samples are representative of populations. For instance, if the population is 50% male, then the sample should
also be approximately 50% male. A sample that is only 15% male is not representative of the population.
Research-methods courses instruct students on the proper ways to gather representative samples. In a statistics
course, the focus is on techniques used to analyze the data to look for patterns and test for relationships.
Together, proper methods of gathering and analyzing data form the groundwork for scientific inquiry. If there
is a flaw in either the gathering or the analyzing of data, then the results might not be trustworthy. Garbage
in, garbage out (GIGO) is the mantra of statistics. Data gathered with the best of methods can be rendered
worthless if the wrong statistical analysis is applied to them; likewise, the most sophisticated, cutting-edge
statistical technique cannot salvage improperly collected data. When the data or the statistics are defective, the
results are likewise deficient and cannot be trusted. Studies using unscientific data or flawed statistical analyses
do not contribute to theory and research or to policy and practice because their findings are unreliable and
could be erroneous.

Sample: A subset pulled from a population with the goal of ultimately using the people, objects, or places in the sample as a way to
generalize to the population.

Population: The universe of people, objects, or locations that researchers wish to study. These groups are often very large.
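The link between probability sampling and representativeness can be demonstrated with a short simulation. This is a hypothetical Python sketch, not part of the book's hand-computation approach, and the specific numbers are invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Invented population of 10,000 people, exactly 50% male
population = ["male"] * 5_000 + ["female"] * 5_000

# Simple random sampling: every person has a known, equal chance
# of selection, so the sample tends to mirror the population.
sample = random.sample(population, 500)

pct_male = 100 * sample.count("male") / len(sample)
print(f"Sample is {pct_male:.1f}% male")  # typically close to 50%
```

A sample that came back 15% male, as in the example above, would signal that something other than chance, such as a flawed sampling frame, was at work.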

Learning Check 1.1

Identify whether each of the following is a sample or a population.

1. A group of 100 police officers pulled from a department with 300 total officers
2. Fifty prisons selected at random from all prisons nationwide
3. All persons residing in the state of Wisconsin
4. A selection of 10% of the defendants processed through a local criminal court in 1 year

Everybody who conducts a study has an obligation to be clear and open about the methods they used. You
should expect detailed reports on the procedures used so that you can evaluate whether they followed proper
scientific methods. When the methods used to collect and analyze data are sound, it is not appropriate to
question scientific results on the basis of a moral, emotional, or opinionated objection to them. On the other
hand, it is entirely correct (and is necessary, in fact) to question results when methodological or statistical
procedures are shoddy or inadequate. Remember GIGO!

A key aspect of science is the importance of replication. No single study ever proves something definitively;
quite to the contrary, much testing must be done before firm conclusions can be drawn. Replication is
important because there are times when a study is flawed and needs to be redone or when the original study is
methodologically sound but needs to be tested on new populations and samples. For example, a correctional
treatment program that reduces recidivism rates among adults might or might not have similar positive results
with juveniles. Replicating the treatment and evaluation with a sample of juvenile offenders would provide
information about whether the program is helpful to both adults and juveniles or is only appropriate for
adults. The scientific method’s requirement that all researchers divulge the steps they took to gather and
analyze data allows other researchers and members of the public to examine those steps and, when warranted,
to undertake replications.

Replication: The repetition of a particular study that is conducted for purposes of determining whether the original study’s results
hold when new samples or measures are employed.

Types of Scientific Research in Criminal Justice and Criminology

Criminal justice and criminology research is diverse in nature and purpose. Much of it involves theory testing.
Theories are proposed explanations for certain events. Hypotheses are small “pieces” of theories that must be
true in order for the entire theory to hold up. You can think of a theory as a chain and hypotheses as the links
forming that chain. Research Example 1.1 discusses a test of the general theory of crime conducted by Kerley
et al. (2009). The general theory holds that low self-control is a static predictor of offending and
victimization, regardless of context. From this proposition, the researchers deduced the hypothesis that the
relationship between low self-control and both offending and victimization must hold true in the prison
environment. Their results showed an overall lack of support for the hypothesis that low self-control operates
uniformly in all contexts, thus calling that aspect of the general theory of crime into question. This is an
example of a study designed to test a theory.

Theory: A set of proposed and testable explanations about reality that are bound together by logic and evidence.

Hypothesis: A single proposition, deduced from a theory, that must hold true in order for the theory itself to be considered valid.

Evaluation research is also common in criminal justice and criminology. In Research Example 1.1, the article
by Corsaro et al. (2013) is an example of evaluation research. This type of study is undertaken when a new
policy, program, or intervention is put into place and researchers want to know whether the intervention
accomplished its intended purpose. In this study, the RPD implemented a pulling levers approach to combat
drug and nuisance offending. After the program had been put into place, the researchers analyzed crime data
to find out whether the approach was effective.

Evaluation research: Studies intended to assess the results of programs or interventions for purposes of discovering whether those
programs or interventions appear to be effective.

Exploratory research occurs when there is limited knowledge about a certain phenomenon; researchers
essentially embark into unfamiliar territory when they attempt to study this social event. The study by Muftić
et al. (2007) in Research Example 1.1 was exploratory in nature because so little is known about victim
precipitation, particularly in the realm of IPV. It is often dangerous to venture into new areas of study when
the theoretical guidance is spotty; however, exploratory studies have the potential to open new areas of
research that have been neglected but that provide rich information that expands the overall body of
knowledge.

Exploratory research: Studies that address issues that have not been examined much or at all in prior research and that therefore
might lack firm theoretical and empirical grounding.

Finally, some research is descriptive in nature. White et al.’s (2013) analysis of CED-involved deaths
illustrates a descriptive study. White and colleagues did not set out to test a theory or to explore a new area of
research—they merely offered basic descriptive information about the suspects, officers, and situations
involved in instances where CED use was associated with a suspect’s death. In descriptive research, no generalizations are made to larger groups; the conclusions drawn from these studies are specific to the objects, events, or people being analyzed. This type of research can be very informative when knowledge about a
particular phenomenon is scant.

Descriptive research: Studies done solely for the purpose of describing a particular phenomenon as it occurs in a sample.

With the exception of purely descriptive research, the ultimate goal in most statistical analyses is to generalize
from a sample to a population. A population is the entire set of people, places, or objects that a researcher
wishes to study. Populations, though, are usually very large. Consider, for instance, a researcher trying to
estimate attitudes about capital punishment in the general U.S. population. That is a population of more than
300 million! It would be impossible to measure everyone directly. Researchers thus draw samples from
populations and study the samples instead. Probability sampling helps ensure that a sample mirrors the
population from which it was drawn (e.g., a sample of people should contain a breakdown of race, gender, and
age similar to that found in the population). Samples are smaller than populations, and researchers are
therefore able to measure and analyze them. The results found in the sample are then generalized to the
population.

Probability sampling: A sampling technique in which all people, objects, or areas in a population have a known chance of being
selected into the sample.

Learning Check 1.2

For each of the following scenarios, identify the type of research being conducted.

1. A researcher wants to know more about female serial killers. He gathers news articles that report on female serial killers and
records information about each killer’s life history and the type of victim she preyed on.

2. A researcher wants to know whether a new in-prison treatment program is effective at reducing recidivism. She collects a sample
of inmates that participated in the program and a sample that did not go through the program. She then gathers recidivism data
for each group to see if those who participated had lower recidivism rates than those who did not.

3. The theory of collective efficacy predicts that social ties between neighbors, coupled with neighbors’ willingness to intervene when
a disorderly or criminal event occurs in the area, protect the area from violent crime. A researcher gathers a sample of
neighborhoods and records the level of collective efficacy and violent crime in each one to determine whether those with higher
collective efficacy have lower crime rates.

4. A researcher notes that relatively little research has been conducted on the possible effects of military service on later crime
commission. She collects a sample of people who served in the military and a sample of people that did not and compares them to
determine whether the military group differs from the nonmilitary group in terms of the numbers or types of crimes committed.

Software Packages for Statistical Analysis

Hand computations are the foundation of this book because seeing the numbers and working with the
formulas facilitates an understanding of statistical analyses. In the real world, however, statistical analysis is
generally conducted using a software program. Microsoft Excel contains some rudimentary statistical
functions and is commonly used in situations requiring only basic descriptive analyses; however, this program’s
usefulness is exhausted quickly because researchers usually want far more than descriptives. Many statistical
packages are available. The most common in criminal justice and criminology research are SPSS, Stata, and
SAS. Each of these packages has strengths and weaknesses. Simplicity and ease of use make SPSS a good
place to start for people new to statistical analysis. Stata is a powerful program excellent for regression
modeling. The SAS package is the best one for extremely large data sets.

This book incorporates SPSS into each chapter. This allows you to get a sense for what data look like when
displayed in their raw format and permits you to run particular analyses and read and interpret program
output. Where relevant, the chapters offer SPSS practice problems and accompanying data sets that are available for download at www.sagepub.com/gau. Working through these exercises will show you firsthand the ways that criminal justice and criminology researchers use statistics.

Organization of the Book

This book is divided into three parts. Part I (Chapters 1 through 5) covers descriptive statistics. Chapter 2
provides a basic overview of types of variables and levels of measurement. Some of this material will be review
for students who have taken a methods course. Chapter 3 delves into charts and graphs as means of
graphically displaying data. Measures of central tendency are the topic of Chapter 4. These are descriptive
statistics that let you get a feel for where the data are clustered. Chapter 5 discusses measures of dispersion.
Measures of dispersion complement measures of central tendency by offering information about whether the
data tend to cluster tightly around the center or, conversely, whether they are very spread out.

Part II (Chapters 6 through 8) describes the theoretical basis for statistics in criminal justice and criminology:
probability and probability distributions. Part I of the book can be thought of as the nuts-and-bolts of the
mathematical concepts used in statistics, and Part II can be seen as the theory behind the math. Chapter 6
introduces probability theory. Binomial and continuous probability distributions are discussed. In Chapter 7,
you will learn about population, sample, and sampling distributions. Chapter 8 provides the book’s first
introduction to inferential statistics with its coverage of point estimates and confidence intervals. The
introduction of inferential statistics at this juncture is designed to help ease you into Part III.

Part III (Chapters 9 through 14) of the book merges the concepts learned in Parts I and II to form the
discussion on inferential hypothesis testing. Chapter 9 offers a conceptual introduction to this framework,
including a description of the five steps of hypothesis testing that will be used in every subsequent chapter. In
Chapter 10, you will encounter your first bivariate statistical technique: chi-square. Chapter 11 describes two-
population t tests and tests for differences between proportions. Chapter 12 covers analysis of variance, which
is an extension of the two-population t test. In Chapter 13, you will learn about correlations. Finally, Chapter
14 wraps up the book with an introduction to bivariate and multiple regression.

The prerequisite that is indispensable to success in this course is a solid background in algebra. You absolutely
must be comfortable with basic techniques such as adding, subtracting, multiplying, and dividing. You also
need to understand the difference between positive and negative numbers. You will be required to plug
numbers into equations and solve those equations. You should not have a problem with this as long as you
remember the lessons you learned in your high school and college algebra courses. Appendix A offers an
overview of the basic mathematical techniques you will need to know, so look those over and make sure that
you are ready to take this course. If necessary, use them to brush up on your skills.

Statistics are cumulative in that many of the concepts you learn at the beginning form the building blocks for
more-complex techniques that you will learn about as the course progresses. Means, proportions, and standard
deviations, for instance, are concepts you will learn about in Part I, but they will remain relevant throughout
the remainder of the book. You must, therefore, learn these fundamental calculations well and you must
remember them.

Repetition is the key to learning statistics. Practice, practice, practice! There is no substitute for doing and redoing the end-of-chapter review problems and any other problems your instructor might provide. You can
also use the in-text examples as problems if you just copy down the numbers and do the calculations on your
own without looking at the book. Remember, even the most advanced statisticians started off knowing
nothing about statistics. Everyone has to go through the learning process. You will complete this process
successfully as long as you have basic algebra skills and are willing to put in the time and effort it takes to
succeed.

Thinking Critically

1. Media outlets and other agencies frequently conduct opinion polls to try to capture information about the public’s thoughts
on contemporary events, controversies, or political candidates. Poll data are faster and easier to collect than survey data are,
because they do not require adherence to scientific sampling methods and questionnaire design. Agencies conducting polls
often do not have the time or resources to engage in full-scale survey projects. Debate the merits of poll data from a policy
standpoint. Is having low-quality information better than having none at all? Or is there no place in public discussions for information that was not gathered scientifically?

2. Suppose you tell a friend that you are taking a statistics course, and your friend reacts with surprise that a criminology or
criminal justice degree program would require students to take this class. Your friend argues that although it is necessary for
people whose careers are dedicated to research to have a good understanding of statistics, this area of knowledge is not useful
for people with practitioner jobs, such as police and corrections officers. Construct a response to this assertion. Identify ways
in which people in practical settings benefit from possessing an understanding of statistical concepts and techniques.

Review Problems

1. Define science and explain the role of methods in the production of scientific knowledge.
2. What is a population? Why are researchers usually unable to study populations directly?
3. What is a sample? Why do researchers draw samples?
4. Explain the role of replication in science.
5. List and briefly describe the different types of research in criminal justice and criminology.
6. Identify three theories that you have encountered in your criminal justice or criminology classes. For each one, write one hypothesis for which you could collect data in order to test that hypothesis.
7. Identify three programs or policies that you have encountered in your criminal justice or criminology classes. For each one, suggest a possible way to evaluate that program’s or policy’s effectiveness.
8. If a researcher were conducting a study on a topic about which very little is known and the researcher does not have theory or prior evidence to make predictions about what she will find in her study, what kind of research would she be doing? Explain your answer.

9. If a researcher were solely interested in finding out more about a particular phenomenon and focused entirely on a sample
without trying to make inference to a population, what kind of research would he be doing? Explain your answer.

10. What does GIGO stand for? What does this cautionary concept mean in the context of statistical analyses?

33

Key Terms

Science 8
Methods 8
Sample 8
Population 9
Replication 9
Theory 10
Hypothesis 10
Evaluation research 10
Exploratory research 10
Descriptive research 10
Probability sampling 11

34

Chapter 2 Types of Variables and Levels of Measurement

35

Learning Objectives
Define variables and constants.
Define unit of analysis and be able to identify the unit of analysis in any given study.
Define independent and dependent variables and be able to identify each in a study.
Explain the difference between empirical associations and causation.
List and describe the four levels of measurement, including similarities and differences between them, and be able to identify the
level of measurement of different variables.

The first thing you must be familiar with in statistics is the concept of a variable. A variable is, quite simply,
something that varies. It is a coding scheme used to measure a particular characteristic of interest. For
instance, asking all of your statistics classmates, “How many classes are you taking this term?” would yield
many different answers. This would be a variable. Variables sit in contrast to constants, which are
characteristics that assume only one value in a sample. It would be pointless for you to ask all your classmates
whether they are taking statistics this term because of course the answer they would all provide is “yes.”

Variable: A characteristic that describes people, objects, or places and takes on multiple values in a sample or population.

Constant: A characteristic that describes people, objects, or places and takes on only one value in a sample or population.

36

Units of Analysis

It seems rather self-evident, but nonetheless bears explicit mention, that every scientific study contains
something that the researcher conducting the study gathers and examines. These “somethings” can be objects
or entities such as rocks, people, molecules, or prisons. This is called the unit of analysis, and it is, essentially,
whatever the sample under study consists of. In criminal justice and criminology research, individual people
are often the units of analysis. These individuals might be probationers, police officers, criminal defendants, or
judges. Prisons, police departments, criminal incidents, or court records can also be units of analysis. Larger
units are also popular; for example, many studies focus on census tracts, block groups, cities, states, or even
countries. Research Example 2.2 describes the methodological setup of a selection of criminal justice studies,
each of which employed a different unit of analysis.

Unit of analysis: The object or target of a research study.

37

Independent Variables and Dependent Variables

Researchers in criminal justice and criminology typically seek to examine relationships between two or more
variables. Observed or empirical phenomena give rise to questions about the underlying forces driving them.
Take homicide as an example. Homicide events and city-level rates are empirical phenomena. It is worthy of
note that Washington, D.C., has a higher homicide rate than Portland, Oregon. Researchers usually want to
do more than merely note empirical findings, however—they want to know why things are the way they are.
They might, then, attempt to identify the criminogenic (crime-producing) factors that are present in
Washington but absent in Portland or, conversely, the protective factors possessed by Portland and lacked by
Washington.

Empirical: Having the qualities of being measurable, observable, or tangible. Empirical phenomena are detectable with senses such as
sight, hearing, or touch.

Research Example 2.1 Choosing Variables for a Study on Police Use of Conducted Energy Devices

Conducted energy devices (CEDs) such as the Taser have garnered national—indeed, international—attention in the past few years.
Police practitioners contend that CEDs are invaluable tools that minimize injuries to both officers and suspects during contentious
confrontations, whereas critics argue that police sometimes use CEDs in situations where such a high level of force is not warranted.
Do police seem to be using CEDs appropriately? Gau, Mosher, and Pratt (2010) addressed this question. They sought to determine
whether suspects’ race or ethnicity influenced the likelihood that police officers would deploy or threaten to deploy CEDs against
those suspects. In an analysis of this sort, it is important to account for other variables that might be related to police use of CEDs or
other types of force; therefore, the researchers included suspects’ age, sex, and resistance level. They also measured officers’ age, sex,
and race. Finally, they included a variable indicating whether it was light or dark outside at the time of the encounter. The
researchers found that police use of CEDs was driven primarily by the type and intensity of suspect resistance but that even
controlling for resistance, Latino suspects faced an elevated probability of having CEDs either drawn or deployed against them.

Research Example 2.2 Units of Analysis

Each of the following studies used a different unit of analysis.

1. Do prison inmates incarcerated in facilities far from their homes commit more misconduct than those housed in facilities closer to home?
Lindsey, Mears, Cochran, Bales, and Stults (2017) used data from the Florida Department of Corrections to find out
whether distally placed inmates (i.e., those sent to facilities far from their homes) engaged in more in-prison misbehavior,
and, if so, whether this effect was particularly pronounced for younger inmates. Individual prisoners were the units of analysis
in this study. The findings revealed a curvilinear relationship between distance and misconduct: Prisoners’ misconduct
increased along with distance up to approximately 350 miles, but then the relationship inverted such that further increases in
distance were associated with less misconduct. As predicted, this pattern was strongest among younger inmates. Visitation
helped offset the negative impact of distance but did not eliminate it. The researchers concluded that family visitation might
have mixed effects on inmates. Inmates might be less inclined to commit misconduct if they fear losing visitation privileges,
but receiving visits might induce embarrassment and shame when their family sees them confined in the prison environment.
This strain, in turn, could prompt them to act out. Those who do not see their families much or at all do not experience this
unpleasant emotional reaction.

2. Is the individual choice to keep a firearm in the home affected by local levels of crime and police strength? Kleck and Kovandzic
(2009), using individual-level data from the General Social Survey (GSS) and city-level data from the FBI, set out to
determine whether city-level homicide rates and the number of police per 100,000 city residents affected GSS respondents’
likelihood of owning a firearm. There were two units of analysis in this study: individuals and cities. The statistical models
indicated that high homicide rates and low police levels both modestly increased the likelihood that a given person would
own a handgun; however, the relationship between city homicide rate and individual gun ownership decreased markedly

38

when the authors controlled for whites’ and other nonblacks’ racist attitudes toward African Americans. It thus appeared that
the homicide–gun ownership relationship was explained in part by the fact that those who harbored racist sentiments against
blacks were more likely to own firearms regardless of the local homicide rate.

3. How consistent are use-of-force policies across police agencies? The U.S. Supreme Court case Graham v. Connor (1989) requires
that police officers use only the amount of force necessary to subdue a resistant suspect; force exceeding that minimum is
considered excessive. The Court left it up to police agencies to establish force policies to guide officers’ use of physical
coercion. Terrill and Paoline (2013) sought to determine what these policies look like and how consistent they are across
agencies. The researchers mailed surveys to a sample of 1,083 municipal police departments and county sheriffs’ offices
nationwide, making the agency the unit of analysis. Results showed that 80% of agencies used a force continuum as part of
their written use-of-force policies, suggesting some predictability in the way in which agencies organize their policies.
However, there was substantial variation in policy restrictiveness and the placement of different techniques and weapons.
Most agencies placed officer presence and verbal commands at the lowest end, and deadly force at the highest, but between
those extremes there was variability in the placement of soft and hard hand tactics, chemical sprays, impact weapons, CEDs,
and other methods commonly used to subdue noncompliant suspects. These findings show how localized force policies are,
and how inconsistent they are across agencies.

4. Does gentrification reduce gang homicide? Gentrification is the process by which distressed inner-city areas are transformed by
an influx of new businesses or higher-income residents. Gentrification advocates argue that the economic boost will revitalize
the area, provide new opportunities, and reduce crime. Is this assertion true? Smith (2014) collected data from 1994 to 2005
on all 342 neighborhoods in Chicago with the intention of determining whether gentrification over time reduces gang-
motivated homicide. Smith measured gentrification in three ways: Recent increases in neighborhood residents’
socioeconomic statuses, increases in coffee shops, and demolition of public housing. The author predicted that the first two
would suppress gang homicide and that the last one would increase it; even though public-housing demolition is supposed to
reduce crime, it can also create turmoil, residential displacement, and conflict among former public-housing residents and
residents of surrounding properties. Smith found support for all three hypotheses. Socioeconomic-status increases were
strongly related to reductions in gang-motivated homicides, coffee-shop presence was weakly related to reductions, and
public-housing demolition was robustly associated with increases. These results suggest that certain forms of gentrification
might be beneficial to troubled inner-city neighborhoods but that demolishing public housing might cause more problems
than it solves, at least in the short term.

Researchers undertaking quantitative studies must specify dependent variables (DVs) and independent
variables (IVs). Dependent variables are the empirical events that a researcher is attempting to explain.
Homicide rates, property crime rates, recidivism among recently released prisoners, and judicial sentencing
decisions are examples of DVs. Researchers seek to identify variables that help predict or explain these events.
Independent variables are factors a researcher believes might affect the DV. It might be predicted, for
instance, that prisoners released into economically and socially distressed neighborhoods and given little
support during the reentry process will recidivate more frequently than those who receive transitional housing
and employment assistance. Different variables—crime rates, for instance—can be used as both IVs and DVs
across different studies. The designation of a certain phenomenon as an IV or a DV depends on the nature of
the research study.

Dependent variable: The phenomenon that a researcher wishes to study, explain, or predict.

Independent variable: A factor or characteristic that is used to try to explain or predict a dependent variable.

39

Relationships Between Variables: A Cautionary Note

It is vital to understand that independent and dependent are not synonymous with cause and effect, respectively.
A particular IV might be related to a certain DV, but this is far from definitive proof that the former is the
cause of the latter. To establish causality, researchers must demonstrate that their studies meet three criteria.
First is temporal ordering, meaning that the IV must occur prior to the DV. It would be illogical, for
instance, to predict that adolescents’ participation in delinquency will impact their gender; conversely, it does
make sense to predict that adolescents’ gender affects the likelihood they will commit delinquent acts. The
second causality requirement is that there be an empirical relationship between the IV and the DV. This is a
basic necessity—it does not make sense to try to delve into the nuances of a nonexistent connection between
two variables. For example, if a researcher predicts that people living in high-crime areas are more likely to
own handguns for self-protection, but then finds no relationship between neighborhood-level crime rates and
handgun ownership, the study cannot proceed.

Temporal ordering: The causality requirement holding that an independent variable must precede a dependent variable.

Empirical relationship: The causality requirement holding that the independent and dependent variables possess an observed
relationship with one another.

The last requirement is that the relationship between the IV and the DV be nonspurious. This third criterion
is frequently the hardest to overcome in criminology and criminal justice research (indeed, all social sciences)
because human behavior is complicated, and each action a person engages in has multiple causes.
Disentangling these causal factors can be difficult or impossible.

Nonspuriousness: The causality requirement holding that the relationship between the independent variable and dependent variable
not be the product of a third variable that has been erroneously omitted from the analysis.

The reason spuriousness is a problem is that there could be a third variable that explains the DV as well as, or
even better than, the IV does. This third variable might partially or fully account for the relationship between
the IV and DV. The inadvertent exclusion of one or more important variables can result in erroneous
conclusions because the researcher might mistakenly believe that the IV strongly predicts the DV when, in
fact, the relationship is actually partially or entirely due to intervening factors. Another term for this problem
is omitted variable bias. When omitted variable bias (i.e., spuriousness) is present in an IV–DV relationship
but erroneously goes unrecognized, people can reach the wrong conclusion about a phenomenon. Research
Example 2.3 offers examples of the problem of omitted variables.

Omitted variable bias: An error that occurs as a result of unrecognized spuriousness and a failure to include important third variables
in an analysis, leading to incorrect conclusions about the relationship between the independent and dependent variables.

A final caution with respect to causality is that statistical analyses are examinations of aggregate trends.
Uncovering an association between an IV and a DV means only that the presence of the IV has the tendency to
be related to either an increase or a reduction in the DV in the sample as a whole—it is not an indication that

40

the IV–DV link inevitably holds true for every single person or object in the sample. For example, victims of

early childhood trauma are more likely than nonvictims to develop substance abuse disorders later in life (see
Dass-Brailsford & Myrick, 2010). Does this mean that every person who was victimized as a child has
substance abuse problems as an adult? Certainly not! Many people who suffer childhood abuse do not become
addicted to alcohol or other drugs. Early trauma is a risk factor that elevates the risk of substance abuse, but it
is not a guarantee of this outcome. Associations present in a large group are not uniformly true of all members
of that group.

Research Example 2.3 The Problem of Omitted Variables

In the 1980s and 1990s a media and political frenzy propelled the “crack baby” panic to the top of the national conversation. The
allegations were that “crack mothers” were abusing the drug while pregnant and were doing irreparable damage to their unborn
children. Stories of low-birth-weight, neurologically impaired newborns abounded. What often got overlooked, though, was the fact
that women who use crack cocaine while pregnant are also likely to use drugs such as tobacco and alcohol, which are known to harm
fetuses. These women also frequently have little or no access to prenatal nutrition and medical care. Finally, if a woman abuses crack
—or any other drug—while pregnant, she could also be at risk for mistreating her child after its birth (see Logan, 1999, for a
review). She might be socially isolated, as well, and have no support from her partner or family. There are many factors that affect
fetal and neonatal development, some under mothers’ control and some not; trying to tie children’s outcomes definitively to a single
drug consumed during mothers’ pregnancies is inherently problematic.

In the 1980s, policymakers and the public became increasingly concerned about domestic violence. This type of violence had
historically been treated as a private affair, and police tended to take a hands-off approach that left victims stranded and vulnerable.
The widely publicized results of the Minneapolis Domestic Violence Experiment suggested that arrest effectively deterred abusers,
leading to lower rates of recidivism. Even though the study’s authors said that more research was needed, states scrambled to enact
mandatory arrest laws requiring officers to make arrests in all substantiated cases of domestic violence. Subsequent experiments and
more detailed analyses of the Minneapolis data, however, called the effectiveness of arrest into question. It turns out that arrest has
no effect on some offenders and even increases recidivism among certain groups. Offenders’ employment status, in particular,
emerged as an important predictor of whether arrest deterred future offending. Additionally, the initial reduction in violence
following arrest frequently wore off over time, putting victims back at risk. Pervasive problems collecting valid, reliable data also
hampered researchers’ ability to reach trustworthy conclusions about the true impact of arrest (see Schmidt & Sherman, 1993, for a
review). The causes of domestic violence are numerous and varied, so it is unwise to assume that arrest will be uniformly effective.

In sum, you should always be cautious when interpreting IV–DV relationships. It is better to think of IVs as
predictors and DVs as outcomes rather than to view them as causes and effects. As the adage goes, correlation
does not mean causation. Variables of all kinds are related to each other, but it is important not to leap
carelessly to causal conclusions on the basis of statistical associations.

41

Levels of Measurement

Every variable possesses a level of measurement. Levels of measurement are ways of classifying or describing
variable types. There are two overarching classes of variables: categorical (also sometimes called qualitative)
and continuous (also sometimes referred to as quantitative). Categorical variables comprise groups or
classifications that are represented with labels, whereas continuous variables are made of numbers that
measure how much of a particular characteristic a person or object possesses. Each of these variable types
contains two subtypes. This two-tiered classification system is diagrammed in Figure 2.1 and discussed in the
following sections.

Level of measurement: A variable’s specific type or classification. There are four types: nominal, ordinal, interval, and ratio.

Categorical variable: A variable that classifies people or objects into groups. There are two types: nominal and ordinal.

Continuous variable: A variable that numerically measures the presence of a particular characteristic. There are two types: interval
and ratio.

42

The Categorical Level of Measurement: Nominal and Ordinal Variables

Categorical variables are made up of categories. They represent ways of divvying up people and objects
according to some characteristic. Categorical variables are subdivided into two types: nominal and ordinal. The
nominal level of measurement is the most rudimentary of all the levels. It is the least descriptive and
sometimes the least informative. Race is an example of a nominal-level variable. See Tables 2.1 and 2.2 for
examples of nominal variables (see also Data Sources 2.1 for a description of the data set used in these tables).
The variable in Table 2.1 comes from a question on the survey asking respondents whether or not they
personally know a police officer assigned to their neighborhood. This variable is nominal because respondents
said “yes” or “no” in response and so can be grouped accordingly. In Table 2.2, the variable representing the
races of stopped drivers is nominal because races are groups into which people are placed. The labels offer
descriptive information about the people or objects within each category. Data are from the Police–Public
Contact Survey (PPCS).

Nominal variable: A classification that places people or objects into different groups according to a particular characteristic that
cannot be ranked in terms of quantity.

Figure 2.1 Levels of Measurement

43

Data Sources 2.1 The Police–Public Contact Survey

The Bureau of Justice Statistics (BJS; see Data Sources 2.3) conducts the Police–Public Contact Survey (PPCS) periodically as a
supplement to the National Crime Victimization Survey (NCVS; see Data Sources 1.2). Interviews are conducted in English only.
NCVS respondents aged 16 and older are asked about recent experiences they might have had with police. Variables include
respondent demographics, the reason for respondents’ most recent contact with police, whether the police used or threatened force
against the respondents, the number of officers present at the scene, whether the police asked to search respondents’ vehicles, and so
on (BJS, 2011). This data set is used by BJS statisticians to estimate the number of police–citizen contacts that take place each year
and is used by researchers to study suspect, officer, and situational characteristics of police–public contacts. The 2011 wave of the
PPCS is the most current one available at this time.

Gender is another example of a nominal variable. Table 2.3 displays the gender breakdown among people who
reported that they had sought help from the police within the past year.

Much information is missing from the nominal variables in Tables 2.1 through 2.3. For instance, the question
about knowing a local police officer does not tell us how often respondents talk to the officers they know or
whether they provide these officers with information about the area. Similarly, the race variable provides fairly
basic information. This is why the nominal level of measurement is lowest in terms of descriptiveness and
utility. These classifications represent only differences; there is no way to arrange the categories in any
meaningful rank or order. Nobody in one racial group can be said to have “more race” or “less race” than
someone in another category—they are merely of different races. The same applies to gender. Most people
identify as being either female or male, but members of one gender group do not have more or less gender
relative to members of the other group.

One property that nominal variables possess (and share with other levels) is that the categories within any
given variable are mutually exclusive and exhaustive. They are mutually exclusive because each unit in the data
set (person, place, and so on) can fall into only one category. They are exhaustive because all units have a
category that applies to them. For example, a variable measuring survey respondents’ criminal histories that
asks them if they have been arrested “0–1 time” or “1–2 times” would not be mutually exclusive because a
respondent who has been arrested once could circle both answer options. This variable would also violate the
principle of exhaustiveness because someone who has been arrested three or more times cannot circle any
available option because neither is applicable. To correct these problems, the answer options could be changed
to, for instance, “no arrests,” “1–2 arrests,” and “3 or more arrests.” Everyone filling out the survey would have
one, and only one, answer option that accurately reflected their experiences.

Mutually exclusive: A property of all levels of measurement whereby there is no overlap between the categories within a variable.

Exhaustive: A property of all levels of measurement whereby the categories or range within a variable capture all possible values.
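The corrected arrest categories can be sketched as a simple recoding function (a minimal illustration; the function name is my own). Every possible count maps to exactly one category, satisfying mutual exclusivity, and no count is left without a category, satisfying exhaustiveness.

```python
def arrest_category(n_arrests: int) -> str:
    """Recode a raw arrest count into the corrected survey categories."""
    if n_arrests == 0:
        return "no arrests"
    elif n_arrests <= 2:
        return "1-2 arrests"
    else:
        return "3 or more arrests"

# The flawed scheme "0-1 time" / "1-2 times" would let a count of 1 fall
# into two categories (not mutually exclusive) and leave counts of 3 or
# more with no category at all (not exhaustive).
for n in range(6):
    print(n, arrest_category(n))
```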

44

Ordinal variables are one step up from nominal variables in terms of descriptiveness because they can be
ranked according to the quantity of a characteristic possessed by each person or object in a sample. University
students’ class level is an ordinal variable because freshmen, sophomores, juniors, and seniors can be rank-
ordered according to how many credits they have earned. Numbers can also be represented as ordinal
classifications when the numbers have been grouped into ranges like those in Table 2.3, where the income
categories of respondents to the General Social Survey (GSS; see Data Sources 2.2) are shown. Table 2.4
displays another variable from the PPCS. This survey question queried respondents about how often they
drive. Respondents were offered categories and selected the one that most accurately described them.

Ordinal variable: A classification that places people or objects into different groups according to a particular characteristic that can be
ranked in terms of quantity.

Ordinal variables are useful because they allow people or objects to be ranked in a meaningful order. Ordinal
variables are limited, though, by the fact that no algebraic techniques can be applied to them. This includes
ordinal variables made from numbers such as those in Table 2.3. It is impossible, for instance, to subtract

H0: µ1 = µ2

H1: µ1 ≠ µ2

where

µ1 = the mean ratio in municipal departments, and
µ2 = the mean ratio in sheriffs’ offices.

It might seem strange to write the hypotheses using the symbol for the population mean (µ) rather than the

265

sample mean (x̄), but remember that in inferential statistics, it is the population parameter that is of interest.
We use the sample means to make a determination about the population mean(s). Basically, there are two
options: There might be one population from which the samples derive (sampling error), or each sample
might represent its own population (true difference). If the null is true and there is, in fact, no relationship
between agency type and officer-to-resident ratio, then we would conclude that all the agencies come from the
same population. If, instead, the alternative is true, then there are actually two populations at play here—
municipal and county. Figure 9.1 shows this idea pictorially.

Figure 9.1 One Population or Two?

Another reason a researcher might conduct a hypothesis test is to determine if men and women differ in terms
of how punitive they feel toward people who have been convicted of crimes. The General Social Survey (GSS;
see Data Sources 2.2) asks people whether they favor or oppose capital punishment for people convicted of
murder. Among men, 31% oppose capital punishment; this number is 40% among women. A researcher
might want to know whether this difference represents a genuine “gender effect” or whether it is merely
chance variation. The null and alternative could be set up as such:

H0: Men and women oppose the death penalty equally; the proportions are equal and there is no relationship between gender and death penalty attitudes.

H1: Men are less likely to oppose the death penalty than women are; the proportions are unequal and there is a relationship between gender and death penalty attitudes.

Formally stated using symbols, the null and alternative are written as

H0: P1 = P2

H1: P1 ≠ P2

Step 2. Identify the distribution and compute the degrees of freedom.

As mentioned, the χ² statistic has its own theoretical probability distribution—it is called the χ² distribution. The χ² table of critical values is located in Appendix D. Like the t curve, the χ² distribution is a family of differently shaped curves, and each curve’s shape is determined by degrees of freedom (df).

279

At small df values, the distribution is extremely nonnormal; as the df increases, the distribution gradually normalizes somewhat but remains markedly different from a normal curve. Unlike the t curve, df for χ² are based not on sample size but, rather, on the size of the crosstabs table (i.e., the number of rows and columns). Looking at Table 10.1, you can see that there are two rows (female and male) and two columns (favor and oppose). The marginals (row and column totals) are not included in the df calculation. The formula for degrees of freedom in a χ² distribution is

df = (r – 1)(c – 1)

where

r = the number of rows, excluding the marginal, and
c = the number of columns, excluding the marginal.

χ² distribution: The sampling or probability distribution for chi-square tests. This curve is nonnormal and contains only positive values. Its shape depends on the size of the crosstabs table.

Table 10.1 has two rows and two columns. Inserting these into the formula, the result is

df = (2 – 1)(2 – 1) = (1)(1) = 1
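The df calculation can be sketched in a few lines. The critical values shown are the standard chi-square table entries at α = .05 (the same figures printed in tables such as Appendix D); the function name is my own.

```python
def chi_square_df(rows: int, cols: int) -> int:
    """Degrees of freedom for a chi-square test of independence.

    Based on the table's dimensions, not the sample size:
    df = (rows - 1) * (columns - 1), marginals excluded.
    """
    return (rows - 1) * (cols - 1)

# Standard chi-square critical values at alpha = .05, keyed by df
CRIT_05 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488}

df = chi_square_df(2, 2)    # the 2x2 gender-by-attitude table
print(df, CRIT_05[df])      # 1 3.841
```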

Step 3. Identify the critical value of the test statistic and state the decision rule.

Remember in Chapter 8 when we used the α (alpha) level to find a particular value of z or t to plug into a confidence interval formula? We talked about α being the proportion of cases in the distribution that are out in the tail beyond a particular value of z or t. You learned that the critical value is the number that cuts α off the tail of the distribution. Alpha is the probability that a certain value will fall in the tail beyond the critical value. If α = .05, for instance, then the values of the test statistic that are out in the tail beyond the critical value constitute just 5% of the entire distribution. In other words, these values have a .05 or less probability of occurring if, indeed, there is no relationship between the two variables being analyzed. These values, then, represent observed outcomes that are extremely unlikely if the null hypothesis is true.

The process of finding the critical value of χ² (symbolized χ²crit) employs the same logic as that for finding critical values of z or t. The value of χ²crit depends on two considerations: the α level and the df. Alpha must be set a priori so that the critical value can be determined before the test is run. Alpha can technically be set at any number, but .05 and .01 are the most commonly used α levels in criminal justice and criminology.

For the present example, we will choose α = .05. Using Appendix D and finding the number at the intersection of α = .05 and df = 1, it can be seen that χ²crit = 3.841. This is the value that cuts .05 (i.e., 5%) of the cases off the tail of the χ² distribution. The obtained value of χ² (symbolized χ²obt) that is calculated in Step 4 must exceed the critical value in order for the null to be rejected. Figure 10.1 illustrates this concept.

Obtained value: The value of the test statistic arrived at using the mathematical formulas specific to a particular test. The obtained value is the final product of Step 4 of a hypothesis test.

280

The decision rule is the a priori statement regarding the action you will take with respect to the null hypothesis based on the results of the statistical analysis that you are going to do in Step 4. The final product of Step 4 will be the obtained value of the test statistic. The null hypothesis will be rejected if the obtained value exceeds the critical value. If χ²obt > χ²crit, then the probability of obtaining this particular χ²obt value by chance alone is less than .05. Another way to think about it is that the probability of H0 being true is less than .05. This is unlikely indeed! This would lead us to reject the null in favor of the alternative. The decision rule for the current test is the following: If χ²obt > 3.841, H0 will be rejected.

Step 4. Compute the obtained value of the test statistic .

Now that we know the critical value, it is time to complete the analytical portion of the hypothesis test. Step 4
will culminate in the production of the obtained value, or χ ²obt. In substantive terms, χ ²obt is a measure of the

difference between observed frequencies (fo) and expected frequencies (fe). Observed frequencies are the

empirical values that appear in the crosstabs table produced from the sample-derived data set. Expected
frequencies are the frequencies that would appear if the two variables under examination were unrelated to one
another. In other words, the expected frequencies are what you would see if the null hypothesis were true. The
question is whether observed equals expected (indicating that the null is true and the variables are unrelated)
or whether there is marked discrepancy between them (indicating that the null should be rejected because
there is a relationship).

Observed frequencies: The empirical results seen in a contingency table derived from sample data. Symbolized fo.

Expected frequencies: The theoretical results that would be seen if the null were true, that is, if the two variables were, in fact,
unrelated. Symbolized fe.

Let’s talk about observed and expected frequencies a little more before moving on. Table 10.2 is a crosstabs
table for two hypothetical variables that are totally unrelated to one another. The 100 cases are spread evenly
across the four cells of the table. The result is that knowing which class a given case falls into on the IV offers
no information about which class that case is in on the DV. For instance, if you were faced with the question,
“Who is more likely to fall into category Y on the DV, someone in category A or in category B?” your answer
would be that both options are equally likely. The distribution in Table 10.2 illustrates the null hypothesis in a
chi-square test—the null predicts that the IV does not help us understand the DV.

Table 10.3 shows a distribution of hypothetical observed frequencies. There is a clear difference between this distribution and that in Table 10.2. In Table 10.3, it is clear that knowing what category a person is in on the IV does help predict their membership in a particular category on the DV. Someone in category A is more likely to be in category Y than in category X, whereas someone in category B is more likely to be in X than in Y. If you had a distribution like that in Table 10.2 and someone asked you to predict whether someone in category A was in X or Y, you would have a 50/50 shot at being right; in other words, you would simply have to guess. You would be wrong half the time (25 out of 50 guesses). On the other hand, if you were looking at Table 10.3 and someone asked you the same question, you would predict the person to be in Y. You would still be wrong occasionally, but the frequency of incorrect guesses would diminish from 50% to 20% (10 out of 50 guesses).

The chi-square analysis is, therefore, premised on a comparison of the frequencies that are observed in the
data and the frequencies that would be expected, theoretically, if there were no relationship between the two
variables. If there is minimal difference between observed and expected, then the null will be retained. If the
difference is large, the null must be rejected. We already know the observed frequencies, so the first task in
Step 4 is to calculate the expected frequencies. This must be done for each cell of the crosstabs table.

The formula for an expected frequency count is

fei = (rmi × cmi) / N        Formula 10(2)

where

fei = the expected frequency for cell i,
rmi = the row marginal of cell i,
cmi = the column marginal of cell i, and
N = the total sample size.

Since the expected frequency calculations must be done for each cell, it is a good idea to label them as a way to keep track. This is the reason why the numbers in Table 10.1 are accompanied by superscripts. The letters A through D identify the cells. Using Formula 10(2) for each cell,

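Formula 10(2) can be sketched in a few lines of code. The counts below are illustrative placeholders, not the actual values from Table 10.1:

```python
def expected_frequencies(observed):
    """Return fe for each cell: (row marginal x column marginal) / N."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    return [[r * c / n for c in col_totals] for r in row_totals]

# Hypothetical 2x2 crosstab (illustrative counts only):
obs = [[200, 100],
       [100, 200]]
fe = expected_frequencies(obs)
# Every marginal here is 300 and N = 600, so each fe = 300 * 300 / 600 = 150
```

A useful check, noted later in the chapter, is that the expected frequencies always sum to N.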

Once the expected frequencies have been calculated, χ²obt can be computed using the formula

χ²obt = Σ [(foi – fei)² / fei]        Formula 10(3)

where

foi = the observed frequency of cell i and
fei = the expected frequency of cell i.

Formula 10(3) looks intimidating, but it is actually just a sequence of arithmetic. First, each cell’s expected
value will be subtracted from its observed frequency. Second, each of these new terms will be squared and
divided by the expected frequency. Finally, these terms will be summed. Recall that the uppercase sigma (Σ) is
a symbol directing you to sum whatever is to the right of it.

The easiest way to complete the steps for Formula 10(3) is by using a table. We will rearrange the values from Table 10.1 into a format allowing for calculation of χ²obt. Table 10.4 shows this.

The obtained value of the test statistic is found by summing the final column of the table, as such:

χ²obt = 3.14 + 5.65 + 3.71 + 6.68 = 19.18

There it is! The obtained value of the test statistic is 19.18.
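Formula 10(3) reduces to a short loop over the cells. The helper below is a sketch (the function name is ours); the contribution values are the ones reported from Table 10.4:

```python
# A minimal sketch of Formula 10(3): chi-square obtained is the sum of
# (fo - fe)^2 / fe across all cells of the crosstabs table.
def chi_square_obtained(observed, expected):
    return sum((fo - fe) ** 2 / fe
               for row_o, row_e in zip(observed, expected)
               for fo, fe in zip(row_o, row_e))

# The four cell contributions from Table 10.4 sum to the obtained value:
contributions = [3.14, 5.65, 3.71, 6.68]
chi2_obt = sum(contributions)   # 19.18
```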

Step 5. Make a decision about the null and state the substantive conclusion .

It is time to decide whether to retain or reject the null. To do this, revisit the decision rule laid out in Step 3. It was stated that the null would be rejected if the obtained value of the test statistic exceeded 3.841. The obtained value turned out to be 19.18, so χ²obt > χ²crit and we therefore reject the null. The alternative hypothesis is what we take as being the true state of affairs. The technical term for this is statistical significance. A statistically significant result is one in which the obtained value exceeds the critical value and the variables are determined to be statistically related to one another.

Statistical significance: When the obtained value of a test statistic exceeds the critical value and the null is rejected.


The final stage of hypothesis testing is to interpret the results. People who conduct statistical analyses are
responsible for communicating their findings in a manner that effectively resonates with their audience,
whether it is an audience comprising scholars, practitioners, the public, or the media. It is especially important
when discussing statistical findings with lay audiences that clear explanations be provided about what a set of
quantitative results actually means in a substantive, practical sense. This makes findings accessible to a wide
array of audiences who might find criminological results interesting and useful.

In the context of the present example, rejecting the null leads to the conclusion that the IV and the DV are statistically related; that is, there is a statistically significant relationship between gender and death-penalty attitudes. Another way of saying this is that there is a statistically significant difference between men and women in their attitudes toward capital punishment. Note that the chi-square test does not tell us about the precise nature of that difference. Nothing in χ²obt conveys information about which gender is more supportive or more opposed than the other. This is not a big problem with two-class IVs. Referring back to the percentages reported earlier, we know that a higher percentage of women than men oppose capital punishment, so we can conclude that women are significantly less supportive of the death penalty (40% oppose) compared to men (31% oppose). We will see later, when we use IVs that have more than two classes, that we are not able to so easily identify the location of the difference.

Note, as well, the language used in the conclusion—it is phrased as an association and there is no cause-and-
effect assertion being advanced. This is because the relationship that seems to be present in this bivariate
analysis could actually be the result of unmeasured omitted variables that are the real driving force behind the
gender differences (recall from Chapter 2 that this is the problem of spuriousness and its counterpart, the
omitted variable bias). We have not, for instance, measured age, race, political beliefs, or religiosity, all of
which might relate to people’s beliefs about the effectiveness and morality of capital punishment. If women
differ from men systematically on any of these characteristics, then the gender–attitude relationship might be
spurious, meaning it is the product of another variable that has not been accounted for in the analysis. It is
best to keep your language toned down and to use words like relationship and association rather than cause or
effect.

Research Example 10.2 Do Victim or Offender Race Influence the Probability That a Homicide Will Be Cleared and That a Case Will Be Tried as Death-Eligible?

A substantial amount of research has been conducted examining the impact of race on the use of the death penalty. This research
shows that among murder defendants, blacks have a higher likelihood of being charged as death-eligible (i.e., the prosecutor files
notice that he or she intends to seek the death penalty). The real impact of race, however, is not on the defendants’ part but, rather,
on the victims’: People who kill whites are more likely than people who kill blacks to be prosecuted as death-eligible. Blacks accused
of killing whites are the group most likely to face a death sentence, even controlling for relevant legal factors. There are open
questions, though, about what happens prior to prosecutors’ decisions about whether or not to seek the death penalty. In particular,
it is not clear what effect police investigations and clearance rates have in shaping the composition of cases that reach prosecutors’
desks. Petersen (2017) used data from Los Angeles County, California, to examine two stages in the justice-system response to
homicide: clearance and the decision to seek death. The table shows the racial breakdown of victims and defendants across these
categories.

Source: Adapted from Table 1 in Petersen (2017).

The contingency table shows no overt discrepancies for black victims (i.e., their representation in all three categories remains at
roughly one-third), but murders involving Latino victims (which make up 50% of all murders) are slightly less likely to be cleared
(48%) and much less likely to be prosecuted as death-eligible (38%). White victims, by contrast, make up only 15% of victims but
30% of victims in death-eligible trials. Looking at defendant race, blacks constitute 41% of the people arrested for homicide and 48%
of death-eligible defendants, whites likewise are somewhat overrepresented as defendants (13% versus 19%), while Latinos are
markedly underrepresented among defendants (46% compared to 33%). Of course, these relationships are bivariate and do not
account for legal factors (i.e., aggravating and mitigating circumstances) that might increase or reduce a prosecutor’s inclination to
seek the death penalty.

To thoroughly examine the relationship between race and the probability of death-eligible charges being filed, Petersen (2017)
estimated a series of predicted probabilities, which are displayed in the figure. These probabilities show the interaction between
victim and defendant race and are adjusted to control for case characteristics. The findings mirror previous research showing that
black and Latino defendants are more likely to face death-eligible charges when victims are white. White defendants are least likely
to face death when victims are Latino and most likely when they are black, but these differences were not statistically significant.


Source: Adapted from Figure 1 in Petersen (2017).

For the second example, let’s use the GSS again and this time test for a relationship between education level
and death-penalty attitudes. To make it interesting, we will split the data by gender and analyze males and
females in two separate tests. We will start with males (see Table 10.5). Using an alpha level of .01, we will
test for a relationship between education level (the IV) and death-penalty attitudes (the DV). All five steps
will be used.

Step 1. State the null (H0) and alternative (H1) hypotheses .

H0 : χ2 = 0

H1 : χ2 > 0

Step 2. Identify the distribution and compute the degrees of freedom .

The distribution is χ2 and the df = (r – 1)(c – 1) = (3 – 1)(2 – 1) = (2)(1) = 2.

Step 3. Identify the critical value of the test statistic and state the decision rule .

With ⍺ = .01 and df = 2, χ²crit = 9.210. The decision rule is that if χ²obt > 9.210, H0 will be rejected.


Step 4. Compute the obtained value of the test statistic .

First, we need to calculate the expected frequencies using Formula 10(2). The frequencies for the first three cells (labeled A, B, and C, left to right) are as follows:

Next, the computational table is used to calculate χ²obt (Table 10.6). As you can see in the summation cell in the last column, the obtained value of the test statistic is 19.74.

Before moving to Step 5, take note of a couple of points about the chi-square calculation table. Both points serve as checks on your arithmetic. First, the expected-frequency (fe) column always sums to the sample size. This is because we have not altered the number of cases in the sample: We have merely redistributed them throughout the table. After calculating the expected frequencies, sum them to make sure they add up to N. Second, the column created by subtracting the expected frequencies from the observed frequencies will always sum to zero (or within rounding error of it). The reason for this is, again, that no cases have been added to or removed from the sample. There are some cells where the observed frequency is less than expected and others where fo is greater than fe. In the end, these variations cancel each other out. Always sum both of these columns as you progress through a chi-square calculation.


Step 5. Make a decision about the null and state the substantive conclusion .

The decision rule stated that the null would be rejected if the obtained value exceeded 9.210. Since χ²obt ended up being greater than the critical value (i.e., 19.74 > 9.210), the null is rejected. There is a statistically
significant relationship between education and death-penalty attitudes among male respondents. Calculating
row percentages from the data in Table 10.5 shows that approximately 27% of men with high school diplomas
or less oppose the death penalty, roughly 22% with some college (no degree) are in opposition, and 40% with
a bachelor’s degree or higher do not support it. It seems that men with college educations that include at least
a bachelor’s degree stand out from the other two educational groups in their level of opposition to capital
punishment. We are not able to say with certainty, however, whether all three groups are statistically
significantly different from the others or whether only one of them stands apart. You can roughly estimate
differences using row percentages, but you have to be cautious in your interpretation. The chi-square test tells
you only that at least one group is statistically significantly different from at least one other group.

Let’s repeat the same analysis for female respondents. Again, we will set alpha at .01 and proceed through the
five steps. The data are in Table 10.7.

Step 1. State the null (H0) and alternative (H1) hypotheses .

H0: χ2 = 0

H1: χ2 > 0

Step 2. Identify the distribution and compute the degrees of freedom .

The distribution is χ2 and df = (3 – 1)(2 – 1) = 2.

Step 3. Identify the critical value of the test statistic and state the decision rule .

With ⍺ = .01 and df = 2, χ²crit = 9.210. The decision rule is that if χ²obt > 9.210, H0 will be rejected.


Learning Check 10.1

In the third example, the calculations are not shown. Check your mastery of the computation of expected frequencies by doing the
calculations yourself and making sure you arrive at the same answers shown in Table 10.8.

Step 4. Compute the obtained value of the test statistic .

Step 5. Make a decision about the null and state the substantive conclusion .

The decision rule stated that the null would be rejected if the obtained value exceeded 9.210. Since χ²obt = 11.23, the null is rejected. There is a statistically significant relationship between education and death-penalty
attitudes among female respondents; it appears that women’s likelihood of favoring or opposing capital
punishment changes with their education level. Another way to phrase this is that there are significant
differences between women of varying levels of education. As we did with male respondents, we can use row
percentages to gain a sense of the pattern. Opposition to the death penalty is approximately 40%, 26%, and
44% among women with high school diploma or less, some college, or a bachelor’s degree or higher,
respectively. This is similar to the pattern of opposition seen among men in that those with some college were
the most supportive of capital punishment and those with a college degree were the least supportive, but
among women the group with a high school diploma or less were nearly as likely as those with college degrees
to oppose this penalty.


Learning Check 10.2

One criticism of the chi-square test for independence is that this statistic is sensitive to sample size. The problem lies in the way that χ²obt is calculated. Sample size can cause a test statistic to be significant or not significant, apart from the actual distribution of observed values. For instance, a crosstabs table with a fairly even distribution of scores might yield a statistically significant χ²obt if N is large. Similarly, a distribution that looks decidedly uneven (i.e., where there is an apparent relationship between the IV and the DV) can produce a nonsignificant χ²obt if N is small. To see this for yourself, recalculate χ²obt using the death-penalty and gender data but with a sample size of 88 instead of 2,379. Make a decision about the null hypothesis, recalling that χ²crit = 3.841.

Are you surprised by the results? This demonstrates the importance of being cautious when you interpret statistical significance. Do not leap to hasty conclusions; be aware that there are factors (such as sample size) that can impact the results of a statistical test irrespective of the relationship between the IVs and the DVs.
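The recalculation suggested in this Learning Check can be previewed arithmetically. Holding the cell proportions fixed, χ²obt scales linearly with N, so shrinking the sample from 2,379 to 88 shrinks the statistic proportionally (a sketch, assuming the same distribution of cases):

```python
# chi-square scales linearly with N when the cell proportions stay fixed:
chi2_full = 19.18            # obtained value with N = 2,379
n_full, n_small = 2379, 88
chi2_small = chi2_full * n_small / n_full   # roughly 0.71
chi2_crit = 3.841
reject_null = chi2_small > chi2_crit        # False: the null is retained
```

The same observed pattern that was highly significant with N = 2,379 falls far short of the critical value with N = 88.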


Measures of Association

The chi-square test alerts you when there is a statistically significant relationship between two variables, but it
is silent as to the strength or magnitude of that relationship. We know from the previous two examples, for
instance, that gender and education are related to attitudes toward capital punishment, but we do not know
the magnitudes of these associations: They could be strong, moderate, or weak. This question is an important
one because a trivial relationship—even if statistically significant in a technical sense—is not of much
substantive or practical importance. Robust relationships are more meaningful. To illustrate this, suppose an
evaluation of a gang-prevention program for youth was declared a success after researchers found a statistically
significant difference in gang membership rates among youth who did and did not participate in the program.
Digging deeper, however, you learn that 9% of the youth who went through the program ended up joining
gangs, compared to 12% of those who did not participate. While any program that keeps kids out of gangs is
laudable, a reduction of three percentage points can hardly be considered a resounding success. We would
probably want to continue searching for a more effective way to prevent gang involvement. Measures of
association offer insight into the magnitude of the differences between groups so that we can figure out how
strong the overlap is.

Measures of association: Procedures for determining the strength or magnitude of a relationship after a chi-square test has revealed a
statistically significant association between two variables.

There are several measures of association, and this chapter covers four of them. The level of measurement of
the IV and the DV dictate which measures are appropriate for a given analysis. Measures of association are
computed only when the null hypothesis has been rejected—if the null is not rejected and you conclude that
there is no relationship between the IV and the DV, then it makes no sense to go on and try to interpret an
association you just said does not exist. The following discussion will introduce four tests, and the next section

will show you how to use SPSS to compute χ2
obt and accompanying measures of association.

Cramer’s V can be used when both of the variables are nominal or when one is ordinal and the other is
nominal. It is symmetric, meaning that it always takes on the same value regardless of which variable is
posited as the independent and which the dependent. This statistic ranges from 0.00 to 1.00, with higher
values indicative of stronger relationships and values closer to 0.00 suggestive of weaker associations. Cramer’s
V is computed as

V = √(χ²obt / (N × m))

where

χ²obt = the obtained value of the test statistic,
N = the total sample size, and
m = the smaller of either (r – 1) or (c – 1).


Cramer’s V : A symmetric measure of association for χ2 when the variables are nominal or one is ordinal and the other is nominal. V
ranges from 0.00 to 1.00 and indicates the strength of the relationship. Higher values represent stronger relationships. Identical to
phi in 2 × 2 tables.

In the first example we saw in this chapter, where we found a statistically significant relationship between gender and death-penalty attitudes, χ²obt = 19.18, N = 2,379, and there were two rows and two columns, so m = 2 – 1 = 1. Cramer’s V is thus

V = √(19.18 / (2,379 × 1)) = √.0081 ≈ .09

This value of V suggests a weak relationship. This demonstrates how statistical significance alone is not
indicative of genuine importance or meaning—a relationship might be significant in a technical sense but still
insignificant in practical terms. This is due in no small part to the chi-square test’s sensitivity to sample
size, as discussed earlier. Think back to the percentages we calculated for this table. Approximately 60% of
women and 69% of men favored the death penalty. This is a difference, to be sure, but it is not striking. If any
given respondent were randomly selected out of this sample, there would be a roughly two-thirds likelihood
that the person would support capital punishment, irrespective of her or his gender. It is wise, then, to be
cautious in interpreting statistically significant results—statistical significance does not always translate into
practical significance.
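The hand calculation of V can be mirrored in a few lines (a sketch; the function name is ours, not the text's):

```python
import math

def cramers_v(chi2_obt, n, rows, cols):
    """Cramer's V: sqrt(chi-square / (N * m)), m = min(r - 1, c - 1)."""
    m = min(rows - 1, cols - 1)
    return math.sqrt(chi2_obt / (n * m))

# Gender and death-penalty example: 2 x 2 table, so m = 1
v = cramers_v(19.18, 2379, 2, 2)   # roughly .09, a weak relationship
```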

When both variables under examination are nominal, lambda is an option. Like Cramer’s V , lambda ranges
from 0.00 to 1.00. Unlike Cramer’s V , lambda is asymmetric, meaning that it requires that one of the
variables be clearly identified as the independent and the other as the dependent. This is because lambda is a
proportionate reduction in error measure.

Lambda: An asymmetric measure of association for χ2 when the variables are nominal. Lambda ranges from 0.00 to 1.00 and is a
proportionate reduction in error measure.

Proportionate reduction in error (PRE) refers to the extent to which knowing a person’s or object’s placement
on an IV helps predict that person’s or object’s classification on the dependent measure. Referring back to
Table 10.1, if you were trying to predict a given individual’s attitude toward capital punishment and the only
piece of information you had was the frequency distribution of this DV (i.e., you knew that 1,530 people in
the sample support capital punishment and 849 oppose it), then your best bet would be to guess the modal
category (mode = support) because this would produce the fewest prediction errors. There would, though, be a
substantial number of these errors—849, to be exact!

Now, suppose that you know a given person’s gender or education level, both of which we found to be
significantly related to capital punishment attitudes. The next logical inquiry is the extent to which this
knowledge improves our accuracy when predicting whether that person opposes or favors the death penalty. In other words, we know that we would make 849 errors if we simply guessed the mode for each person in the sample, and now we want to know how many fewer mistakes we would make if we knew each person’s gender


or education level. This is the idea behind PRE measures like lambda. Let’s do an example using the
relationship between education and death penalty attitudes among women.

Lambda is symbolized as λ (the Greek lowercase letter lambda) and is calculated as

λ = (E1 – E2) / E1

where

E1 = Ntotal – NDV mode and
E2 = Σ(Ncategory – Ncategory mode), summed across the categories of the IV.

This equation and its different components look strange, but, basically, E1 represents the number of prediction errors made when the IV is ignored (i.e., predictions based entirely on the mode of the DV), and E2 reflects the number of errors made when the IV is taken into account. Using the education and death-penalty data from Table 10.7, we can first calculate E1 and E2:

E1 = 1,289 – 778 = 511,

E2 = (795 – 479) + (115 – 85) + (379 – 214) = 511

and lambda is

λ = (511 – 511) / 511 = 0 / 511 = 0
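The E1/E2 logic can be sketched with the Table 10.7 counts. The "oppose" counts below are recovered from the E2 terms shown above (e.g., 795 – 479 = 316), since "favor" is the mode of every row:

```python
# Table 10.7 counts: rows = education level, columns = (favor, oppose)
table = [[479, 316],   # high school or less
         [85, 30],     # some college
         [214, 165]]   # bachelor's or higher

def lambda_pre(table):
    """Proportionate reduction in error: (E1 - E2) / E1."""
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(col_totals)
    e1 = n - max(col_totals)                        # errors ignoring the IV
    e2 = sum(sum(row) - max(row) for row in table)  # errors using the IV
    return (e1 - e2) / e1

lam = lambda_pre(table)   # (511 - 511) / 511 = 0.0
```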

Lambda is most easily interpreted by transforming it to a percentage. A lambda of zero shows us that knowing
women’s education levels does not reduce prediction errors. This makes sense, as you can see in Table 10.7
that “favor” is the mode across all three IV categories. This lack of variation accounts for the overall zero
impact of the IV on the DV. Again, we see a statistically significant but substantively trivial association
between variables. This makes sense, because we would not expect a single personal characteristic (like
education) to have an enormous influence on someone’s attitudes. As noted previously, we have not measured
women’s religiosity, age, social and political views, and other demographic and background factors that are likely to shape these attitudes as well.

There is a third measure for nominal data that bears brief mention, and that is phi. Phi can only be used on 2
× 2 tables (i.e., two rows and two columns) with nominal variables. It is calculated and interpreted just like
Cramer’s V with the exception that phi does not account for the number of rows or columns in the crosstabs
table: since it can only be applied to 2 × 2 tables, m will always be equal to 1.00. For 2 × 2 tables, Cramer’s V is
identical to phi, but since Cramer’s V can be used for tables of any size, it is more useful than phi is.


Phi: A symmetric measure of association for χ2 with nominal variables and a 2 × 2 table. Identical to Cramer’s V.

When both variables are ordinal or when one is ordinal and the other is dichotomous (i.e., has two classes),
Goodman and Kruskal’s gamma is an option. Gamma is a PRE measure but is symmetric—unlike lambda—
and ranges from –1.00 to +1.00, with zero meaning no relationship, –1.00 indicating a perfect negative
relationship (as one variable increases, the other decreases), and 1.00 representing a perfect positive
relationship (as one increases, so does the other). Generally speaking, gamma values between 0 and ±.19 are
considered weak, between ±.20 and ±.39 moderate, ±.40 to ±.59 strong, and ±.60 to ±1.00 very strong.

Goodman and Kruskal’s gamma: A symmetric measure of association used when both variables are ordinal or one is ordinal and the
other is dichotomous. Ranges from –1.00 to +1.00.

Two other measures available when both variables are ordinal are Kendall’s taub and Kendall’s tauc. Both are

symmetric. Taub is used when the crosstabs table has an equal number of rows and columns, and tauc is used

when they are unequal. Both tau statistics range from –1.00 to +1.00. They measure the extent to which the
order of the observations in the IV match the order in the DV; in other words, as cases increase in value on
the IV, what happens to their scores on the DV? If their scores on the dependent measure decrease, tau will
be negative; if they increase, tau will be positive; and if they do not display a clear pattern (i.e., the two
variables have very little dependency), tau will be close to zero. Similar to the tau measures is Somers’ d. This measure of association is asymmetric and used when both variables are ordinal. Its range and interpretation mirror those of tau. The calculations of gamma, tau, and d are complicated, so we will refrain from doing
them by hand and will instead use SPSS to generate these values.

Kendall’s taub: A symmetric measure of association for two ordinal variables when the number of rows and columns in the crosstabs

table are equal. Ranges from –1.00 to +1.00.

Kendall’s tauc: A symmetric measure of association for two ordinal variables when the number of rows and columns in the crosstabs

table are unequal. Ranges from –1.00 to +1.00.

Somers’ d : An asymmetric measure of association for two ordinal variables. Ranges from –1.00 to +1.00.

None of the measures of association discussed here is perfect; each has limitations and weaknesses. The best
strategy is to examine two or more measures for each analysis and use them to gain a comprehensive picture of
the strength of the association. There will likely be variation among them, but the differences should not be
wild, and all measures should lean in a particular direction. If they are all weak or are all strong, then you can
safely arrive at a conclusion about the level of dependency between the two variables.


SPSS

The SPSS program can be used to generate χ²obt, determine statistical significance, and produce measures of

association. The chi-square analysis is found via the sequence Analyze → Descriptive Statistics → Crosstabs. Let
us first consider the gender and capital punishment example from earlier in the chapter. Figure 10.2 shows the
dialog boxes involved in running this analysis in SPSS. Note that you must check the box labeled Chi-square
in order to get a chi-square analysis; if you do not check this box, SPSS will merely give you a crosstabs table.
This box is opened by clicking Statistics in the crosstabs window. By default, SPSS provides only observed
frequencies in the crosstabs table. If you want expected frequencies or percentages (row or column), you can
go into Cells and request them. Since both of these variables are nominal, lambda and Cramer’s V are the
appropriate measures of association. Figure 10.3 shows the output for the chi-square test, and Figure 10.4
displays the measures of association.

The obtained value of the χ² statistic is located on the line labeled Pearson Chi-Square. You can see in Figure 10.3 that χ²obt = 19.182, which is identical to the value we obtained by hand. The output also tells you

whether or not the null should be rejected, but it does so in a way that we have not seen before. The SPSS program gives you what is called a p value. The p value tells you the exact probability of the obtained value of the test statistic: The smaller p is, the more unlikely the χ²obt is if the null is true and, therefore, the smaller the probability that the null is, indeed, correct. The p value in SPSS χ² output is the number located at the intersection of the Asymp. Sig. (2-sided) column and the Pearson Chi-Square row. Here, p = .000. What you do is compare p to ⍺. If p is less than ⍺, it means that the obtained value of the test statistic exceeded the critical value, and the null is rejected; if p is greater than ⍺, the null is retained. Since in this problem ⍺ was set at .05, the null hypothesis is rejected because .000 < .05. There is a statistically significant relationship between gender and death-penalty attitudes. Of course, as we have seen, rejection of the null hypothesis is only part of the story because the χ² statistic does not offer information about the magnitude or strength of the relationship between the variables. For this, we turn to measures of association.

p value: In SPSS output, the probability associated with the obtained value of the test statistic. When p < ⍺, the null hypothesis is rejected.

Figure 10.2 Running a Chi-Square Test and Measures of Association in SPSS

Figure 10.3 Chi-Square Output

Figure 10.4 Measures of Association

Judging by both Cramer’s V and lambda, this relationship is very weak. We actually already knew this because we calculated V by hand and arrived at .09, which matches the value SPSS produces. Lambda is zero, which means that knowing people’s gender does not reduce the number of errors made in predicting their death-penalty attitudes. As noted earlier, using multiple tests of association helps provide confidence in the conclusion that the association between these two variables, while statistically significant, is tenuous in a substantive sense. In other words, knowing someone’s gender only marginally improves our ability to predict that person’s attitudes about capital punishment.

Chapter Summary

This chapter introduced the chi-square test of independence, which is the hypothesis-testing procedure appropriate when both of the variables under examination are categorical. The key elements of the χ² test are observed frequencies and expected frequencies. Observed frequencies are the empirical results seen in the sample, and expected frequencies are those that would appear if the null hypothesis were true and the two variables unrelated.
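Although the text reads p from SPSS output, for df = 1 the p value can be approximated with the Python standard library, again using the fact that a chi-square variable with one degree of freedom is a squared standard normal (a supplemental sketch, not the text's SPSS procedure):

```python
from math import sqrt
from statistics import NormalDist

# With df = 1, P(chi-square >= obt) = 2 * P(Z >= sqrt(obt)) for a
# standard normal Z; SPSS reports this probability as "Asymp. Sig."
chi2_obt = 19.182
p = 2 * (1 - NormalDist().cdf(sqrt(chi2_obt)))  # tiny; SPSS shows .000
alpha = 0.05
reject_null = p < alpha   # True: the null is rejected
```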
The obtained value of chi-square is a measure of the difference between observed and expected frequencies, and comparing χ²obt to χ²crit for a set ⍺ level allows for a determination of whether the null hypothesis should be retained or rejected. When the null is retained (i.e., when χ²obt < χ²crit), the substantive conclusion is that the two variables are not related. When the null is rejected (when χ²obt > χ²crit), the conclusion is that there is a relationship between them. Statistical significance, though, is

only a necessary and not a sufficient condition for practical significance. The chi-square statistic does not offer information about the
strength of a relationship and how substantively meaningful this association is.

For this, when the null has been rejected, we turn to measures of association. Cramer’s V, lambda, Goodman and Kruskal’s
gamma, Kendall’s tau-a and tau-b, and Somers’ d are each appropriate in a given situation depending on the variables’ levels of
measurement and the size of the crosstabs table. SPSS can be used to obtain chi-square tests, p values for determining statistical
significance, and measures of association. When p < ⍺, the null is rejected, and when p > ⍺, it is retained. You should always generate

measures of association when you run χ² tests yourself, and you should always expect them from other people who run these analyses
and present you with the results. Statistical significance is important, but the magnitude of the relationship tells you how meaningful
the association is in practical terms.
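Although this book relies on SPSS for computation, the chapter’s full workflow (observed versus expected frequencies, comparing χ²obt to χ²crit, then gauging strength with a measure of association) can be sketched in a few lines of Python. The 2 × 2 table below is hypothetical, invented purely for illustration, and 3.841 is the standard χ² critical value for df = 1 at ⍺ = .05.

```python
import math

# Hypothetical 2x2 crosstab: rows = gender, columns = death-penalty attitude
# (made-up counts for illustration only)
observed = [[30, 20],   # male:   favor, oppose
            [15, 35]]   # female: favor, oppose

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected frequency for each cell: (row total * column total) / N
expected = [[rt * ct / n for ct in col_totals] for rt in row_totals]

# Chi-square obtained: sum of (O - E)^2 / E over all cells
chi2_obt = sum((o - e) ** 2 / e
               for o_row, e_row in zip(observed, expected)
               for o, e in zip(o_row, e_row))

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (rows - 1)(cols - 1)
chi2_crit = 3.841                                   # df = 1, alpha = .05

# Cramer's V: strength of association, from 0 (none) to 1 (perfect)
cramers_v = math.sqrt(chi2_obt / (n * min(len(observed) - 1,
                                          len(observed[0]) - 1)))

print(f"chi2_obt = {chi2_obt:.2f}, df = {df}, reject H0: {chi2_obt > chi2_crit}")
print(f"Cramer's V = {cramers_v:.3f}")
```

With these invented counts, χ²obt ≈ 9.09 exceeds 3.841, so the null would be rejected, and V ≈ .30 would then tell us the association is of modest strength: exactly the two-part conclusion the chapter describes.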

Thinking Critically

1. Suppose you read a report claiming that children raised in families with low socioeconomic status are less likely to go to
college compared to children raised in families with middle and upper income levels. The news story cites college
participation rates of 20%, 35%, and 60% among low, middle, and upper socioeconomic statuses, respectively, and explains
these differences as meaning that children raised in poor families are less intelligent or less ambitious than those from better-
off families. Do you trust this conclusion? Why or why not? If you do not, what more do you need to know about these data
before you can make a decision about the findings, and what they mean for the relationship between family income and
children’s college attendance?

2. Two researchers are arguing about statistical findings. One of them believes that any statistically significant result is
important, irrespective of the magnitude of the association between the IVs and the DVs. The other one contends that
statistical significance is meaningless if the association is weak. Who is correct? Explain your answer. Offer a hypothetical or
real-world example to illustrate your point.

Review Problems

1. A researcher wants to test for a relationship between the number of citizen complaints that a police officer receives and
whether that officer commits serious misconduct. He gathers a sample of officers and records the number of complaints that
have been lodged against them (0–2, 3–5, 6+) and whether they have ever been written up for misconduct (yes or no). Can
he use a chi-square to test for a relationship between these two variables? Why or why not?

2. A researcher wishes to test for a relationship between age and criminal offending. She gathers a sample and for each person,
she collects his or her age (in years) and whether that person has ever committed a crime (yes or no). Can she use a chi-
square to test for a relationship between these two variables? Why or why not?

3. A researcher is interested in finding out whether people who drive vehicles that are in bad condition are more likely than
those driving better cars to get pulled over by police. She collects a sample and codes each person’s vehicle’s condition (good,
fair, poor) and the number of times that person has been pulled over (measured by respondents writing in the correct
number). Can she use a chi-square to test for a relationship between these two variables? Why or why not?

4. A researcher is studying the effectiveness of an in-prison treatment program in reducing post-release recidivism. He gathers a
sample of recently released prisoners and records, for each person, whether he or she participated in a treatment program
while incarcerated (yes or no) and whether that person committed a new crime within 6 months of release (yes or no). Can
he use a chi-square to test for a relationship between these two variables? Why or why not?

5. Is a criminal defendant’s gender related to the type of sentence she or he receives? A researcher collects data on defendants’
gender (male or female) and sentence (jail, probation, fine).

1. Which of these variables is the IV, and which is the DV?
2. Identify each variable’s level of measurement.
3. How many rows and columns would the crosstabs table have?

6. Is the value of the goods stolen during a burglary related to the likelihood that the offender will be arrested? A researcher
collects data on the value of stolen goods ($299 or less, $300–$599, $600 and more) and on whether the police arrested
someone for the offense (yes or no).

1. Which of these variables is the IV, and which is the DV?
2. Identify each variable’s level of measurement.
3. How many rows and columns would the crosstabs table have?

7. Is the crime for which a person is convicted related to the length of the prison sentence she or he receives? A researcher gathers
data on crime type (violent, property, drug) and sentence length (18 months or less, 19–30 months, 31 or more months).

1. Which of these variables is the IV, and which is the DV?
2. Identify each variable’s level of measurement.
3. How many rows and columns would the crosstabs table have?

8. Is a victim’s gender related to whether or not the offender will be convicted for the crime? A researcher collects data on
victim gender (male or female) and whether the offender was convicted (yes or no).

1. Which of these variables is the IV, and which is the DV?
2. Identify each variable’s level of measurement.
3. How many rows and columns would the crosstabs table have?

9. It might be expected that jails that offer alcohol treatment programs to inmates also offer psychiatric counseling services,
since alcohol abuse is frequently a symptom of an underlying psychological problem. The following table displays data from a
random sample from the Census of Jails (COJ). With an alpha level of .01, conduct a five-step chi-square hypothesis test to
determine whether the two variables are independent.

10. Is there an association between the circumstances surrounding a violent altercation that results in a shooting and the type of
firearm used? The Firearm Injury Surveillance Study (FISS) records whether the shooting arose out of a fight and the type of
firearm used to cause the injury (here, handguns vs. rifles and shotguns). With an alpha of .01, conduct a five-step hypothesis
test to determine if the variables are independent.


11. Continuing with an examination of gunshots resulting from fights, we can analyze FISS data to determine whether there is a
relationship between victims’ genders and whether their injuries were the result of fights. With an alpha of .05, conduct a
five-step hypothesis test to determine if the variables are independent.

12. In the chapter, we saw that there was a statistically significant difference between men and women in terms of their attitudes
about capital punishment. We can extend that line of inquiry and find out whether there is a gender difference in general
attitudes about crime and punishment. The GSS asks respondents whether they think courts are too harsh, about right, or
not harsh enough in dealing with criminal offenders. The following table contains the data. With an alpha level of .05,
conduct a five-step chi-square hypothesis test to determine whether the two variables are independent.

13. Do men and women differ on their attitudes toward drug laws? The GSS asks respondents to report whether they think
marijuana should be legalized. The following table shows the frequencies, by gender, among black respondents. With an
alpha level of .05, conduct a five-step chi-square hypothesis test to determine whether the two variables are independent.

14. The following table shows the support for marijuana legalization, by race, among male respondents. With an alpha level of
.05, conduct a five-step chi-square hypothesis test to determine whether the two variables are independent.

15. There is some concern that people of lower-income statuses are more likely to come in contact with the police as compared
to higher-income individuals. The following table contains Police–Public Contact Survey (PPCS) data on income and police
contacts among respondents who were 21 years of age or younger. With an alpha of .01, conduct a five-step hypothesis test
to determine if the variables are independent.

16. One criticism of racial profiling studies is that people’s driving frequency is often unaccounted for. This is a problem because,
all else being equal, people who spend more time on the road are more likely to get pulled over eventually. The following
table contains PPCS data narrowed down to black male respondents. The variables measure driving frequency and whether
these respondents had been stopped by police for traffic offenses within the past 12 months. With an alpha of .01, conduct a
five-step hypothesis test to determine if the variables are independent.

17. The companion website (www.sagepub.com/gau) contains the SPSS data file GSS for Chapter 10.sav. This is a portion of the
2014 GSS. Two of the variables in this file are race and courts, which capture respondents’ race and their attitudes about
courts’ harshness, respectively. Run a chi-square analysis to determine if people’s attitudes (the DV) vary by race (the IV).
Then do the following.

1. Identify the obtained value of the chi-square statistic.
2. Make a decision about whether you would reject the null hypothesis of independence at an alpha level of .05 and

explain how you arrived at that decision.
3. State the conclusion that you draw from the results of each of these analyses in terms of whether there is a

relationship between the two variables.
4. If you rejected the null hypothesis, interpret row percentages and applicable measures of association. How strong is the

relationship? Would you say that this is a substantively meaningful relationship?
18. Using GSS for Chapter 10.sav (www.sagepub.com/gau), run a chi-square analysis to determine whether there is a relationship

between the candidate people voted for in the 2012 presidential election (Barack Obama or Mitt Romney) and their opinions
about how well elected officials are doing at controlling crime rates. Then do the following:

1. Identify the obtained value of the chi-square statistic.
2. Make a decision about whether you would reject the null hypothesis of independence at an alpha level of .01 and

explain how you arrived at that decision.
3. State the conclusion that you draw from the results of each of these analyses in terms of whether there is a

relationship between the two variables.
4. If you rejected the null hypothesis, interpret row percentages and applicable measures of association. How strong is the

relationship? Would you say that this is a substantively meaningful relationship?
19. A consistent finding in research on police–community relations is that there are racial differences in attitudes toward police.

Although all racial groups express positive views of police overall, the level of support is highest for whites and tends to
dwindle among persons of color. The companion website (www.sagepub.com/gau) contains variables from the PPCS (PPCS
for Chapter 10.sav). The sample has been narrowed to males who were stopped by the police while driving a car and were
issued a traffic ticket. There are three variables in this data set: race, income, and legitimacy. The legitimacy variable measures
whether respondents believed that the officer who pulled them over had a credible reason for doing so. Use SPSS to run a
chi-square analysis to determine whether legitimacy judgments (the DV) differ by race (the IV). Based on the variables’ level
of measurement, select appropriate measures of association. Then do the following:

1. Identify the obtained value of the chi-square statistic.
2. Make a decision about whether you would reject the null hypothesis of independence at an alpha level of .01 and

explain how you arrived at that decision.
3. State the conclusion that you draw from the results of each of these analyses in terms of whether there is a

relationship between the two variables.
4. If you rejected the null hypothesis, interpret row percentages and applicable measures of association. How strong is the

relationship? Would you say that this is a substantively meaningful relationship?
20. Using the PPCS for Chapter 10.sav file again (www.sagepub.com/gau), run a chi-square test to determine whether

respondents’ perceptions of stop legitimacy (the DV) vary across income levels (the IV). Based on the variables’ level of
measurement, select appropriate measures of association. Then do the following:

1. Identify the obtained value of the chi-square statistic.
2. Make a decision about whether you would reject the null hypothesis of independence at an alpha level of .01 and

explain how you arrived at that decision.
3. State the conclusion that you draw from the results of each of these analyses in terms of whether there is a

relationship between the two variables.
4. If you rejected the null hypothesis, interpret row percentages and applicable measures of association. How strong is the

relationship? Would you say that this is a substantively meaningful relationship?


Key Terms

Chi-square test of independence 219
Nonparametric statistics 219
Parametric statistics 220
Statistical independence 221
Statistical dependence 221
χ² distribution 222
Obtained value 223
Observed frequencies 224
Expected frequencies 224
Statistical significance 227
Measures of association 234
Cramer’s V 235
Lambda 235
Phi 237
Goodman and Kruskal’s gamma 237
Kendall’s tau-b 237
Kendall’s tau-c 237

Somers’ d 237
p value 238

Glossary of Symbols and Abbreviations Introduced in This Chapter


Chapter 11 Hypothesis Testing With Two Population Means or
Proportions


Learning Objectives
Identify situations in which, based on the levels of measurement of the independent and dependent variables, t tests are
appropriate.
Explain the logic behind two-population tests for differences between means and proportions.
Explain what the null hypothesis predicts and construct an alternative or research hypothesis appropriate to a particular research
question.
For tests of means, identify the correct type of test (dependent or independent samples).
For tests of means with independent samples, identify the correct variance formula (pooled or separate).
Select the correct equations for a given test type, and use them to conduct five-step hypothesis tests.
In SPSS, identify the correct type of test, run that analysis, and interpret the output.

There are many situations in which criminal justice and criminology researchers work with categorical
independent variables (IVs) and continuous dependent variables (DVs). They might want to know, for
instance, whether male and female police officers differ in the number of arrests they make, whether criminal
offenders who have children are given shorter jail sentences compared to those who do not, or whether
prisoners who successfully complete a psychological rehabilitation program have a significant reduction in
antisocial thinking. Research Example 11.1 illustrates another instance of a categorical IV and a continuous
DV.

In Research Example 11.1, Wright, Pratt, and DeLisi (2008) had two groups (multiple homicide offenders
[MHOs] and single homicide offenders [SHOs]), each with its own mean and standard deviation; the goal
was to find out whether the groups’ means differ significantly from one another. A significant difference
would indicate that MHOs and SHOs do indeed differ in the variety of crimes they commit, whereas rough
equivalency in the means would imply that these two types of homicide offenders are equally diverse in
offending. What should the researchers do to find out whether MHOs and SHOs have significantly different
diversity indices?

The answer is that they should conduct a two-population test for differences between means, or what is
commonly referred to as a t test. As you probably figured out, these tests rely on the t distribution. We will
also cover two-population tests for differences between proportions, which are conceptually similar to t tests
but employ the z distribution.

t test: The test used with a two-class, categorical independent variable and a continuous dependent variable.

Tests for differences between two means or two proportions are appropriate when the IV is categorical with
two classes or groups and the DV is expressed as one mean or proportion per group. Examples of two-class,
categorical IVs include gender (male or female) and political orientation (liberal or conservative). Examples of
DVs appropriate for two-population tests are the mean number of times people in a sample report that they
drove while intoxicated, or the proportion of people in the sample who have been arrested for driving while
under the influence of alcohol. The diversity index that Wright et al. (2008) used is continuous, which is why
the researchers computed a mean and a standard deviation, and the reason that a t test is the appropriate
analytic strategy. In the review problems at the end of the chapter, you will conduct a t test to find out
whether MHOs and SHOs differ significantly in offending diversity.

Research Example 11.1 Do Multiple Homicide Offenders Specialize in Killing?

Serial killers and mass murderers capture the public’s curiosity and imagination. Who can resist some voyeuristic gawking at a killer
who periodically snuffs out innocent victims while outwardly appearing to be a regular guy or at the tormented soul whose troubled
life ultimately explodes in an episode of wanton slaughter? Popular portrayals of multiple homicide offenders (MHOs) lend the
impression that these killers are fundamentally different from more ordinary criminals and from single homicide offenders (SHOs)
in that they only commit homicide and lead otherwise crime-free lives. But is this popular conception true?

Wright, Pratt, and DeLisi (2008) decided to find out. They constructed an index measuring diversity of offending within a sample
of homicide offenders. This index captured the extent to which homicide offenders committed only homicide and no other crimes
versus the extent to which they engaged in various types of illegal acts. The researchers divided the sample into MHOs and SHOs
and calculated each group’s mean and standard deviation on the diversity index. They found the statistics located in the table.

Source: Adapted from Table 1 in Wright et al. (2008).


Two-Population Tests for Differences Between Means: t Tests

There are many situations in which people working in criminal justice and criminology would want to test for
differences between two means: Someone might be interested in finding out whether offenders who are
sentenced to prison receive significantly different mean sentence lengths depending on whether they are male
or female. A municipal police department might implement an innovative new policing strategy and want to
know whether the program significantly reduced mean crime rates in the city. These types of studies require t
tests.

In Chapter 7, you learned the difference between sample, sampling, and population distributions. Recall that
sampling distributions are theoretical curves created when multiple or infinite samples are drawn from a single
population and a statistic is computed and plotted for each sample. Over time, with repeated drawing,
calculating, plotting, throwing back, and drawing again, the distribution of sample statistics builds up and, if
the size of each sample is large (meaning N ≥ 100), the statistics form a normal curve. There are also sampling
distributions for differences between means. See Figure 11.1. You have seen in prior chapters how sampling
distributions’ midpoint is the population mean (symbolized μ). Sampling distributions for differences
between means are similar in that they center on the true population difference, μ1 − μ2. A sampling

distribution of differences between means is created by pulling infinite pairs of samples, rather than single
samples. Imagine drawing two samples, computing both means, subtracting one mean from the other to form
a difference score, and then plotting that difference score. Over time, the difference scores build up. If N ≥
100, the sampling distribution of differences between means is normal; if N ≤ 99, then the distribution is more
like “normalish” because it tends to be wide and flat. The t distribution, being flexible and able to
accommodate various sample sizes, is the probability distribution of choice for tests of differences between
means.
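The idea of a sampling distribution of differences between means can be made concrete with a small simulation (this is an illustration, not part of the book): draw many pairs of large samples from one hypothetical population, compute each pair’s difference in means, and observe that the difference scores pile up around the true population difference, here μ1 − μ2 = 0.

```python
import random
import statistics

random.seed(11)  # reproducible illustration

POP_MEAN, POP_SD = 50.0, 10.0   # one hypothetical population
N_PER_SAMPLE = 100              # "large" samples, as in the text
N_PAIRS = 2000

diffs = []
for _ in range(N_PAIRS):
    # Draw a pair of samples from the SAME population...
    sample1 = [random.gauss(POP_MEAN, POP_SD) for _ in range(N_PER_SAMPLE)]
    sample2 = [random.gauss(POP_MEAN, POP_SD) for _ in range(N_PER_SAMPLE)]
    # ...and plot (here, store) the difference between their means.
    diffs.append(statistics.mean(sample1) - statistics.mean(sample2))

# The difference scores cluster around mu1 - mu2 = 0, forming the
# sampling distribution of differences between means.
print(f"mean of differences: {statistics.mean(diffs):.3f}")
```

A histogram of `diffs` would show the roughly normal, zero-centered curve depicted in Figure 11.1.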

There are two general types of t tests, one for independent samples and one for dependent samples. The
difference between them pertains to the method used to select the two samples under examination. In
independent sampling designs, the selection of cases into one sample in no way affects, or is affected by, the
selection of cases into the other sample. If a researcher is interested in the length of prison sentences received
by male and female defendants, then that researcher would draw a sample of females and a sample of males. A
researcher investigating the effects of judicial selection type on judges’ sentencing decisions might draw a
sample of judges who were elected and a sample of those who were appointed to their posts. In neither of
these instances does the selection of one person into one sample have bearing on the selection of another into
the other sample. They are independent because they have no influence on each other.

Independent samples: Pairs of samples in which the selection of people or objects into one sample in no way affected, or was affected
by, the selection of people or objects into the other sample.

Dependent samples: Pairs of samples in which the selection of people or objects into one sample directly affected, or was directly
affected by, the selection of people or objects into the other sample. The most common types are matched pairs and repeated
measures.


In dependent-samples designs, by contrast, the two samples are related to each other in some way. The two
major types of dependent-samples designs are matched pairs and repeated measures. Matched-pairs designs
are used when researchers need an experimental group and a control group but are unable to use random
assignment to create the groups. They therefore gather a sample from a treatment group and then construct a
control group via the deliberate selection of cases that did not receive the treatment but that are similar to the
treatment group cases on key characteristics. If the unit of analysis is people, participants in the control group
might be matched to the treatment group on race, gender, age, and criminal history.

Matched-pairs design: A research strategy where a second sample is created on the basis of each case’s similarity to a case in an
existing sample.

Figure 11.1 The Sampling Distribution of Differences Between Means

Repeated-measures designs are commonly used to evaluate program impacts. These are before-and-after
designs wherein the treatment group is measured prior to the intervention of interest and then again afterward
to determine whether the post-intervention scores differ significantly from the pre-intervention scores. In
repeated measures, then, the “two” samples are actually the same people or objects measured twice.

Repeated-measures design: A research strategy used to measure the effectiveness of an intervention by comparing two sets of scores
(pre and post) from the same sample.

The first step in deciding what kind of t test to use is to figure out whether the samples are independent or
dependent. It is sometimes easier to identify dependent designs than independent ones. The biggest clue to
look for is a description of the research methods. If the samples were collected by matching individual cases
on the basis of similarities between them, or if an intervention was being evaluated by collecting data before
and after a particular event, then the samples are dependent and the dependent-samples t test is appropriate.
If the methods do not detail a process of matching or of repeated measurement, if all that is said is that two
samples were collected or a single sample was divided on the basis of a certain characteristic to form two
subsamples, then you are probably dealing with independent samples and should use the independent-samples
t test.

There is one more wrinkle. There are two types of independent-samples t tests: pooled variances and separate
variances. The former is used when the two population variances are similar to one another, whereas the latter
is used when the variances are significantly disparate. The rationale for having these two options is that when
two samples’ variances are similar, they can safely be combined (pooled) into a single estimate of the
population variance. When they are markedly unequal, however, they must be mathematically manipulated
before being combined. You will not be able to tell merely by looking at two samples’ variances whether you
should use a pooled-variance or separate-variance approach, but that is fine. In this book, you will always be
told which one to use. When we get to SPSS, you will see that this program produces results from both of
these tests, along with a criterion to use for deciding between them. (More on this later.) You will, therefore,
always be able to figure out which type of test to use.

Pooled variances: The type of t test appropriate when the samples are independent and the population variances are equal.

Separate variances: The type of t test appropriate when the samples are independent and the population variances are unequal.
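The practical difference between the two approaches comes down to how the standard error of the difference is estimated. The short Python sketch below uses the standard textbook formulas; the sample statistics plugged in are invented for illustration.

```python
import math

def se_pooled(s1, n1, s2, n2):
    """Pooled-variance standard error: combine the two sample variances
    into one weighted estimate, then scale by the sample sizes."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return math.sqrt(sp2 * (1 / n1 + 1 / n2))

def se_separate(s1, n1, s2, n2):
    """Separate-variance standard error: keep each sample's variance
    distinct rather than pooling them."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

# When the sample standard deviations are equal, the two formulas agree:
print(se_pooled(5.0, 20, 5.0, 40), se_separate(5.0, 20, 5.0, 40))

# When the variances are markedly unequal (and the n's differ), they
# diverge -- which is why the separate-variances option exists:
print(se_pooled(5.0, 20, 15.0, 40), se_separate(5.0, 20, 15.0, 40))
```

SPSS’s side-by-side output for the two tests, mentioned above, is essentially these two standard errors (and the t values built from them) printed next to each other.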


Learning Check 11.1

Be very careful about order of operations! The formulas we will encounter in this chapter require multiple steps, and you have to do those
steps in proper sequence or you will arrive at an erroneous result. Remember “Please Excuse My Dear Aunt Sally”? This mnemonic
device reminds you to use the order parentheses, exponents, multiplication, division, addition, subtraction. Your calculator automatically
employs proper order of operations, so you need to insert parentheses where appropriate so that you can direct the sequence. To illustrate
this, type the following into your calculator: 3 + 4/2 and (3 + 4)/2. What answers did your calculator produce? Now try −3² and (−3)².
What are the results?
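The same trap exists in programming languages, which apply the identical precedence rules. In Python, for example:

```python
# Division binds tighter than addition:
a = 3 + 4 / 2        # 5.0 -- the 4/2 happens first
b = (3 + 4) / 2      # 3.5 -- parentheses force the addition first

# Exponents bind tighter than the unary minus:
c = -3 ** 2          # -9  -- read as -(3**2)
d = (-3) ** 2        # 9   -- parentheses square the -3

print(a, b, c, d)
```

If your calculator gave you 5 and 3.5, then −9 and 9, it is applying the same "Aunt Sally" ordering.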

This has probably all gotten somewhat murky. Luckily, there is a simple set of steps you can follow anytime
you encounter a two-population test that will help you determine which type of t test to use. This mental
sequence is depicted in Figure 11.2.

In this chapter, we will encounter something we have touched on before but have not addressed in detail: one-
tailed tests versus two-tailed tests. We discussed the t distribution in Chapters 7 and 8. The t distribution is
symmetric and has positive and negative sides. In two-tailed tests, there are two critical values, one positive
and one negative. You learned in Chapter 8 that confidence intervals are always two-tailed. In t tests, by
contrast, some analyses will be two-tailed and some will be one-tailed. Two-tailed tests split alpha (⍺) in half
such that half is in each tail of the distribution. The critical value associated with that ⍺ is both positive and
negative. One-tailed tests, by contrast, place all ⍺ into a single tail. One-tailed tests have ⍺ in either the upper
(or positive) tail or lower (or negative) tail, depending on the specific question under investigation. The critical
value of a one-tailed test is either positive or negative.

One-tailed tests: Hypothesis tests in which the entire alpha is placed in either the upper (positive) or lower (negative) tail such that
there is only one critical value of the test statistic. Also called directional tests.

The choice of a one-tailed test versus a two-tailed test is generally made on a case-by-case basis. It depends on
whether a researcher has a good reason to believe that the relationship under examination should be positive
or negative. Suppose that you are studying an in-prison treatment program that focuses on improving
participants’ literacy skills. You would measure their reading levels before the program began and then again
after it had ended, and would expect to see an increase—you have good reason to predict that post-
intervention literacy skills would be greater than pre-intervention ones. In this case, a one-tailed test would be
in order (these are also called directional tests, since a prediction is being made about the direction of a
relationship). Now suppose you want to know whether that literacy program works better for men or for
women. You do not have any particular reason for thinking that it would be more effective for one group than
the other, so you set out merely to test for any difference at all, regardless of direction. This would be cause for
using a two-tailed test (also called a nondirectional test). Let us work our way through some examples and
discuss one-tailed and two-tailed tests as we go.
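The decision rule behind the two kinds of tests can be sketched as a small function. The cutoffs below are the familiar large-sample (z) critical values at ⍺ = .05: ±1.96 when ⍺ is split across both tails and 1.645 when all of it sits in one tail; with smaller samples you would look up t critical values instead.

```python
def reject_null(obt, tails):
    """Return True if the obtained statistic lands in the rejection region.

    tails: 'two'   -- alpha split in half, one critical value per tail
           'upper' -- all of alpha in the positive tail
           'lower' -- all of alpha in the negative tail
    Critical values are the large-sample z cutoffs for alpha = .05.
    """
    if tails == "two":
        return abs(obt) > 1.96    # .025 in each tail
    if tails == "upper":
        return obt > 1.645        # entire alpha in the positive tail
    if tails == "lower":
        return obt < -1.645       # entire alpha in the negative tail
    raise ValueError("tails must be 'two', 'upper', or 'lower'")

# An obtained value of 1.80 is significant one-tailed but not two-tailed,
# which is why the choice of tails must be made before running the test:
print(reject_null(1.80, "upper"), reject_null(1.80, "two"))
```

This also shows why one-tailed tests are "easier" to pass in the predicted direction: the single critical value sits closer to zero than either of the two-tailed ones.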

Figure 11.2 Steps for Deciding Which t Test to Use


Independent-Samples t Tests

A Two-Tailed Test With Equal Population Variances: Transferred Female
Juveniles’ Ethnicity and Mean Age of Arrest

There is a theoretical and empirical connection between how old people are when they start committing
delinquent offenses (this is called the age of onset) and their likelihood of continuing law-breaking behavior in
adulthood. All else being equal, younger ages of onset are associated with greater risks of adult criminal
activity. Using the Juvenile Defendants in Criminal Court (JDCC) data set (Data Sources 11.1), we can test
for a significant difference in the mean age at which juveniles were arrested for the offense that led to them
being transferred to adult court. To address questions about gender and ethnicity, the sample is narrowed to
females, and we will test for an age difference between Hispanics and non-Hispanic whites in this subsample.
Among Hispanic female juveniles in the JDCC sample (N = 44), the mean age of arrest was 15.89 years (s =
1.45). Among non-Hispanic whites (N = 31), the mean age of arrest was 16.57 years (s = 1.11). Since the IV is
ethnicity (Hispanic; non-Hispanic white) and the DV is age at arrest (years), a t test is the proper analytic strategy. We will use an
⍺ level of .05, a presumption that the population variances are equal, and the five steps of hypothesis testing.

Data Sources 11.1 Juvenile Defendants in Criminal Courts

The JDCC is a subset of the Bureau of Justice Statistics’ (BJS) State Court Processing series that gathers information on defendants
convicted of felonies in large, urban counties. BJS researchers pulled information about juveniles charged with felonies in 40 of these
counties in May 1998. Each case was tracked through disposition. Information about the juveniles’ demographics, court processes,
final dispositions, and sentences was recorded. Due to issues with access to and acquisition of data in some of the counties, the
JDCC is a nonprobability sample, and conclusions drawn from it should therefore be interpreted cautiously (BJS, 1998).

It is useful in an independent-samples t test to first make a table that lays out the relevant pieces of
information that you will need for the test. Table 11.1 shows these numbers. It does not matter which sample
you designate Sample 1 and which you call Sample 2 as long as you stick with your original designation
throughout the course of the hypothesis test. Since it is easy to simply designate the samples in the order in
which they appear in the problem, let us call Hispanic females Sample 1 and white females Sample 2.

We will use a two-tailed test because we have no solid theoretical reason for thinking that non-Hispanic
whites’ mean would be greater than Hispanics’ or vice versa. The alternative hypothesis will merely specify a
difference (i.e., an inequality) between the means, with no prediction about which one is greater than or less
than the other.


Step 1. State the null (H0) and alternative (H1) hypotheses.

In t tests, the null (H0) and alternative (H1) are phrased in terms of the population means. Recall that

population means are symbolized μ (the Greek letter mu, pronounced “mew”). We use the population
symbols rather than the sample symbols because the goal is to make a statement about the relationship, or lack
thereof, between two variables in the population. The null hypothesis for a t test is that the means are equal:

H0: μ1 = μ2

Equivalence in the means suggests that the IV is not exerting an impact on the DV. Another way of thinking
about this is that H0 predicts that the two samples came from the same population. In the context of the

present example, retaining the null would indicate that ethnicity does not affect female juveniles’ age of arrest
(i.e., that all female juveniles are part of the same population).

The alternative or research hypothesis is that there is a significant difference between the population means or,
in other words, that there are two separate populations, each with its own mean:

H1: µ1 ≠ µ2

Rejecting the null would lead to the conclusion that the IV does affect the DV; here, it would mean that
ethnicity does appear related to age of arrest. Note that this phrasing of the alternative hypothesis is specific to
two-tailed tests—the “not equal” sign implies no prediction about the direction of the difference. The
alternative hypothesis will be phrased slightly differently for one-tailed or directional tests.

Step 2. Identify the distribution and compute the degrees of freedom.

As mentioned earlier, two-population tests for differences between means employ the t distribution. The t
distribution, you should recall, is a family of curves that changes shape depending on degrees of freedom (df).
The t distribution approximates the normal curve at large df and gets wider and flatter as df declines.

The df formula differs across the three types of t tests, so you have to identify the proper test before you can
compute the df. Using the sequence depicted in Figure 11.2, we know (1) that the samples are independent
because this is a random sample divided into two groups and (2) that the population variances are equal. This
leads us to choose the pooled-variances t test. The df formula is

df = N1 + N2 − 2

where

N1 = the size of the first sample and

N2 = the size of the second sample.

Pulling the sample sizes from Table 11.1,


df = 44 + 31 − 2 = 73

Step 3. Identify the critical value and state the decision rule.

Three pieces of information are required to find the critical value of t (tcrit) using the t table: the number of

tails in the test, the alpha level, and the df. The exact df value of 73 does not appear on the table, so we use the
value that is closest to it, which is 60. With two tails, an ⍺ of .05, and 73 degrees of freedom, we see that tcrit

is 2.000.

This is not the end of finding the critical value, though, because we still have to figure out the sign or signs;
that is, we need to determine whether tcrit is positive, negative, or both. A one-tailed test has only one critical

value and it is either positive or negative. In two-tailed tests, there are always two critical values. Their
absolute values are the same, but one is negative and one is positive. Figure 11.3 illustrates this.

Given that there are two tails in this current test and, therefore, two critical values, tcrit = ±2.000. The decision

rule is stated thus: If tobt is either greater than 2.000 or less than −2.000, H0 will be rejected. The decision rule

has to be stated as an “either/or” proposition because of the presence of two critical values. There are, in
essence, two ways for the null to be rejected: tobt could be out in the right tail beyond 2.000 or out in the left

tail beyond −2.000.
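The either/or logic of the two-tailed decision rule can be captured in a short Python sketch (the helper function is ours, for illustration only):

```python
def reject_null_two_tailed(t_obt, t_crit):
    """Two-tailed decision rule: reject H0 when t_obt lands beyond
    either the positive or the negative critical value."""
    return t_obt > t_crit or t_obt < -t_crit

# The example's obtained value (-2.34, derived in Step 4) falls beyond -2.000
print(reject_null_two_tailed(-2.34, 2.000))  # True
```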

Figure 11.3 The Critical Values for a Two-Tailed Test With ⍺ = .05 and df = 73

Step 4. Calculate the obtained value of the test statistic.

The formulas for the obtained value of t (tobt) vary across the three different types of t tests; however, the

common thread is to have (1) a measure of the difference between means in the numerator and (2) an estimate
of the standard error of the sampling distribution of differences between means in the denominator
(remember that the standard error is the standard deviation of a sampling distribution). The estimated
standard error is symbolized σ̂x̄1−x̄2, and the formula for estimating it with pooled variances is

σ̂x̄1−x̄2 = √[(N1s1² + N2s2²)/(N1 + N2 − 2)] × √[(N1 + N2)/(N1N2)]

This formula might look a bit daunting, but keep in mind that it comprises only sample sizes and standard
deviations, both of which are numbers you are accustomed to working with. The most important thing is to

314

work through the formula carefully. Plug the numbers in correctly, use proper equation-solving techniques
(including order of operations), and round correctly. Entering the numbers from our example yields

= 1.32(.22)

= .29

This is our estimate of the standard error (i.e., the standard deviation of the sampling distribution). Recall that
this is not tobt! Be careful. The next step is to plug the standard error into the tobt formula. This formula is

tobt = (x̄1 − x̄2)/σ̂x̄1−x̄2

Using our numbers, we perform the calculation:

tobt = (15.89 − 16.57)/.29 = −.68/.29 = −2.34

This is the final answer! The obtained value of t is −2.34. Step 4 is done.

Step 5. Make a decision about the null, and state the substantive conclusion.

We said in the decision rule that if tobt was either greater than 2.000 or less than −2.000, the null would be

rejected. So, what will we do? If you said, “Reject the null,” you are correct. Since tobt is less than −2.000, the

null is rejected. The conclusion is that non-Hispanic white and Hispanic female juveniles transferred to adult
court differ significantly in terms of mean age of arrest for their current offense. Another way to think about it
is that there is a relationship between ethnicity and mean age at arrest. Looking at the means, it is clear that
Hispanic youths were younger, on average (mean = 15.89), than non-Hispanic white youths were (mean =
16.57).

The interpretation of a significant result in a two-tailed test is complicated and must be done carefully. In the
present example, we used a two-tailed test because we did not have sufficient theory or prior empirical
evidence to make a defensible prediction about the direction of the difference. The fact that we ultimately

315

found that Hispanics’ mean age was lower is not enough to arrive at a conclusion about the reason for this.
Never let your data drive your thinking—do not construct a reality around an empirical finding. In an
exploratory analysis, it is best to avoid speculating about the larger implications of your results. The more
scientifically sound approach is to continue this line of research to uncover potential reasons why Hispanic
females’ age of arrest might be lower than non-Hispanic white females’ and then collect more data to test this
prediction.
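Hand calculations like these are easy to check with a short script. The sketch below (the function name is ours) uses the pooled standard-error formula shown above. Because the code carries full precision instead of rounding the standard error to .29, it returns about −2.17 rather than the hand-rounded −2.34; the decision to reject H0 at ±2.000 is the same either way.

```python
import math

def pooled_t(mean1, s1, n1, mean2, s2, n2):
    """Pooled-variances independent-samples t test from summary statistics."""
    # Pooled estimate of the standard error of the difference between means
    se = math.sqrt((n1 * s1**2 + n2 * s2**2) / (n1 + n2 - 2)) * \
         math.sqrt((n1 + n2) / (n1 * n2))
    df = n1 + n2 - 2
    return (mean1 - mean2) / se, df

# Hispanic females (Sample 1) vs. non-Hispanic white females (Sample 2)
t_obt, df = pooled_t(15.89, 1.45, 44, 16.57, 1.11, 31)
print(round(t_obt, 2), df)  # -2.17 73
```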

A One-Tailed Test With Unequal Population Variances: Attorney Types and
Time to Disposition

For the second t-test example, we will again use the JDCC data on juveniles diverted to adult criminal courts.
Let’s consider whether the length of time it takes for a juvenile’s case to be disposed of is affected by the type
of attorney the juvenile has. The prediction will be that juvenile defendants who retain private counsel
experience significantly longer time-to-disposition compared to those who use the services of public
defenders. This, theoretically, is because private attorneys might file more pretrial motions and spend more
time negotiating with the prosecutor and the judge. The sample is narrowed to juveniles charged with violent
offenses who were not released pending trial and who were ultimately sentenced to prison. Those with private
attorneys (N = 36) had a mean of 7.93 months to disposition (s = 4.53), whereas those represented by public
defenders (N = 234) experienced a mean of 6.36 months (s = 3.66). We will call the defendants who retained
private attorneys Sample 1 and those who were represented by public defenders Sample 2. Using an alpha
level of .01 and the assumption that the population variances are unequal, we will conduct a five-step
hypothesis test to determine whether juveniles represented by private attorneys experience significantly longer times to disposition.
Table 11.2 shows the numbers we will need for the analysis.

Step 1. State the null (H0) and alternative (H1) hypotheses.

The null hypothesis is the same as that used above (H0: µ1 = µ2) and reflects the prediction that the two means

do not differ. The alternative hypothesis used in the previous example, however, does not apply in the present
context because this time, we are making a prediction about which mean will be greater than the other. The
nondirectional sign (≠) must therefore be replaced by a sign that indicates a specific direction. This will either
be a greater than (>) or less than (<) sign. We are predicting that private attorneys will be associated with significantly longer disposition times, so we can conceptualize the hypothesis as private attorney disposition time > public defender
disposition time. Since defendants represented by private attorneys are Sample 1 and those by public defenders
Sample 2, the alternative hypothesis is

H1: μ1 > μ2


Step 2. Identify the distribution and compute the degrees of freedom.

The distribution is still t, but the df equation for unequal population variances differs sharply from that for
equal variances because the situation of unequal variances mandates the use of the separate-variances t test.
The df formula is obnoxious, but as with the prior formulas we have encountered, you have everything you
need to solve it correctly—just take care to plug in the right numbers and use proper order of operations.

Plugging in the correct numbers from the current example,


= 40

Step 2 is complete; df = 40. We can use this df to locate the critical value of the test statistic.

Step 3. Identify the critical value and state the decision rule.

With one tail, an ⍺ of .01, and 40 degrees of freedom, tcrit = 2.423. The sign of the critical value is positive
because the alternative hypothesis predicts that μ1 > μ2. Revisit Figure 11.1 for an illustration. When the
alternative predicts that μ1 < μ2, the critical value will be on the left (negative) side of the distribution, and when
the alternative is that μ1 > μ2, tcrit will be on the right (positive) side. The decision rule is that if tobt is
greater than 2.423, H0 will be rejected.

Step 4. Compute the obtained value of the test statistic.

As before, the first step is to obtain an estimate of the standard error of the sampling distribution. Since the
population variances are unequal, the separate-variances version of independent-samples t must be used. The
standard error formula for the difference between means in a separate-variances t test is

σ̂x̄1−x̄2 = √[s1²/(N1 − 1) + s2²/(N2 − 1)]

Plugging in the numbers from the present example yields

σ̂x̄1−x̄2 = √[4.53²/35 + 3.66²/233] = √(.59 + .06) = √.65 = .81

Now, the standard error estimate can be entered into the same tobt formula used with the pooled-variances t
test. Using Formula 11(3),

tobt = (7.93 − 6.36)/.81 = 1.57/.81 = 1.94

Step 5. Make a decision about the null and state the substantive conclusion.

The decision rule stated that the null would be rejected if tobt exceeded 2.423. Since tobt ended up being 1.94,

the null is retained. Juveniles who had privately retained attorneys did not experience a statistically significant
increase in the amount of time it took for their cases to be resolved, compared to juveniles who had public
defenders. Another way to say this is that there is no relationship between attorney type and disposition time.
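A parallel sketch works for the separate-variances case, assuming the standard-error estimate with N − 1 denominators used in this chapter. Carrying full precision gives about 1.96 rather than the hand-rounded 1.94; either way, tobt does not exceed 2.423, so the null is retained.

```python
import math

def separate_variances_t(mean1, s1, n1, mean2, s2, n2):
    """Separate-variances t from summary statistics, using the estimated
    standard error sqrt(s1^2/(n1 - 1) + s2^2/(n2 - 1))."""
    se = math.sqrt(s1**2 / (n1 - 1) + s2**2 / (n2 - 1))
    return (mean1 - mean2) / se

# Private attorneys (Sample 1) vs. public defenders (Sample 2)
t_obt = separate_variances_t(7.93, 4.53, 36, 6.36, 3.66, 234)
print(round(t_obt, 2))  # 1.96
```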


Dependent-Samples t Tests

The foregoing discussion centered on the situation in which a researcher is working with two independently
selected samples; however, as described earlier, there are times when the samples under examination are not
independent. The main types of dependent samples are matched pairs and repeated measures. Dependent
samples require a t formula different from that used when the study samples are independent because of the
manipulation entailed in selecting dependent samples. With dependent-samples t, the sample size (N) is not
the total number of people or objects in the sample but, rather, the number of pairs being examined. We will
go through an example now to demonstrate the use of this t test.
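The logic of the dependent-samples t can be sketched with hypothetical paired data (the scores below are invented for illustration). Note that N is the number of pairs and the test operates on each pair's difference score.

```python
import math

def dependent_t(pairs):
    """Dependent-samples t test: operates on difference scores, with
    N equal to the number of pairs and df = N - 1."""
    diffs = [a - b for a, b in pairs]
    n = len(diffs)                       # number of pairs, not total cases
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    se = sd / math.sqrt(n)               # standard error of the mean difference
    return mean_d / se, n - 1

# Five hypothetical matched pairs of scores
pairs = [(3.0, 5.0), (2.5, 4.0), (4.0, 4.5), (1.5, 3.0), (2.0, 2.5)]
t_obt, df = dependent_t(pairs)
print(round(t_obt, 2), df)  # -4.0 4
```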

Research Example 11.2 Do Mentally Ill Offenders’ Crimes Cost More?

There is ongoing debate about the precise role of mental illness (MI) in offending. Some individuals with MI are unstable or
antisocial, but it would be wrong to stigmatize an entire group based on the behavior of a few members. One question that could be
asked is whether the crimes of offenders with MI exact a particularly heavy toll on taxpayers relative to offenders who do not have
MI. Ostermann and Matejkowski (2014) collected data on all persons released from New Jersey prisons in 2006. The data included
the number of times each ex-prisoner was rearrested within a 3-year follow-up period. First, the researchers divided the group
according to whether or not each person had received a MI diagnosis and then calculated the average cost of each group’s recidivism.
The results showed that MI offenders’ crimes were nearly three times more expensive compared to non-MI offenders. Next, the
authors matched the sample of MI offenders to a subsample of non-MI offenders on the basis of each person’s demographic
characteristics and offense histories and then recalculated each group’s average cost. The results changed dramatically. After the one-
to-one matching procedure, the non-MI group’s average cost was more than double that of the MI group. It turns out that the initial
results—the ones suggesting that MI offenders’ crimes are much more expensive—are misleading. It is not good policy to use the
mere existence of MI as cause to enhance supervision or restrictions on ex-prisoners. What should be done instead is to focus on the
risk factors that are associated with recidivism among both MI and non-MI offenders. This policy focus would cut costs and create
higher levels of social justice.

Dependent-Samples t Test: Female Correctional Officers and Institutional
Security

The traditionally male-dominated field of correctional security is gradually being opened to women who wish
to work in jails and prisons, yet there are lingering concerns regarding how well female correctional officers
can maintain order in male institutions. Critics claim that women are not as capable as men when it comes to
controlling inmates, which threatens the internal safety and security of the prison environment. Let us test the
hypothesis that facilities with relatively small percentages of female security staff will have lower inmate-on-
staff assault rates relative to those institutions with high percentages of female security staff because security in
the latter will be compromised. We will use data from the Census of Jails (COJ; Data Sources 3.1) and an
alpha level of .05. The first sample consists of five jails with below-average percentages of female security staff,
and the second sample contains five jails selected on the basis of each one’s similarity to a jail in the first
sample (i.e., the second sample’s jails are all male, state-run, maximum-security facilities in Texas with
inmate totals similar to those of the first sample). The difference between the samples is that the second
sample has above-average percentages of female security staff. Table 11.3 contains the raw data.

Step 1. State the null (H0) and alternative (H1) hypotheses.


The null, as is always the case with t tests, is H0: µ1 = µ2. It is being suggested in this problem that low-

percentage female jails (i.e., jails with greater percentages of male staff) should have lower assault rates than
one to look at. In Figure 11.6, F = .126 and p = .724, so the null is retained and the variances are equal. The
obtained value of t is −2.203, which is close to the value we arrived at by hand (−2.34). The p value is .031,
which is less than .05, so judging by this output, the null should be rejected. (Typically, the alpha level .05 is
used when interpreting SPSS output. Unless there is good reason to do otherwise, researchers generally

6.93, the null will be rejected. The decision rule in ANOVA is always phrased using a greater than inequality
because the F distribution contains only positive values, so the critical region is always in the right-hand tail.

Step 4. Compute the obtained value of the test statistic.

Step 4 entails a variety of symbols and abbreviations, all of which are listed and defined in Table 12.2. Stop for
a moment and study this chart. You will need to know these symbols and what they mean in order to
understand the concepts and formulas about to come.

You already know that each group has a sample size (nk) and that the entire sample has a total sample size
(N). Each group also has its own mean (x̄k), and the entire sample has a grand mean (x̄G). These sample
sizes and means, along with other numbers that will be discussed shortly, are used to calculate the three types
of sums of squares. The sums of squares are then used to compute mean squares, which, in turn, are used to
derive the obtained value of F. We will first take a look at the formulas for the three types of sums of squares:
total (SST), between-group (SSB), and within-group (SSW).

SST = ΣkΣi x²ik − (ΣkΣi xik)²/N

where

Σi xik = the sum of all scores i in group k,

ΣkΣi xik = the sum of each group total across all groups in the sample,

x = the raw scores, and

N = the total sample size across all groups.

SSB = Σk nk(x̄k − x̄G)²

where

nk = the number of cases in group k,

x̄k = the mean of group k, and

x̄G = the grand mean across all groups.

The double summation signs in the SST formula are instructions to sum sums. The i subscript denotes

individual scores and k signifies groups, so the double sigmas direct you to first sum the scores within each
group and to then add up all the group sums to form a single sum representing the entire sample.
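The double-summation instruction translates directly into code. The raw scores below are hypothetical, chosen only so that the group sums (21, 15, and 53) reproduce the group means used in this example (4.20, 3.75, and 8.83):

```python
# Hypothetical raw scores for three groups
groups = [[5, 3, 6, 4, 3], [2, 4, 5, 4], [9, 8, 9, 10, 8, 9]]

# First summation: total each group's scores
group_sums = [sum(g) for g in groups]

# Second summation: add the group totals into one sum for the whole sample
grand_total = sum(group_sums)
print(group_sums, grand_total)  # [21, 15, 53] 89
```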

Sums of squares are measures of variation. They calculate the amount of variation that exists within and
between the groups’ raw scores, squared scores, and means. The SSB formula should look somewhat familiar

—in Chapters 4 and 5, we calculated deviation scores by subtracting the sample mean from each raw score.
Here, we are going to subtract the grand mean from each group mean. See the connection? This strategy
produces a measure of variation. The sums of squares provide information about the level of variability within
each group and between the groups.

The easiest way to compute the sums of squares is to use a table. What we ultimately want from the table are
(a) the sums of the raw scores for each group, (b) the sums of each group’s squared raw scores, and (c) each
group’s mean. All of these numbers are displayed in Table 12.3.

We also need the grand mean, which is computed by summing all of the raw scores across groups and dividing
by the total sample size N, as such:

x̄G = ΣΣxik/N

Here,

x̄G = 89/15 = 5.93

With all of this information, we are ready to compute the three types of sums of squares, as follows. The
process begins with SST:

SST = 715 − 89²/15

= 715 − 528.07

= 186.93


Then it is time for the between-groups sums of squares:

SSB = 5(4.20 − 5.93)² + 4(3.75 − 5.93)² + 6(8.83 − 5.93)²

= 5(−1.73)² + 4(−2.18)² + 6(2.90)²

= 5(2.99) + 4(4.75) + 6(8.41)

= 14.95 + 19.00 + 50.46

= 84.41

Next, we calculate the within-groups sums of squares:

SSW = SST − SSB = 186.93 − 84.41 = 102.52


Learning Check 12.1

A great way to help you check your math as you go through Step 4 of ANOVA is to remember that the final answers for any of the sums
of squares, mean squares, or Fobt will never be negative. If you get a negative number for any of your final answers in Step 4, you will

know immediately that you made a calculation error, and you should go back and locate the mistake. Can you identify the reason why all of these final answers will always be positive?

We now have what we need to compute the mean squares (symbolized MS). Mean squares transform sums of
squares (measures of variation) into variances by dividing SSB and SSW by their respective degrees of freedom,
dfB and dfW. This is a method of standardization. The mean squares formulas are

MSB = SSB/dfB

MSW = SSW/dfW

Plugging in our numbers,

MSB = 84.41/2 = 42.21

MSW = 102.52/12 = 8.54

We now have what we need to calculate Fobt. The F statistic is the ratio of between-group variance to within-
group variance and is computed as

Fobt = MSB/MSW

Inserting the numbers from the present example,

Fobt = 42.21/8.54 = 4.94

Step 4 is done! Fobt = 4.94.

Step 5. Make a decision about the null hypothesis and state the substantive conclusion.

The decision rule stated that if the obtained value exceeded 6.93, the null would be rejected. With an Fobt of

4.94, the null is retained. The substantive conclusion is that there is no significant difference between the
groups in terms of sentence length received. In other words, male juvenile weapons offenders’ jail sentences do
not vary as a function of the type of attorney they had. That is, attorney type does not influence jail sentences.
This finding makes sense. Research is mixed with regard to whether privately retained attorneys (who cost

349

defendants a lot of money) really are better than publicly funded defense attorneys (who are provided to
indigent defendants for free). While there is a popular assumption that privately retained attorneys are better,
the reality is that publicly funded attorneys are frequently as or even more skilled than private ones are.

We will go through another ANOVA example. If you are not already using your calculator to work through
the steps as you read and make sure you can replicate the results obtained here in the book, start doing so.
This is an excellent way to learn the material.
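The computations in Step 4 can also be bundled into one function for checking your work. This is a sketch using the chapter's computational sums-of-squares formulas; the three-group data at the bottom are invented for illustration.

```python
def one_way_anova(groups):
    """One-way ANOVA by hand: returns (SST, SSB, SSW, F_obt)."""
    scores = [x for g in groups for x in g]
    N, k = len(scores), len(groups)
    grand_mean = sum(scores) / N
    # SST: sum of squared scores minus (sum of scores)^2 / N
    ss_t = sum(x ** 2 for x in scores) - sum(scores) ** 2 / N
    # SSB: sum over groups of n_k * (group mean - grand mean)^2
    ss_b = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_w = ss_t - ss_b                              # SSW by subtraction
    f_obt = (ss_b / (k - 1)) / (ss_w / (N - k))     # F = MSB / MSW
    return ss_t, ss_b, ss_w, f_obt

# Hypothetical three-group example
ss_t, ss_b, ss_w, f_obt = one_way_anova([[1, 2, 3], [2, 3, 4], [4, 5, 6]])
print(round(ss_t, 2), round(ss_b, 2), round(ss_w, 2), round(f_obt, 2))  # 20.0 14.0 6.0 7.0
```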

For the second example, we will study handguns and murder rates. Handguns are a prevalent murder weapon
and, in some locations, they account for more deaths than all other modalities combined. In criminal justice
and criminology researchers’ ongoing efforts to learn about violent crime, the question arises as to whether
there are geographical differences in handgun-involved murders. Uniform Crime Report (UCR) data can be
used to find out whether there are significant regional differences in handgun murder rates (calculated as the
number of murders by handgun per 100,000 residents in each state). A random sample of states was drawn,
and the selected states were divided by region. Table 12.4 contains the data in the format that will be used for
computations. Alpha will be set at .05.

Step 1. State the null (H0) and alternative (H1) hypotheses.

H0: µ1 = µ2 = µ3 = µ4

H1: some µi ≠ some µj

Step 2. Identify the distribution and compute the degrees of freedom.

This being an ANOVA, the F distribution will be employed. There are four groups, so k = 4. The total
sample size is N = 5 + 5 + 7 + 6 = 23. Using Formulas 12(1) and 12(2), the degrees of freedom are

dfB = 4 − 1 = 3

dfW = 23 − 4 = 19

Step 3. Identify the critical value and state the decision rule.

With ⍺ = .05 and the earlier derived df values, Fcrit = 3.13. The decision rule states that if Fobt > 3.13 , H0 will

be rejected.


Step 4. Calculate the obtained value of the test statistic.

We begin by calculating the total sums of squares:

= 78.75 − 49.32

= 29.43

Before computing the between-groups sums of squares, we need the grand mean:

x̄G = ΣΣxik/N = 1.46

Now SSB can be calculated:

SSB = 5(.93 − 1.46)² + 5(.88 − 1.46)² + 7(2.67 − 1.46)² + 6(1.00 − 1.46)²

= 5(−.53)² + 5(−.58)² + 7(1.21)² + 6(−.46)²

= 5(.28) + 5(.34) + 7(1.46) + 6(.21)

= 1.40 + 1.70 + 10.22 + 1.26


= 14.58

Next, we calculate the within-groups sums of squares:

SSW = 29.43 − 14.58 = 14.85

Plugging our numbers into Formulas 12(7) and 12(8) for mean squares gives

MSB = 14.58/3 = 4.86

MSW = 14.85/19 = .78

Finally, using Formula 12(9) to derive Fobt,

Fobt = 4.86/.78 = 6.23

This is the obtained value of the test statistic. Fobt = 6.23, and Step 4 is complete.

Step 5. Make a decision about the null and state the substantive conclusion.

In Step 3, the decision rule stated that if Fobt turned out to be greater than 3.13, the null would be rejected.

Since Fobt ended up being 6.23, the null is indeed rejected. The substantive interpretation is that there is a

significant difference across regions in the handgun-murder rate.

Research Example 12.2 Are Juveniles Who Are Transferred to Adult Courts Seen as More Threatening?

Recent decades have seen a shift in juvenile-delinquency policy. There has been an increasing zero tolerance sentiment with respect
to juveniles who commit serious offenses. The reaction by most states has been to make it easier for juveniles to be tried as adults,
which allows their sentences to be more severe than they would be in juvenile court. The potential problem with this strategy is that
there is a prevalent stereotype about juveniles who get transferred or waived to adult court: They are often viewed as vicious, cold-
hearted predators. Judges, prosecutors, and jurors might be biased against transferred juveniles, simply because they got transferred.
This means that a juvenile and an adult could commit the same offense and yet be treated very differently by the court, potentially
even ending up with different sentences.

Tang, Nuñez, and Bourgeois (2009) tested mock jurors’ perceptions about the dangerousness of 16-year-olds who were transferred
to adult court, 16-year-olds who were kept in the juvenile justice system, and 19-year-olds in adult court. They found that mock
jurors rated transferred 16-year-olds as committing more serious crimes, being more dangerous, and having a greater likelihood of
chronic offending relative to non-transferred juveniles and to 19-year-olds. The following table shows the means, standard
deviations, and F tests.

Source: Adapted from Table 1 in Tang et al. (2009).

As you can see, all the F statistics were large; the null was rejected for each test. The transferred juveniles’ means are higher than the
other two groups’ means for all measures. These results suggest that transferring juveniles to adult court could have serious
other two groups’ means for all measures. These results suggest that transferring juveniles to adult court could have serious
implications for fairness. In some cases, prosecutors have discretion in deciding whether to waive a juvenile over to adult court,
which means that two juveniles guilty of similar crimes could end up being treated very differently. Even more concerning is the
disparity between transferred youths and 19-year-olds—it appears that juveniles who are tried in adult court could face harsher
penalties than adults, even when their crimes are the same.

As another example, we will analyze data from the Firearm Injury Surveillance Study (FISS; Data Sources
8.2) to find out whether victim age varies significantly across the different victim–offender relationships.
There are four relationship categories, and a total sample size of 22. Table 12.5 shows the data and
calculations of the numbers needed to complete the hypothesis test. We will proceed using the five steps.
Alpha will be set at .05.

Step 1. State the null (H0) and alternative (H1) hypotheses.

H0: µ1 = µ2 = µ3 = µ4

H1: some µi ≠ some µj

Step 2. Identify the distribution and compute the degrees of freedom.

This being an ANOVA, the F distribution will be employed. There are four groups, so k = 4. The total
sample size is N = 7 + 6 + 4 + 5 = 22. Using Formulas 12(1) and 12(2), the degrees of freedom are

dfB = 4 − 1 = 3

dfW = 22 − 4 = 18

Step 3. Identify the critical value and state the decision rule.

With ⍺ =.05 and the earlier derived df values, Fcrit = 3.16. The decision rule states that if Fobt > 3.16 , H0 will

be rejected.


Step 4. Calculate the obtained value of the test statistic.

The total sums of squares for the data in Table 12.5 is

= 18,472 − 17,360.18

= 1,111.82

Next, we need the grand mean:

Now SSB can be calculated:

SSB = 7(29.43 − 28.09)² + 6(24.17 − 28.09)² + 4(40.00 − 28.09)² + 5(21.40 − 28.09)²

= 7(1.34)² + 6(−3.92)² + 4(11.91)² + 5(−6.69)²

= 7(1.80) + 6(15.37) + 4(141.85) + 5(44.76)

= 12.60 + 92.22 + 567.40 + 223.80

= 896.02

Next, we calculate the within-groups sums of squares:

SSW = 1,111.82 − 896.02 = 215.80

And the mean squares are

MSB = 896.02/3 = 298.67

MSW = 215.80/18 = 11.99

Finally, Fobt is calculated as

Fobt = 298.67/11.99 = 24.91

And Step 4 is done. Fobt = 24.91.

Step 5. Make a decision about the null and state the substantive conclusion.

In Step 3, the decision rule stated that if Fobt turned out to be greater than 3.16, the null would be rejected.

Since Fobt is 24.91, we reject the null. It appears that victim age does vary across the different victim–offender

relationship categories.

After finding a significant F indicating that at least one group stands out from at least one other one, the
obvious question is, “Which group or groups are different?” We might want to know which region or regions
have a significantly higher or lower rate than the others or which victim–offender relationship or relationships
contain significantly younger or older victims. The F statistic is silent with respect to the location and number
of differences, so post hoc tests are used to get this information. The next section covers post hoc tests and
measures of association that can be used to gauge relationship strength.


When the Null Is Rejected: A Measure of Association and Post Hoc Tests

If the null is not rejected in ANOVA, then the analysis stops because the conclusion is that the IV and DV
are not related. If the null is rejected, however, it is customary to explore the statistically significant results in
more detail using measures of association (MAs) and post hoc tests. Measures of association permit an
assessment of the strength of the relationship between the IV and the DV, and post hoc tests allow
researchers to determine which groups are significantly different from which other ones. The MA that will be
discussed here is fairly easy to calculate by hand, but the post hoc tests will be discussed and then
demonstrated in the SPSS section, because they are computationally intensive.

Omega squared (ω²) is an MA for ANOVA that is expressed as the proportion of the total variability in the
sample that is due to between-group differences. Omega squared can be left as a proportion or multiplied by

100 to form a percentage. Larger values of ω² indicate stronger IV–DV relationships, whereas smaller values
signal weaker associations. Omega squared is computed as

ω² = (SSB − (k − 1)MSW)/(SST + MSW)

Omega squared: A measure of association used in ANOVA when the null has been rejected in order to assess the magnitude of the
relationship between the independent and dependent variables. This measure shows the proportion of the total variability in the
sample that is attributable to between-group differences.

Earlier, we found a statistically significant relationship between region and handgun murder rates. Now we

can calculate how strong the relationship is. Using ω²,

ω² = (14.58 − 3(.78))/(29.43 + .78) = 12.24/30.21 = .41

Omega squared shows that 41% of the total variability in the states’ handgun-murder rates is a function of
regional characteristics. Region appears to be a very important determinant of the prevalence of handgun
murders.

We can do the same for the test showing significant differences in victims’ ages across four different types of victim–offender relationships. Plugging the relevant numbers into Formula 12(10) yields ω2 = .77. This means that 77% of the variability in victims’ ages is attributable to the relationship between the victim
and the shooter. This points to age being a function of situational characteristics. Younger people are more at
risk of firearm injuries in certain types of situations, while older people face greater risk in other
circumstances. Of course, we still do not know which group or groups are significantly different from which
other group or groups. For this, post hoc tests are needed.


There are many different types of post hoc tests, so two of the most popular ones are presented here. The first
is Tukey’s honest significant difference (HSD). Tukey’s test compares each group to all the others in a series
of two-variable hypothesis tests. The null hypothesis in each comparison is that both group means are equal;
rejection of the null means that there is a significant difference between them. In this way, Tukey’s is
conceptually similar to a series of t tests, though the HSD method sidesteps the problem of familywise error.

Tukey’s honest significant difference: A widely used post hoc test that identifies the number and location(s) of differences between
groups.
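The familywise problem that Tukey's test sidesteps is easy to quantify for independent tests: the probability of at least one Type I error across c comparisons is 1 − (1 − α)^c. A quick sketch (this assumes independent tests, which a series of pairwise t tests only approximates):

```python
def familywise_error(alpha, n_tests):
    """Probability of at least one Type I error across n_tests
    independent tests, each conducted at level alpha."""
    return 1 - (1 - alpha) ** n_tests

# Four groups imply 4 * 3 / 2 = 6 pairwise t tests; at alpha = .05 the
# familywise error rate is far above the nominal .05.
print(round(familywise_error(0.05, 6), 3))  # 0.265
```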

Bonferroni is another commonly used test and owes its popularity primarily to the fact that it is fairly
conservative. This means that it minimizes Type I error (erroneously rejecting a true null) at the cost of
increasing the likelihood of a Type II error (erroneously retaining a false null). The Bonferroni, though, has
been criticized for being too conservative. In the end, the best method is to select both Tukey’s and
Bonferroni in order to garner a holistic picture of your data and make an informed judgment.

Bonferroni: A widely used and relatively conservative post hoc test that identifies the number and location(s) of differences between
groups.
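At its core, the Bonferroni adjustment simply divides alpha by the number of pairwise comparisons (SPSS reports adjusted p values instead, but the arithmetic is the same idea). A sketch, with helper names of our own choosing:

```python
from itertools import combinations

def bonferroni_adjusted_alpha(group_names, alpha=0.05):
    """Return the pairwise comparisons and the per-comparison alpha."""
    pairs = list(combinations(group_names, 2))
    return pairs, alpha / len(pairs)

pairs, adj = bonferroni_adjusted_alpha(["Northeast", "Midwest", "South", "West"])
print(len(pairs), round(adj, 4))  # 6 0.0083
```

Requiring each comparison to clear the smaller alpha is what makes the test conservative.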

The computations of both post hoc tests are complex, so we will not attempt them by hand and will instead
demonstrate their use in SPSS.


Learning Check 12.2

Would it be appropriate to compute omega squared and post hoc tests for the ANOVA in the example pertaining to juvenile defendants’
attorneys and sentences? Why or why not?

Research Example 12.3 Does Crime Vary Spatially and Temporally in Accordance With Routine Activities Theory?

Crime varies across space and time; in other words, there are places and times it is more (or less) likely to occur. Routine activities
theory has emerged as one of the most prominent explanations for this variation. Numerous studies have shown that the
characteristics of places can attract or prevent crime and that large-scale patterns of human behavior shape the way crime occurs. For
instance, a tavern in which negligent bartenders frequently sell patrons too much alcohol might generate alcohol-related fights, car
crashes, and so on. Likewise, when schools let out for summer break, cities experience a rise in the number of unsupervised juveniles,
many of whom get into mischief. Most of this research, however, has been conducted in Western nations. De Melo, Pereira,
Andresen, and Matias (2017) extended the study of spatial and temporal variation in crime rates to Campinas, Brazil, to find out if
crime appears to vary along these two dimensions. They broke crime down by type and ran ANOVAs to test for temporal variation
across different units of time (season, month, day of week, hour of day). The table displays the results for the ANOVAs that were
statistically significant. (Nonsignificant findings have been omitted.)

Source: Adapted from Table 1 in De Melo et al. (2017).

As the table shows, homicide rates vary somewhat across seasons. Post hoc tests showed that the summer months experienced spikes in homicide, likely because people are outdoors more often when the weather is nice, which increases the risk for violent victimization and interpersonal conflicts. None of the variation across months was statistically significant (which is why there are no rows for this unit of time in the table). There was significant temporal variation across days of the week and hours of the day. Post hoc tests revealed
interesting findings across crime type. For example, homicides are more likely to occur on weekends (since people are out and about more during weekends than during weekdays), while burglaries are more likely to happen on weekdays (since people are at work).
The variation across hours of the day was also significant for all crime types, but the pattern was different within each one. For
instance, crimes of violence were more common in late evenings and into the night, while burglary was most likely to occur during
the daytime hours.


SPSS

Let us revisit the question asked in Example 2 regarding whether handgun murder rates vary by region. To
run an ANOVA in SPSS, follow the steps depicted in Figure 12.5. Use the Analyze → Compare Means →
One-Way ANOVA sequence to bring up the dialog box on the left side in Figure 12.5 and then select the
variables you want to use. Move the IV to the Factor space and the DV to the Dependent List. Then click Post
Hoc and select the Bonferroni and Tukey tests. Click Continue and OK to produce the output shown in Figure
12.6.

The first box of the output shows the results of the hypothesis test. You can see the sums of squares, df, and mean squares for within groups and between groups. There are also total sums of squares and total degrees of freedom. The number in the F column is Fobt. Here, you can see that Fobt = 6.329. When we did the calculations by hand, we got 6.23. Our hand calculations had some rounding error, but this did not affect the final decision regarding the null because you can also see that the significance value (the p value) is .004, which is less than .05, the value at which α was set. The null hypothesis is rejected in the SPSS context just like it was in the hand calculations.

The next box in the output shows the Tukey and Bonferroni post hoc tests. The difference between these tests
is in the p values in the Sig. column. In the present case, those differences are immaterial because the results
are the same across both types of tests. Based on the asterisks that flag significant results and the fact that the
p values associated with the flagged numbers are less than .05, it is apparent that the South is the region that
stands out from the others. Its mean is significantly greater than all three of the other regions’ means. The
Northeast, West, and Midwest do not differ significantly from one another, as evidenced by the fact that all of
their p values are greater than .05.

Figure 12.5 Running an ANOVA in SPSS

In Figure 12.7, you can see that the Fobt SPSS produces (24.719) is larger than the 12.91 we arrived at by hand; the hand calculations used only a sample of cases, whereas SPSS used the full data set, but the decision to reject the null is the same. Looking at Tukey’s and Bonferroni, it appears that the categories “relative” and “friend/acquaintance” are the only ones that do not differ significantly from one another. In the full data set, the mean age of victims shot by relatives is 21.73 and that for the ones shot by friends and acquaintances is 24.05. These means are not significantly different from each other, but they are both distinct from the means for stranger-perpetrated shootings (mean age of 29.58) and intimate-partner shootings (39.12).

Figure 12.6 ANOVA Output


*The mean difference is significant at the 0.05 level.

We can also use SPSS and the full FISS to reproduce the analysis we did by hand using a sample of
cases. Figure 12.7 shows the ANOVA and post hoc tests.

Figure 12.7 ANOVA Output


*The mean difference is significant at the 0.05 level.

Chapter Summary

This chapter taught you what to do when you have a categorical IV with three or more classes and a continuous DV. A series of t
tests in such a situation is not viable because of the familywise error rate. In an analysis of variance, the researcher conducts multiple
between-group comparisons in a single analysis. The F statistic compares between-group variance to within-group variance to
determine whether between-group variance (a measure of true effect) substantially outweighs within-group variance (a measure of
error). If it does, the null is rejected; if it does not, the null is retained.

The ANOVA F, though, does not indicate the size of the effect, so this chapter introduced you to an MA that allows for a determination of the strength of a relationship. This measure is omega squared (ω2), and it is used only when the null has been rejected—there is no sense in examining the strength of an IV–DV relationship that you just said does not exist! Omega squared is interpreted as the proportion of the variability in the DV that is attributable to the IV. It can be multiplied by 100 to be interpreted as a percentage.

The F statistic also does not offer information about the location or number of differences between groups. When the null is
retained, this is not a problem because a retained null means that there are no differences between groups; however, when the null is
rejected, it is desirable to gather more information about which group or groups differ from which others. This is the reason for the
existence of post hoc tests. This chapter covered Tukey’s HSD and Bonferroni, which are two of the most commonly used post hoc
tests in criminal justice and criminology research. Bonferroni is a conservative test, meaning that it is more difficult to reject the null hypothesis of no difference between groups. It is a good idea to run both tests and, if they produce discrepant information, make a
reasoned judgment based on your knowledge of the subject matter and data. Together, MAs and post hoc tests can help you glean a
comprehensive and informative picture of the relationship between the independent and dependent variables.

Thinking Critically

1. What implications does the relationship between shooting victims’ ages and these victims’ relationships with their shooters
have for efforts to prevent firearm violence? For each of the four categories of victim–offender relationship, consider the
mean age of victims and devise a strategy that could be used to reach people of this age group and help them lower their risks
of firearm victimization.

2. A researcher is evaluating the effectiveness of a substance abuse treatment program for jail inmates. The researcher
categorizes inmates into three groups: those who completed the program, those who started it and dropped out, and those
who never participated at all. He follows up with all people in the sample six months after their release from jail and asks
them whether or not they have used drugs since being out. He codes drug use as 0 = no and 1 = yes. He plans to analyze the
data using an ANOVA. Is this the correct analytic approach? Explain your answer.

Review Problems

1. A researcher wants to know whether judges’ gender (measured as male; female) affects the severity of sentences they impose
on convicted defendants (measured as months of incarceration). Answer the following questions:

1. What is the independent variable?
2. What is the level of measurement of the independent variable?
3. What is the dependent variable?
4. What is the level of measurement of the dependent variable?
5. What type of hypothesis test should the researcher use?

2. A researcher wants to know whether judges’ gender (measured as male; female) affects the types of sentences they impose on
convicted criminal defendants (measured as jail; prison; probation; fine; other). Answer the following questions:

1. What is the independent variable?
2. What is the level of measurement of the independent variable?
3. What is the dependent variable?
4. What is the level of measurement of the dependent variable?
5. What type of hypothesis test should the researcher use?

3. A researcher wishes to find out whether arrest deters domestic violence offenders from committing future acts of violence
against intimate partners. The researcher measures arrest as arrest; mediation; separation; no action and recidivism as number of
arrests for domestic violence within the next 3 years. Answer the following questions:

1. What is the independent variable?
2. What is the level of measurement of the independent variable?
3. What is the dependent variable?
4. What is the level of measurement of the dependent variable?
5. What type of hypothesis test should the researcher use?

4. A researcher wishes to find out whether arrest deters domestic violence offenders from committing future acts of violence
against intimate partners. The researcher measures arrest as arrest; mediation; separation; no action and recidivism as whether
these offenders were arrested for domestic violence within the next 2 years (measured as arrested; not arrested). Answer the
following questions:

1. What is the independent variable?
2. What is the level of measurement of the independent variable?
3. What is the dependent variable?
4. What is the level of measurement of the dependent variable?
5. What type of hypothesis test should the researcher use?

5. A researcher wants to know whether poverty affects crime. The researcher codes neighborhoods as being lower-class, middle-class, or upper-class and obtains the crime rate for each area (measured as the number of index offenses per 10,000 residents).


1. What is the independent variable?
2. What is the level of measurement of the independent variable?
3. What is the dependent variable?
4. What is the level of measurement of the dependent variable?
5. What type of hypothesis test should the researcher use?

6. A researcher wants to know whether the prevalence of liquor-selling establishments (such as bars and convenience stores) in
neighborhoods affects crime in those areas. The researcher codes neighborhoods as having 0–1, 2–3, 4–5, or 6+ liquor-selling establishments. The researcher also obtains the crime rate for each area (measured as the number of index offenses per
10,000 residents). Answer the following questions:

1. What is the independent variable?
2. What is the level of measurement of the independent variable?
3. What is the dependent variable?
4. What is the level of measurement of the dependent variable?
5. What type of hypothesis test should the researcher use?

7. Explain within-groups variance and between-groups variance. What does each of these concepts represent or measure?
8. Explain the F statistic in conceptual terms. What does it measure? Under what circumstances will F be small? Large?
9. Explain why the F statistic can never be negative.

10. When the null hypothesis in an ANOVA test is rejected, why are MAs and post hoc tests necessary?
11. The Omnibus Crime Control and Safe Streets Act of 1968 requires state and federal courts to report information on all wiretaps sought by and authorized for law enforcement agencies (Duff, 2010). One question of interest to someone studying
wiretaps is whether wiretap use varies by crime type; that is, we might want to know whether law enforcement agents use
wiretaps with greater frequency in certain types of investigations than in other types. The following table contains data from
the U.S. courts website (www.uscourts.gov/Statistics.aspx) on the number of wiretaps sought by law enforcement agencies in
a sample of states. The wiretaps are broken down by offense type, meaning that each number in the table represents the
number of wiretap authorizations received by a particular state for a particular offense. Using an alpha level of .05, test the
null hypothesis of no difference between the group means against the alternative hypothesis that at least one group mean is
significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega squared.


12. Some studies have found that people become more punitive as they age, such that older people, as a group, hold harsher
attitudes toward people who commit crimes. The General Social Survey (GSS) asks people for their opinions about courts’
handling of criminal defendants. This survey also records respondents’ ages. Use the data below and an alpha level of .05 to
test the null hypothesis of no difference between the group means against the alternative hypothesis that at least one group
mean is significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega
squared.

13. In the ongoing effort to reduce police injuries and fatalities resulting from assaults, one issue is the technology of violence
against officers or, in other words, the type of implements offenders use when attacking police. Like other social events,
weapon use might vary across regions. The UCRs collect information on weapons used in officer assaults. These data can be
used to find out whether the percentage of officer assaults committed with firearms varies by region. The following table
contains the data. Using an alpha level of .01, test the null of no difference between means against the alternative that at least
one region is significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega
squared.


14. An ongoing source of controversy in the criminal court system is the possible advantage that wealthier defendants might have over poorer ones, largely as a result of the fact that the former can pay to hire their own attorneys, whereas the latter must accept the services of court-appointed counsel. There is a common perception that privately retained attorneys are more skilled and dedicated than their publicly appointed counterparts. Let us examine this issue using a sample of property defendants from the JDCC data set. The IV is attorney type and the DV is days to pretrial release, which measures the number of days between arrest and pretrial release for those property defendants who were released pending trial. (Those who did not make bail or were denied bail are not included.) Using an alpha level of .05, test the null of no difference between means against the alternative that at least one attorney type is significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega squared.

15. In Research Example 12.1, we read about a study that examined whether Asian defendants were sentenced more leniently
than offenders of other races. Let us run a similar test using data from the JDCC. The following table contains a sample of
juveniles convicted of property offenses and sentenced to probation. The IV is race , and the DV is each person’s probation
sentence in months. Using an alpha level of .01, test the null of no difference between means against the alternative that at least one racial group is significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega squared.


16. Across police agencies of different types, is there significant variation in the prevalence of bachelor’s degrees among sworn
personnel? The table contains Law Enforcement Management and Administrative Statistics (LEMAS) data showing a
sample of agencies broken down by type. The numbers represent the percentage of sworn personnel that has a bachelor’s
degree or higher. Using an alpha level of .01, test the null of no difference between means against the alternative that at least one agency type is significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega squared.

17. Let’s continue using the LEMAS survey and exploring differences across agencies of varying types. Problem-oriented
policing has been an important innovation in the police approach to reducing disorder and crime. This approach encourages
officers to investigate ongoing problems, identify their source, and craft creative solutions. The LEMAS survey asks agency
top managers whether they encourage patrol officers to engage in problem solving and, if they do, what percentage of their
patrol officers are encouraged to do this type of activity. Using an alpha level of .05, test the null of no difference between
means against the alternative that at least one agency type is significantly different from at least one other. Use all five steps.
If appropriate, compute and interpret omega squared.


18. Does the number of contacts people have with police officers vary by race? The Police–Public Contact Survey (PPCS) asks respondents to report their race and the total number of face-to-face contacts they have had with officers in the past year. The following table shows the data. Using an alpha level of .05, test the null of no difference between means against the alternative that at least one racial group is significantly different from at least one other. Use all five steps. If appropriate, compute and interpret omega squared.

19. Are there race differences among juvenile defendants with respect to the length of time it takes them to acquire pretrial
release? The data set JDCC for Chapter 12.sav (www.sagepub.com/gau) can be used to test for whether time-to-release varies
by race for juveniles accused of property crimes. The variables are race and days. Using SPSS, run an ANOVA with race as
the IV and days as the DV. Select the appropriate post hoc tests.

1. Identify the obtained value of F.
2. Would you reject the null at an alpha of .01? Why or why not?
3. State your substantive conclusion about whether there is a relationship between race and days to release for juvenile property defendants.
4. If appropriate, interpret the post hoc tests to identify the location and total number of significant differences.
5. If appropriate, compute and interpret omega squared.

20. Are juvenile property offenders sentenced differently depending on the file mechanism used to waive them to adult court?
The data set JDCC for Chapter 12.sav (www.sagepub.com/gau) contains the variables file and jail , which measure the
mechanism used to transfer each juvenile to adult court (discretionary, direct file, or statutory) and the number of months in
the sentences of those sent to jail on conviction. Using SPSS, run an ANOVA with file as the IV and jail as the DV. Select
the appropriate post hoc tests.

1. Identify the obtained value of F.
2. Would you reject the null at an alpha of .05? Why or why not?
3. State your substantive conclusion about whether there is a relationship between file mechanism and jail sentence length for juvenile property defendants.
4. If appropriate, interpret the post hoc tests to identify the location and total number of significant differences.
5. If appropriate, compute and interpret omega squared.

21. The data set FISS for Chapter 12.sav (www.sagepub.com/gau) contains the FISS variables capturing shooters’ intentions (accident, assault, and police involved) and victims’ ages. Using SPSS, run an ANOVA with intent as the IV and age as the DV. Select the appropriate post hoc tests.

1. Identify the obtained value of F.
2. Would you reject the null at an alpha of .05? Why or why not?
3. State your substantive conclusion about whether victim age appears to be related to shooters’ intentions.
4. If appropriate, interpret the post hoc tests to identify the location and total number of significant differences.
5. If appropriate, compute and interpret omega squared.


Key Terms

Analysis of variance (ANOVA) 281
Familywise error 281
Between-group variance 282
Within-group variance 282
F statistic 282
F distribution 282
Post hoc tests 286
Omega squared 297
Tukey’s honest significant difference (HSD) 298
Bonferroni 298

Glossary of Symbols and Abbreviations Introduced in This Chapter


Chapter 13 Hypothesis Testing With Two Continuous Variables
Correlation


Learning Objectives
Identify situations in which, based on the levels of measurement of the independent and dependent variables, correlation is
appropriate.
Define positive and negative correlations.
Use graphs or hypotheses to determine whether a bivariate relationship is positive or negative.
Explain the difference between linear and nonlinear relationships.
Explain the r statistic conceptually.
Explain what the null and alternative hypotheses predict about the population correlation.
Use raw data to solve equations and conduct five-step hypothesis tests.
Explain the sign, magnitude, and coefficient of determination and use them in the correct situations.
Use SPSS to run correlation analyses and interpret the output.

Thus far, we have learned the hypothesis tests used when the two variables under examination are both categorical (chi-square), when the independent variable (IV) is categorical and the dependent variable (DV) is a proportion (two-population z test for proportions), when the IV is a two-class categorical measure and the DV is continuous (t tests), and when the IV is categorical with three or more classes and the DV is continuous (analysis of variance, or ANOVA). In the current chapter, we will address the technique that is proper when both of the variables are continuous. This technique is Pearson’s correlation (sometimes also called Pearson’s r), named after Karl Pearson, who was instrumental in advancing the field of statistics.

Pearson’s correlation: The bivariate statistical analysis used when both independent and dependent variables are continuous.

The question asked in a correlation analysis is, “When the IV increases by one unit, what happens to the
DV?” The DV might increase (a positive correlation), it might decrease (a negative correlation), or it might
do nothing at all (no correlation). Figure 13.1 depicts these possibilities.

Positive correlation: When a one-unit increase in the independent variable is associated with an increase in the dependent variable.

Negative correlation: When a one-unit increase in the independent variable is associated with a decrease in the dependent variable.

A positive correlation might be found between variables such as drug use and violence in neighborhoods:
Since drug markets often fuel violence, it would be expected that neighborhoods with high levels of drug
activity would be more likely to also display elevated rates of violent crime (i.e., as drug activity increases, so
does violence). A negative correlation would be anticipated between the amount of collective efficacy in an
area and the crime rate. Researchers have found that neighborhoods where residents know one another and
are willing to take action to protect their areas from disorderly conditions have lower crime rates. Higher rates
of collective efficacy should correspond to lower crime rates because of collective efficacy’s protective capacity.

The bivariate associations represented by correlations are linear relationships. This means that the amount of
change in the DV that is associated with an increase in the IV remains constant across all levels of the IV and
is always in the same direction (positive or negative). Linear relationships can be contrasted to nonlinear or
curvilinear relationships such as those displayed in Figure 13.2. You can see in this figure how an increase in the IV is associated with varying changes in the DV. Sometimes the DV increases, sometimes it decreases,
and sometimes it does nothing at all. These nonlinear relationships cannot be modeled using correlational
analyses.

Linear relationship: A relationship wherein the change in the dependent variable associated with a one-unit increase in the
independent variable remains static or constant at all levels of the independent variable.

Figure 13.1 Three Types of Correlations Between a Continuous IV and a Continuous DV

The statistic representing correlations is called the r coefficient. This coefficient ranges from −1.00 to +1.00. The population correlation coefficient is ρ, which is the Greek letter rho (pronounced “row”). A correlation of ±1.00 signals a perfect relationship in which a one-unit increase in x (the IV) is always associated with exactly the same amount of change in y (the DV), across all values of both variables. Correlations of zero indicate that there is no relationship between the two variables. Coefficients less than zero signify negative relationships, whereas coefficients greater than zero represent positive relationships. Figure 13.3 depicts the sampling distribution for r.

r coefficient: The test statistic in a correlation analysis.

Figures 13.4, 13.5, and 13.6 exemplify perfect, strong, and weak positive relationships, respectively. These
scatterplots all show that as x increases, so does y, but you can see how the association breaks down from one
figure to the next. Each scatterplot contains what is called a line of best fit—this is the line that minimizes the
distance between itself and each value in the data. In other words, no line would come closer to all the data
points than this one. The more tightly the data points cluster around the line, the better the line represents
the data, and the stronger the r coefficient will be. When the data points are scattered, there is a lot of error
(i.e., distance between the line and the data points), and the r coefficient will be smaller.

Figure 13.2 Examples of Nonlinear Relationships

Figure 13.3 The Sampling Distribution of Correlation Coefficients


Figure 13.4 A Perfect Linear, Positive Relationship Between x and y

There are no strict rules regarding what constitutes a “strong” or “weak” value of r. Researchers use general
guidelines to assess magnitudes. In criminal justice and criminology research, values between 0 and ±.29 are
generally considered weak, from about ±.30 to ±.49 are moderate, ±.50 to ±.69 are strong, and anything
beyond ±.70 is very strong. We will use these general guidelines throughout the chapter when assessing the
magnitude of the relationship suggested by a certain r value.
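A minimal Pearson's r computation alongside the magnitude guidelines above, sketched in Python (function names and the sample data are ours, not from the chapter):

```python
def pearson_r(x, y):
    """Pearson's correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def magnitude(r):
    """Rough strength labels common in criminal justice research."""
    a = abs(r)
    if a >= .70:
        return "very strong"
    if a >= .50:
        return "strong"
    if a >= .30:
        return "moderate"
    return "weak"

r = pearson_r([1, 2, 3, 4], [2, 4, 5, 9])
print(round(r, 2), magnitude(r))  # 0.96 very strong
```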

Figure 13.5 A Strong Linear, Positive Relationship Between x and y

Figure 13.6 A Weak Linear, Positive Relationship Between x and y


As always, it must be remembered that correlation is not causation. A statistically significant correlation
between two variables means that there is an empirical association between them (one of the criteria necessary
for proving causation, as discussed in Chapter 2), but by itself this is not evidence that the IV causes the DV.
There could be another variable that accounts for the DV better than the IV does but that has been omitted
from the analysis. It could also be the case that both the IV and the DV are caused by a third, omitted
variable. For instance, crime frequently increases during the summer months, meaning that in any given city,
ambient temperature might correlate positively with crime rates. Does this mean heat causes crime? It is
possible that hot temperatures make people extra cranky, but a more likely explanation is that crime increases
in the summer because people are outdoors and there are more adolescents engaged in unsupervised,
unstructured activities. This illustrates the need to think critically about statistically significant relationships
between IVs and DVs. Proceed with caution when interpreting correlation coefficients, and keep in mind that
there might be more going on than what is captured by these two variables.

Research Example 13.1 Part 1: Is Perceived Risk of Internet Fraud Victimization Related to Online Purchases?

Many researchers have addressed the issue of perceived risk with regard to people’s behavioral adaptations. Perceived risk has
important consequences at both the individual level and the community level, because people who believe their likelihood of
victimization to be high are less likely to connect with their neighbors, less likely to use public spaces in their communities, and more
likely to stay indoors. What has not been addressed with much vigor in the criminal justice and criminology literature is the issue
of perceived risk of Internet theft victimization. Given how integral the Internet is to American life and the enormous volume of
commerce that takes place online every year, it is important to study the online shopping environment as an arena ripe for theft and
fraud.

Reisig, Pratt, and Holtfreter (2009) examined this issue in the context of Internet theft victimization. They used self-report data
from a survey administered to a random sample of citizens. Their research question was whether perceived risk of Internet theft
victimization would dampen people’s tendency to shop online because of the vulnerability created when credit cards are used to make
Internet purchases. They also examined whether people’s financial impulsivity (the tendency to spend money rather than save it and
to possibly spend more than one’s income provides for) affected perceived risk. The researchers ran a correlation analysis. Was their
hypothesis supported? We will revisit this study later in the chapter to find out.

Correlation analyses employ the t distribution because this probability distribution adequately mirrors the
sampling distribution of r at small and large sample sizes (see Figure 13.3). The method for conducting a
correlation analysis is to first calculate r and then test for the statistical significance of r by comparing tcrit and

tobt. Keep this two-step procedure in mind so that you understand the analytic technique in Step 4.
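The two-step procedure can be sketched in a few lines of Python (the data and variable names below are invented for illustration; the text itself works by hand and in SPSS):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Step 1 of the procedure: compute the correlation coefficient r."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def t_obt(r, n):
    """Step 2: convert r to an obtained t with df = n - 2."""
    return r * sqrt((n - 2) / (1 - r ** 2))

# Hypothetical ages and yearly police-contact counts for seven respondents
ages = [18, 22, 25, 31, 40, 47, 52]
contacts = [4, 3, 3, 2, 2, 1, 1]

r = pearson_r(ages, contacts)
t = t_obt(r, len(ages))
# With df = 7 - 2 = 5 and a two-tailed alpha of .05, t_crit = ±2.571,
# so H0 is rejected only if t falls beyond ±2.571.
```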


For our first example, we will use the Police–Public Contact Survey (PPCS; see Data Sources 2.1) to find out
if there is a relationship between respondents’ ages and the number of face-to-face contacts they have had
with police officers in the past year. Table 13.1 shows the data for a random sample of seven respondents. We
will set alpha at .05.

Step 1. State the null (H0) and alternative (H1) hypotheses.

In a correlation analysis, the null hypothesis is that there is no correlation between the two variables. The null
is phrased in terms of ρ , the population correlation coefficient. This is the Greek letter rho (pronounced
“roe”). Recall that a correlation coefficient of zero signifies an absence of a relationship between two variables;
therefore, the null is

H0: ρ = 0

Three options are available for the phrasing of the alternative hypothesis. Since correlations use the t
distribution, these three options are the same as those in t tests. There is a two-tailed option (written H1: ρ ≠

0) that predicts a correlation of unspecified direction. This is the option used when a researcher does not wish
to make an a priori prediction about whether the correlation is positive or negative. There are also two one-
tailed options. The first predicts that the correlation is negative (H1: ρ < 0) and the second predicts that it is positive (H1: ρ > 0).

In the present example, we have no a priori reason for predicting a direction. Past research suggests that young
people have more involuntary contacts (such as traffic stops) whereas older people have more voluntary ones
(such as calling the police for help). Since the contact variable used here includes all types of experiences, a
prediction cannot be made about how age might correlate with contacts; therefore, we will use a two-tailed
test. The alternative hypothesis is

H1: ρ ≠ 0


Step 2. Identify the distribution and compute the degrees of freedom.

The t distribution is the probability curve used in correlation analyses. Recall that this curve is symmetric and,

unlike the χ2 and F distributions, has both a positive side and a negative side. The degrees of freedom (df) in
correlation are computed as

df = N − 2        Formula 13(1)

In the present example, there are seven people in the sample, so

df = 7 − 2 = 5

Step 3. Identify the critical value, and state the decision rule.

With a two-tailed test, ⍺ = .05, and df = 5, the value of tcrit is 2.571. Since this is a two-tailed test, there are

two critical values because half of ⍺ is in each tail. This means tcrit = ±2.571. The decision rule is that if tobt is either greater than +2.571 or less than −2.571, H0 will be rejected.


Step 2. Identify the distribution and compute the degrees of freedom.

The distribution is t , and the df are computed using Formula 13(1):

df = 8 − 2 = 6

Step 3. Identify the critical value and state the decision rule.

For a one-tailed test, an alpha of .05, and df = 6, the critical value of t is 1.943. The decision rule is that if tobt

is greater than 1.943 , H0 will be rejected.

Step 4. Compute the obtained value of the test statistic.

Using Formula 13(2) and the sums from Table 13.4,

r = .74


The r value is large, indicating a strong positive relationship between the variables. We still need to carry out
the calculations for tobt, though, because we have not yet ruled out the possibility that this r is a fluke finding.

The t test will tell us that

tobt = r √[(N − 2) / (1 − r2)] = .74 √[6 / (1 − .55)] = .74(3.65) = 2.70
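As a quick numeric check of this result (an illustration, not the text's software):

```python
from math import sqrt

r, n = .74, 8
t = r * sqrt((n - 2) / (1 - r ** 2))
# Carrying full precision gives t ≈ 2.69; the text's 2.70 comes from
# rounding the square-root term to 3.65 before multiplying.
```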

Step 5. Make a decision about the null and state the substantive conclusion.

The decision rule stated that the null would be rejected if tobt ended up being greater than 1.943. The value of

tobt greatly exceeds 1.943, so we will reject the null. Among juveniles tried in adult courts and sentenced to

fines, there is a statistically significant correlation between the number of criminal charges filed against a
person and the dollar amount of the fine imposed.


Beyond Statistical Significance: Sign, Magnitude, and Coefficient of
Determination

When the null hypothesis is rejected in a correlation hypothesis test, the correlation coefficient r can be
examined with respect to its substantive meaning. We have touched on the topic of magnitude versus
statistical significance already; note that in all three examples of correlation tests, we made a preliminary
assessment of the magnitude of r before moving on to the tobt calculations. In each one, though, we had to do

that last step to check for statistical significance before formally interpreting the strength or weakness of r.
The reverse of this is also true: Statistical significance is not proof that the variables are strongly correlated.
The null can be rejected even when a correlation is of little practical importance. The biggest culprit of
misleading significance is sample size: Correlations that are substantively weak can result in rejected nulls
simply because the sample is large enough to drive up the value of tobt. When the null is rejected, criminal

justice and criminology researchers turn to three interpretive measures to assess the substantive importance of
a statistically significant r : sign , magnitude , and coefficient of determination.

The sign of the correlation coefficient indicates whether the correlation between the IV and the DV is
negative or positive. Take another look at Figure 13.1 to refresh your memory as to what negative and positive
correlations look like. A positive correlation means that a unit increase in the IV is associated with an increase
in the DV, and a negative correlation indicates that as the IV increases, the DV declines.

The magnitude is an evaluation of the strength of the relationship based on the value of r. As noted in the
outset of this chapter, there are no rules set in stone for determining whether a given r value is strong,
moderate, or weak in magnitude; this judgment is based on a researcher’s knowledge of the subject matter. As
described previously, a general guideline is that values between 0 and ±.29 are weak, from about ±.30 to ±.49
are moderate, ±.50 to ±.69 are strong, and those beyond ±.70 are very strong.

Third, the coefficient of determination is calculated as the obtained value of r , squared (i.e., r2). The result is
a proportion that can be converted to a percentage and interpreted as the percentage of the variance in the DV
that is attributable to the IV. As a percentage, the coefficient ranges from 0 to 100, with higher numbers
signifying stronger relationships and numbers closer to zero representing weaker associations.
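The three interpretive measures can be summarized in a short helper function (a sketch using this chapter's guideline cut-points, which are conventions rather than hard rules):

```python
def interpret_r(r):
    """Return the sign, magnitude label, and % of variance explained for r."""
    sign = "positive" if r > 0 else "negative"
    a = abs(r)
    if a < .30:
        magnitude = "weak"
    elif a < .50:
        magnitude = "moderate"
    elif a < .70:
        magnitude = "strong"
    else:
        magnitude = "very strong"
    pct_explained = round(r ** 2 * 100)  # coefficient of determination as a percentage
    return sign, magnitude, pct_explained

print(interpret_r(.74))  # → ('positive', 'very strong', 55)
```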

Let us interpret the sign, magnitude, and coefficient of determination for the correlation coefficient computed
in the third example. Since we retained the null in the first and second examples, we cannot apply the three
interpretive measures to these r values. In the examination of charges and fines, we found that r = .74.

First, the sign of the correlation coefficient is positive, meaning that a greater number of charges is associated
with a higher fine amount. Second, the magnitude is quite strong, as .74 exceeds the .70 threshold. Third, the

coefficient of determination is r2 =.742 = .55. This means that 55% of the variance in the DV (fine amount)
can be explained by the IV (number of charges). This is a decent amount of shared variance! Of course, we
cannot draw causal conclusions—there are numerous factors that enter into judges’ sentencing decisions.
Anytime you interpret the outcome of a correlation hypothesis test, keep in mind that statistical significance is
not, by itself, enough to demonstrate a practically significant or substantively meaningful relationship between
two variables; moreover, even a strong association does not mean that one variable truly causes the other.


SPSS

Correlations are run in SPSS using the Analyze → Correlation → Bivariate sequence. Once the dialog box
shown in Figure 13.7 appears, select the variables of interest and move them into the analysis box as shown. In
this example, the JDCC data on charges and fines is used. You can see in Figure 13.7 that both of these
variables have been moved from the list on the left into the box on the right. After selecting your variables,
click OK. Figure 13.8 shows the output.

The output in Figure 13.8 is called a correlation matrix, meaning that it is split by what is called a diagonal
(here, the cells in the upper-left and lower-right corners of the matrix) and is symmetric on both of the off-
diagonal sides. The numbers in the diagonal are always 1.00 because they represent each variable’s correlation
with itself. The numbers in the off-diagonals are the ones to look at. The number associated with Pearson’s
correlation is the value of r. In Figure 13.8, you can see that r = .239, which is much smaller than what we
arrived at by hand, but since the SPSS example is employing the entire data set, variation is expected.

The Sig. value is, as always, the obtained significance level or p value. This number is compared to alpha to
determine whether the null will be rejected. If p < ⍺, the null is rejected; if p > ⍺, the null is retained.
Typically, any p value less than .05 is considered statistically significant. In Figure 13.8, the p value is .000,
which is very small and indicates that we would reject the null even if we set ⍺ at .001, a very stringent test for
statistical significance.

Figure 13.7 Running a Correlation Analysis in SPSS

Correlation matrices can be expanded to include multiple variables. When you do this, SPSS runs a separate
analysis for each pair. Let us add juvenile defendants’ ages to the matrix. Figure 13.9 shows the output
containing all three variables.

Figure 13.8 SPSS Output


** Correlation is significant at the 0.01 level (2-tailed).


Learning Check 13.2

In a correlation matrix, the numbers in the diagonal will all be 1.00 because the diagonal represents each variable’s correlation with itself.
Why is this? Explain why any given variable’s correlation with itself is always 1.00.

Research Example 13.1, Continued


Part 2: Is Perceived Risk of Internet Fraud Victimization Related to
Online Purchases?
Recall that Reisig et al. (2009) predicted that people’s perceived likelihood of falling victim to Internet theft would lead to less-
frequent engagement in the risky practice of purchasing items online using credit cards. They also thought that financial impulsivity,
as an indicator of low self-control, would affect people’s perceptions of risk. The following table is an adaptation of the correlation
matrix obtained by these researchers.

Were the researchers’ hypotheses correct? The results were mixed. On the one hand, they were correct in that the correlations
between the IVs and the DV were statistically significant. You can see the significance of these relationships indicated by the
asterisks that flag both of these correlations as being statistically significant at an alpha level of .05. Since the null was rejected, it is
appropriate to interpret the coefficients. Regarding sign, perceived risk was negatively related to online purchases (i.e., greater
perceived risk meant less online purchasing activity), and financial impulsivity was positively related to perceived risk (i.e., financially
impulsive people were likely to see themselves as facing an elevated risk of victimization).

The reason the results were mixed, however, is that the correlations—though statistically significant—were not strong. Using the
magnitude guidelines provided in this chapter, it can be seen that −.12 and .11 are very weak. The coefficient of determination for

each one is (−.12)2 = .01 and .112 = .01, so only 1% of the variance in online purchases was attributable to perceived risk, and only
1% was due to financial impulsivity, respectively. This illustrates the potential discrepancy between statistical significance and
substantive importance: Both of these correlations were statistically significant, but neither meant much in terms of substantive or
practical implications. As always, though, it must be remembered that these analyses were bivariate and that the addition of more
IVs might alter the IV–DV relationships observed here.

Source: Adapted from Table 1 in Reisig et al. (2009).

Step 4. Compute the obtained value of the test statistic.

The first portion of this step entails calculating b and a in order to construct the regression equation in
Formula 14(1). We have already done this; recall that the regression equation is ŷ = 126.58 + 134.88x. Just
like all other statistics, b has a sampling distribution. See Figure 14.2. The distribution centers on zero because
the null predicts that the variables are not related. We need to find out whether b is either large enough or
small enough to lead us to believe that B is actually greater than or less than zero, respectively.

Finding out whether b is statistically significant is a two-step process. First, we compute this coefficient’s
standard error, symbolized SEb. The standard error is the standard deviation of the sampling distribution

depicted in Figure 14.2. The standard error is important because, all else being equal, slope coefficients with
larger standard errors are less trustworthy than those with smaller standard errors. A large standard error
means that there is substantial uncertainty as to the accuracy of the sample slope coefficient b as an estimate of
the population slope B.

Figure 14.2 The Sampling Distribution of Slope Coefficients

The standard error of the sampling distribution for a given regression coefficient (SEb) is computed as

SEb = (sy / sx) √[(1 − r2) / (N − 2)]        Formula 14(5)

where

sy = the standard deviation of y

sx = the standard deviation of x

r = the correlation between x and y

Recall that standard deviations are the mean deviation scores for a particular variable. The standard error of
the sampling distribution for a given slope coefficient is a function of the standard deviation of the DV, the
standard deviation of the IV in question, and the correlation between the two. This provides a measure of the
strength of the association between the two variables that simultaneously accounts for the amount of variance
in each. All else being equal, more variance (i.e., larger standard deviations) suggests less confidence in an
estimate. Large standard deviations produce a larger standard error, which in turn reduces the chances that a
slope coefficient will be found to be statistically significant.

Here, the standard deviation of x (the IV) is 1.75 and the standard deviation of y (the DV) is 317.89. The
correlation between these two variables is .74. Plugging these numbers into Formula 14(5) and solving
produces

SEb = 181.65(.28) = 50.86

This is the standard error of the slope coefficient’s sampling distribution. Now SEb can be entered into the tobt formula, which is

tobt = b / SEb

The obtained value of t is the ratio between the slope coefficient and its standard error. Entering our numbers
into the equation results in

tobt = 134.88 / 50.86 = 2.65

Step 4 is complete! The obtained value of t is 2.65. We can now make a decision about the statistical significance of b.
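The two computations in this step can be sketched numerically (assumed from the formulas in the text; SPSS would report these values directly):

```python
from math import sqrt

s_y, s_x = 317.89, 1.75   # standard deviations of the DV and IV
r, b, n = .74, 134.88, 8  # correlation, slope, and sample size from the example

se_b = (s_y / s_x) * sqrt((1 - r ** 2) / (n - 2))  # standard error of b
t = b / se_b                                       # obtained t
# Full precision gives se_b ≈ 49.9 and t ≈ 2.70; the text's 50.86 and 2.65
# reflect rounding the square-root term to .28 before multiplying.
```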

Step 5. Make a decision about the null and state the substantive conclusion.

The decision rule stated that the null would be rejected if tobt turned out to be either <–2.447 or >2.447. Since

2.65 is greater than 2.447, the null is rejected. The slope is statistically significant at an alpha of .05. There is a
positive relationship between the extent to which a prison is over capacity and the number of major
disturbances that occur in that institution. In other words, knowing how many inmates a prison has beyond its
rated capacity helps predict the number of major disturbances that a prison will experience in a year.

As with correlation, rejecting the null requires further examination of the IV–DV relationship to determine
the strength and quality of that connection. In the context of regression, a rejected null indicates that the IV
exerts some level of predictive power over the DV; however, it is desirable to know the magnitude of this
predictive capability. The following section describes two techniques for making this assessment.


Beyond Statistical Significance: How Well Does the Independent Variable
Perform as a Predictor of the Dependent Variable?

There are two ways to assess model quality. The first is to create a standardized slope coefficient or beta
weight (symbolized β , the Greek letter beta) so the slope coefficient’s magnitude can be gauged. The second
is to examine the coefficient of determination. Each will be discussed in turn. Remember that these
techniques should be used only when the null hypothesis has been rejected: If the null is retained, the analysis
stops because the conclusion is that there is no relationship between the IV and the DV.

Beta weight: A standardized slope coefficient that ranges from –1.00 to +1.00 and can be interpreted similarly to a correlation so that
the magnitude of an IV–DV relationship can be assessed.


Standardized Slope Coefficients: Beta Weights

As noted earlier, the slope coefficient b is unstandardized, which means that it is specific to the units in which
the IV and DV are measured. There is no way to “eyeball” an unstandardized slope coefficient and assess its
strength because there are no boundaries or benchmarks that can be used with unstandardized statistics—they
are specific to whatever metric the DV is measured in. The way to solve this is to standardize b. Beta weights
range between 0.00 and ±1.00 and, like correlation coefficients, rely more on guidelines than rules for
interpretation of their strength. Generally speaking, betas between 0 and ±.19 are considered weak, from
about ±.20 to ±.29 are moderate, ±.30 to ±.39 are strong, and anything beyond ±.40 is very strong. These
ranges can vary by topic, though; subject-matter experts must decide whether a beta weight is weak or strong
within the customs of their fields of study.

Standardization is accomplished as follows:

β = b (sx / sy)        Formula 14(7)

We saw in the calculation of SEb that the standard deviation of x is 1.75 and the standard deviation of y
is 317.89. We already computed b and know that it is 134.88. Plugging these numbers into Formula 14(7), we
get

β = 134.88 (1.75 / 317.89) = .74

In this calculation, rounding would have thrown the final answer off the mark, so the division and
multiplication were completed in a single step. The beta weight is .74. If this number seems familiar, it is! The
correlation between these two variables is .74. Beta weights will equal correlations (within rounding error) in
the bivariate context and can be interpreted the same way. A beta of .74 is very strong.
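A one-line check (illustrative only) that standardizing b reproduces the bivariate correlation:

```python
b, s_x, s_y = 134.88, 1.75, 317.89
beta = b * (s_x / s_y)    # beta weight: slope rescaled by the sx/sy ratio
print(round(beta, 2))     # → 0.74, matching r within rounding error
```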


Learning Check 14.4

You just learned that standardized beta weights are equal to regression coefficients in bivariate regression models. As we will see soon,
however, this does not hold true when there is more than one IV. Why do you think this is? If you are not sure of the answer now,
continue reading and then come back to this question.


The Quality of Prediction: The Coefficient of Determination

Beta weights help assess the magnitude of the relationship between an IV and a DV, but they do not provide
information about how well the IV performs at predicting the DV. This is a substantial limitation because
prediction is the heart of regression—it is the reason researchers use this technique. The coefficient of
determination addresses the issue of the quality of prediction. It does this by comparing the actual, empirical
values of y to the predicted values (ŷi ). A close match between these two sets of scores indicates that x does a

good job predicting y , whereas a poor correspondence signals that x is not a useful predictor. The coefficient
of determination is given by

Coefficient of determination = (ryŷ)2

where ryŷ = the correlation between the actual and predicted values of y.

The correlation between the y and ŷ values is computed the same way that correlations between IVs and
DVs are and so will not be shown here. In real life, SPSS generates this value for you. The correlation in this
example is .74. This makes the coefficient of determination

.742 = .55

This means that 55% of the variance in y can be attributed to the influence of x. In the context of the present
example, 55% of the variance in fine amounts is attributable to the number of charges. Again, this value looks
familiar—it is the same as the coefficient of determination in Chapter 13! This illustrates the close connection
between correlation and regression at the bivariate level. Things get more complicated when additional IVs
are added to the model, as we will see next.
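The comparison of actual and predicted y values can be sketched as follows (the data, intercept, and slope are invented purely for illustration):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson's r between two lists of scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

x_vals = [1, 2, 3, 4, 5]      # hypothetical IV scores
y_actual = [3, 5, 4, 8, 9]    # hypothetical observed DV scores
a, b = 2.0, 1.4               # hypothetical intercept and slope

y_hat = [a + b * xi for xi in x_vals]      # predicted values ŷ
r_sq = pearson_r(y_actual, y_hat) ** 2     # proportion of variance in y explained by x
```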


Adding More Independent Variables: Multiple Regression

The problem with bivariate regression—indeed, with all bivariate hypothesis tests—is that social phenomena
are usually the product of many factors, not just one. There is not just one single reason why a person commits
a crime, a police officer uses excessive force, or a prison experiences a riot or other major disturbance. Bivariate
analyses risk overlooking variables that might be important predictors of the DV. For instance, in the bivariate
context, we could test for whether having a parent incarcerated increases an individual’s propensity for crime
commission. This is probably a significant factor, but it is certainly not the only one. We can add other
factors, such as having experienced violence as a child, suffering from a substance-abuse disorder, and being
unemployed, too. Each of these IVs might help improve our ability to understand (i.e., predict) a person’s
involvement in crime. The use of only one IV virtually guarantees that important predictors have been
erroneously excluded and that the results of the analysis are therefore suspect, and it prevents us from
conducting comprehensive, in-depth examinations of social phenomena.

Multiple regression is the answer to this problem. Multiple regression is an extension of bivariate regression
and takes the form

ŷ = a + b1x1 + b2x2 + . . . + bkxk        Formula 14(9)

Revisit Formula 14(1) and compare it to Formula 14(9) to see how 14(9) expands on the original equation by
including multiple IVs instead of just one. The subscripts show that each IV has its own slope coefficient.
With k IVs in a given study, ŷ is the sum of each bkxk term and the intercept.

In multiple regression, the relationship between each IV and the DV is assessed while controlling for the
effect of the other IV or IVs. The slope coefficients in multiple regression are called partial slope coefficients
because, for each one, the relationship between the other IVs and the DV has been removed so that each
partial slope represents the “pure” relationship between an IV and the DV. Each partial slope coefficient is
calculated while holding all other variables in the model at their means, so researchers can see how the DV
would change with a one-unit increase in the IV of interest, while holding all other variables constant. The
ability to incorporate multiple predictors and to assess each one’s unique contribution to ŷ is what makes
multiple regression so useful.

Partial slope coefficient: A slope coefficient that measures the individual impact of an independent variable on a dependent variable
while holding other independent variables constant.
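The idea of partial slopes can be illustrated with a small simulation (not from the text): when two IVs jointly generate a DV, ordinary least squares recovers each IV's independent effect. NumPy's `lstsq` is used here as a generic OLS solver, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
# DV built from known partial effects: 0.5 for x1 and -1.0 for x2, plus noise
y = 2.0 + 0.5 * x1 - 1.0 * x2 + rng.normal(scale=0.1, size=200)

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept
a, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS estimates of a, b1, b2
# b1 and b2 should land near 0.5 and -1.0: each partial slope isolates one
# IV's effect while holding the other constant.
```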

Research Example 14.1 Does Childhood Intelligence Predict the Emergence of Self-Control?

Theory suggests—and research has confirmed—that low self-control is significantly related to delinquency and crime. People with
low self-control tend to be impulsive and to have trouble delaying gratification and considering possible long-term consequences of
their behavior. Self-control is said to be learned during the formative years of a child’s life. Parenting is critical to the development of
self-control; parents who provide clear rules and consistent, fair punishment help instill self-discipline in their children. But what
about children’s innate characteristics, such as their intelligence level? Petkovsek and Boutwell (2014) set out to test whether
children’s intelligence significantly affected their development of self-control, net of parenting and other environmental factors.
They ran OLS regression models and found the following results (note that SE = standard error).

Source: Adapted from Table 2 in Petkovsek and Boutwell (2014).

It is interesting and surprising that intelligence outweighed parenting in predicting children’s self-control.
Intelligence was, in fact, by far the strongest predictor of low self-control: More-intelligent children had more
self-control relative to their peers who scored lower on intelligence tests. Paternal low self-control significantly
predicted children’s low self-control, but the beta was very small. The only other significant variable is sex,
with boys displaying higher levels of low self-control compared to girls. The model R2 = .225, meaning that
the entire set of predictors explained 22.5% of the variance in children’s self-control. Clearly, childhood
intelligence is integral in the development of self-control and, ultimately, in the prevention of delinquency and
crime.

p < .01; p < .001.

Before getting into more-complex examples, let us work briefly with a hypothetical regression equation containing two IVs, x1 and x2. Suppose the line is

ŷ = 1.00 + .80x1 + 1.50x2

We can substitute various values for x1 and x2 to find ŷ. Let’s find the predicted value of the DV when x1 = 4 and x2 = 2:

ŷ = 1.00 + .80(4) + 1.50(2) = 1.00 + 3.20 + 3.00 = 7.20

There it is! If x1 = 4 and x2 = 2, the DV is predicted to be 7.20.

Learning Check 14.5

Use the equation ŷ = 1.00 + .80x1 + 1.50x2 to find the predicted value of y when

1. x1 = 2 and x2 = 3.
2. x1 = 1.50 and x2 = 3.
3. x1 = .86 and x2 = –.67.
4. x1 = 12 and x2 = 20.

The formulas involved in multiple regression are complex and are rarely used in the typical criminal justice and criminology research setting because of the prevalence of statistical software. We now turn to a discussion of the use of SPSS to obtain and interpret OLS regression output.

Ordinary Least Squares Regression in SPSS

As described earlier, researchers rarely fit regression models by hand. Data sets are typically far too large for this, and the prevalence of user-friendly software programs like SPSS puts impressive computing power right at researchers’ fingertips. Of course, the flipside of this wide availability of user-friendly interfaces is the potential for them to be used carelessly or incorrectly. People producing research must possess a solid comprehension of the theory and math underlying statistical techniques before they attempt to run analyses in SPSS or other programs. Consumers of research (e.g., police and corrections officials) need to have enough knowledge about statistics to be able to evaluate results, including spotting mistakes when they occur. Consumers can be led astray if they fail to critically examine statistics and if they do not know when to trust empirical findings and when not to. As with the techniques discussed in previous chapters, GIGO applies to regression modeling.
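Plugging values into a multiple regression line, as in the hypothetical two-IV equation ŷ = 1.00 + .80x1 + 1.50x2 worked through earlier, is straightforward to script (a sketch; the equation is the made-up one from the text):

```python
def y_hat(x1, x2, a=1.00, b1=.80, b2=1.50):
    """Predicted y for the hypothetical line ŷ = 1.00 + .80x1 + 1.50x2."""
    return a + b1 * x1 + b2 * x2

print(round(y_hat(4, 2), 2))  # → 7.2, the value worked out in the text
```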
Statistical programs will frequently run and produce results even when errors have been made. For instance, SPSS will run an OLS model when the dependent variable is nominal. The results of this test are meaningless and useless, so it is up to producers and consumers to be smart and avoid making these mistakes and being deceived by them if they do occur.

Before discussing the analytical element of running OLS models in SPSS, we should revisit the null and alternative hypotheses. In multiple regression, the null and alternative each apply to every IV. For each IV, the null predicts that the population slope coefficient Bk is zero, and the alternative predicts that it is significantly different from zero. Since the analysis in the current example has two IVs, the null and alternative are

H0: B1 = 0 and B2 = 0

H1: B1 and/or B2 ≠ 0

Since each IV has its own null, it is possible for the null to be rejected for one of the variables and not for the other.

To run a regression analysis in SPSS, go to Analyze → Regression → Linear. This will produce the dialog box shown in Figure 14.3. Here, the Police–Public Contact Survey (PPCS; see Data Sources 2.1) is being used. The DV is the length of time a vehicle or pedestrian stop lasted. The IVs are characteristics of the respondents. Respondents’ sex, age, and race are included. Age is a continuous variable (measured in years), and sex and race are nominal-level variables each coded as a dummy variable such that one category is 0 and the other is 1. In this example, 1 = male and 0 = female for the gender variable, and 1 = white and 0 = nonwhite for the race variable. Move the DV and IVs into their proper locations in the right-hand spaces, and then press OK. This will produce an output window containing the elements displayed in the following figures.

Dummy variable: A two-category, nominal variable with one class coded as 0 and the other coded as 1.
The first portion of regression output you should look at is the analysis of variance (ANOVA) box. This might sound odd since we are running a multiple regression analysis, not an ANOVA, but what this box tells you is whether the set of IVs included in the model explains a statistically significant amount of the variance in the DV. If F is not significant (meaning if p > .05), then the model is no good. In the event of a
nonsignificant F, you should not go on to interpret and assess the remainder of the model. Your analysis is
over at that point and what you must do is revisit your data, your hypothesis, or both to find out what went
wrong. The problem might be conceptual rather than statistical—the IVs you predicted would impact the DV
might not actually do so. There could be an error in your choice of variables to represent theoretical concepts,
or there could be a deeper flaw affecting the theory itself. Before you consider possible conceptual issues,
check the data to make sure the problem is not caused by a simple coding error.

Figure 14.3 Running a Multiple Regression Analysis in SPSS

Figure 14.4 SPSS Regression Output


In Figure 14.4, you can see that F = 11.654 and p = .000, so the amount of variance in the DV
that is explained by the IVs is significantly greater than zero. Note that a significant F is not by itself proof
that the model is good—this is a necessary but insufficient condition for a high-quality regression model.

Second, look at the “R Square column” in the “Model Summary” box. This is the multiple coefficient of
determination and indicates the proportion of the variance in the DV that is explained by all the IVs
combined. It is an indication of the overall explanatory power of the model. There are no specific rules for
evaluating R square. Generally, values up to .20 are considered fairly low, .21 to .30 are moderate, .31 to .40
are good, and anything beyond .41 is very good. Here, R square is .008, meaning the IVs that we selected
explain a trivial .8% of the variance in stop length. There are definitely important variables omitted from this
model.
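The quantity in the “R Square” column is simply one minus the ratio of unexplained to total variation in the DV. A minimal sketch of that arithmetic, using invented observed and predicted values:

```python
import numpy as np

# R square = 1 - SS_residual / SS_total. The y and y_hat values
# below are made up purely to illustrate the arithmetic.
y = np.array([5.0, 7.0, 6.0, 9.0, 8.0])       # observed DV values
y_hat = np.array([5.5, 6.5, 6.0, 8.5, 8.5])   # model predictions

ss_total = np.sum((y - y.mean()) ** 2)   # total variation in the DV
ss_resid = np.sum((y - y_hat) ** 2)      # variation left unexplained
r_square = 1 - ss_resid / ss_total       # here, .90
```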

Third, go to the “Coefficients” box at the bottom of the output to see the unstandardized b values,
standardized beta weights, and significance test results. The “Unstandardized Coefficients: B” column
contains the slope for each variable (the constant is the intercept). The IVs age and sex are statistically
significant, but race is not. We know this because the “Sig.” values (i.e., p values) for sex and age are both less
than .05, but the p value for race is much greater than .05.

Since age and sex are statistically significant, we can interpret their effects in the model. The
unstandardized slope for age is b = –.045. This means that each one-year increase in a person’s age is
associated with a reduction of .045 minutes in the total length of the stop. Older people’s stops are shorter, on
average, than younger people’s. This makes sense, because younger people are more active in deviant behavior
than older people are. Police officers probably take more time with younger drivers and pedestrians they stop
to make sure they are not engaged in illegal activity.

Since sex is a dummy variable, the interpretation of b is a bit different from the interpretation of the slope of a
continuous predictor. Dummy variables’ slopes are comparisons between the two categories. Here, since
female is coded 0 and male is coded 1, the slope coefficient b = .948 indicates that males’ stops last an average
of .948 minutes longer than females’ stops. This finding, like that for age, makes sense in light of the fact that
males commit more crime than females do. Police might subject males to more scrutiny, which extends the
length of the stop.
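The dummy-slope interpretation can be verified directly: with a single 0/1 predictor, the OLS intercept equals the mean of the 0 group and the slope equals the difference between the two group means. A small sketch with invented stop lengths:

```python
import numpy as np

# Invented stop lengths (minutes) for a 0/1 sex dummy.
female = np.array([4.0, 5.0, 6.0])   # coded 0
male = np.array([5.0, 6.0, 7.0])     # coded 1

x = np.concatenate([np.zeros(3), np.ones(3)])
y = np.concatenate([female, male])

X = np.column_stack([np.ones_like(x), x])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
# intercept = mean of the 0 group (5.0); slope = mean difference (1.0)
```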

The Beta column shows the standardized values. The advantage of beta weights over b values is that they allow the relative strength of each IV to be compared. Using the unstandardized slopes results in an apples-to-oranges comparison: we are left not knowing whether age or gender is a stronger predictor of stop length. Beta
weights answer this question. You can see that β = .049 for sex and β = –.075 for age. Since –.075 represents a
stronger relationship than .049 does (it is the absolute value we are examining here), we can conclude that age
is the more impactful of the two. Still, –.075 is very small.
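A beta weight is just b rescaled by the ratio of the IV's standard deviation to the DV's, which is what puts every predictor on a common footing. A sketch with invented values:

```python
import numpy as np

# beta = b * (s_x / s_y); regressing z-scores on z-scores gives the
# same number. x and y are made-up values for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 6.0])

b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)          # unstandardized slope
beta = b * (np.std(x, ddof=1) / np.std(y, ddof=1))  # standardized

# Cross-check: the slope of the standardized variables is identical.
zx = (x - x.mean()) / np.std(x, ddof=1)
zy = (y - y.mean()) / np.std(y, ddof=1)
beta_z = np.cov(zx, zy)[0, 1] / np.var(zx, ddof=1)
```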

Research Example 14.2 Does Having a Close Black Friend Reduce Whites’ Concerns About Crime?

Mears, Mancini, and Stewart (2009) sought to uncover whether whites’ concerns about crime as a local and as a national problem
were affected by whether or not those whites had at least one close friend who was black. Concern about crime was the DV in this


study. White respondents expressed their attitudes about crime on a 4-point scale where higher values indicated greater concern.

The researchers ran an OLS regression model and arrived at the following results with respect to whites’ concerns about local crime.

The authors found, contrary to what they had hypothesized, that having a close friend who was black actually increased whites’

Step 2: χ² distribution with df = (2 − 1)(2 − 1) = 1

Step 3: χ²crit = 6.635. Decision rule: If χ²obt > 6.635, the null will be rejected.

Step 4: Expected frequencies are 28.14 for cell A, 38.86 for B, 34.86 for C, and 48.14 for D. χ²obt = 23.76 + 17.21 + 19.18 + 13.89 = 74.04

Step 5: The obtained value is greater than 6.635, so the null is rejected. There is a relationship
between whether a jail offers alcohol treatment and whether it offers psychiatric counseling. Row
percentages can be used to show that 80.6% of jails that offer alcohol treatment provide
counseling, compared to only 10.8% of those that do not offer alcohol treatment. It appears that
most jails supply either both of these services or neither of them; relatively few provide only one.
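The expected frequencies and χ²obt above follow the usual formulas: E = (row total × column total) / N for each cell, then χ²obt = Σ(O − E)²/E. The sketch below uses observed counts reverse-engineered to be consistent with the expected frequencies reported above, so treat them as illustrative rather than as the actual jail data.

```python
import numpy as np

# Observed 2x2 counts (rows: offers alcohol treatment yes/no;
# columns: offers counseling yes/no). Counts inferred from the
# expected frequencies above; illustrative only.
observed = np.array([[54.0, 13.0],
                     [9.0, 74.0]])

row_tot = observed.sum(axis=1, keepdims=True)   # 67, 83
col_tot = observed.sum(axis=0, keepdims=True)   # 63, 87
expected = row_tot @ col_tot / observed.sum()   # 28.14, 38.86, 34.86, 48.14

chi2_obt = np.sum((observed - expected) ** 2 / expected)   # ~74.04
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)     # 1
```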

11.

Step 1: H0: χ² = 0; H1: χ² > 0

Step 2: χ² distribution with df = (2 − 1)(2 − 1) = 1

Step 3: χ²crit = 3.841. Decision rule: If χ²obt > 3.841, the null will be rejected.

Step 4: Expected frequencies are 314.73 for cell A, 1023.27 for B, 67.27 for C, and 218.73 for D. χ²obt = .09 + .03 + .41 + .13 = .66

Step 5: The obtained value is less than 3.841, so the null is retained. There is no relationship
between victims’ gender and the likelihood that their injuries resulted from fights. Row
percentages show that 23.9% of males’ injuries occurred during fights, compared to 21.7% of
females’ injuries. The two percentages are substantively similar to one another, and the small
difference between them appears to be a chance finding.


13.

Step 1: H0: χ² = 0; H1: χ² > 0

Step 2: χ² distribution with df = (2 − 1)(2 − 1) = 1

Step 3: χ²crit = 3.841. Decision rule: If χ²obt > 3.841, the null will be rejected.

Step 4: Expected frequencies are 46.51 for cell A, 27.49 for B, 85.49 for C, and 50.51 for D. χ²obt = .26 + .44 + .14 + .24 = 1.08.
Step 5: The obtained value is less than 3.841, so the null is retained. Gender and support for
marijuana legalization are statistically independent among black Americans. Looking at row
percentages, 67.6% of men and 60.3% of women believe that marijuana should be made legal.
There appears to be more support for legalization by men than by women, but this difference is
not statistically significant (i.e., appears to be a chance finding).

15.

Step 1: H0: χ² = 0; H1: χ² > 0.

Step 2: χ² distribution with df = (3 − 1)(3 − 1) = 4

Step 3: χ²crit = 13.277. Decision rule: If χ²obt > 13.277, the null will be rejected.

Step 4: Expected frequencies are 315.90 for cell A, 42.69 for B, 13.42 for C, 200.41 for D, 27.08 for E, 8.51 for F, 260.70 for G, 35.23 for H, and 11.07 for I. χ²obt = .003 + .01 + .19 + .10 + .57 + .03 + .11 + .30 + .39 = 1.70
Step 5: The obtained value is less than 13.277, so the null is retained. There is no relationship
between annual income and the frequency of contact with police. Row percentages show that 85%
of people in the lowest-income category, 83% of those in the middle-income category, and 87% of
those in the highest-income group had between zero and two recent contacts. The vast majority of
people have very few annual contacts with officers, irrespective of their income.

17.

1. The SPSS output shows χ²obt = 16.125.

2. The null is rejected at an alpha of .05 because p = .003, and .003 is less than .05.
3. Race and attitudes about courts’ harshness are statistically dependent.
4. Asking SPSS for row percentages shows the majority of people in all racial groups think courts are

not harsh enough, but this percentage is higher among whites (63.0%) than blacks (55.6%) or
members of other racial groups (61.8%). Likewise, blacks are more likely than the other two
groups to say that the courts are overly harsh on offenders (25.9%). The applicable measures of
association are Cramer’s V, lambda, gamma, and tau-c. All of these values show a fairly weak

relationship between these two variables. This makes sense, because people’s attitudes about
courts’ approach to crime control are too complex to be determined by race alone.

19.

1. The SPSS output shows χ²obt = 25.759.

2. The null is rejected at an alpha of .05 because p = .003, and .003 < .05.
3. There is a relationship between race and perceived stop legitimacy. Asking SPSS for row percentages shows that 84.4% of white drivers, 71.3% of black drivers, and 85.5% of drivers of other races thought the stop was for a legitimate reason. Black drivers appear to stand out from nonblack drivers in that they are less likely to believe their stop was legitimate.
4. The null was rejected, so measures of association can be examined. Since both variables are nominal and there is a clear independent and dependent designation, Cramer’s V and lambda are both available. The SPSS output shows that lambda = .000, meaning that the relationship between race and stop legitimacy, while statistically significant, is substantively meaningless. Cramer’s V = .107, also signaling a very weak relationship. This makes sense looking at the percentages from Part C. A clear majority of all drivers believed their stop was for a legitimate reason. Black drivers deviated somewhat, but a large majority still endorsed stop legitimacy.

Chapter 11

Note: Rounding, where applicable, is to two decimal places in each step of calculations and in the final answer. For numbers close to zero, decimals are extended to the first nonzero number. Calculation steps are identical to those in the text; using alternative sequences of steps might result in answers different from those presented here. These differences might or might not alter the final decision regarding the null.

1.
1. whether the defendant pled guilty or went to trial
2. nominal
3. sentence
4. ratio (there is no indication that the sample was narrowed only to those who were incarcerated, so theoretically, zeroes are possible)

3.
1. judge gender
2. nominal
3. sentence severity
4. ratio

5. a

7. a

8. t; z

11. b

13.
Step 1: H0: μ1 = μ2; H1: μ1 ≠ μ2. (Note: No direction of the difference was specified, so the alternative is ≠.)
Step 2: t distribution with df = 155 + 463 − 2 = 616

Step 3: tcrit = ±1.960 (±1.980 would also be acceptable). The decision rule is that if tobt is greater than 1.960 or less than −1.960, the null will be rejected.

Step 4:

Step 5: Since tobt is not greater than 1.960 or less than −1.960, the null is retained. There is no difference between MHOs and SHOs in the diversity of the crimes they commit. In other words, there does not appear to be a relationship between offenders’ status as MHOs or SHOs and the variability in their criminal activity.

15.
Step 1: H0: μ1 = μ2; H1: μ1 < μ2.

Step 2: t distribution with

Step 3: tcrit = −1.658 (−1.645 would also be acceptable). The decision rule is that if tobt is less than −1.658, the null will be rejected.

Step 4: and tobt = −1.79.

Step 5: Since tobt is less than −1.658, the null is rejected. Juveniles younger than 16 at the time of arrest received significantly shorter mean jail sentences relative to juveniles who were older than 16 at arrest. In other words, there appears to be a relationship between age at arrest and sentence severity for juveniles transferred to adult court.

17.
Step 1: H0: μ1 = μ2; H1: μ1 > μ2.

Step 2: t distribution with df = 160 + 181− 2 = 339
Step 3: tcrit = ±1.960 (±1.980 would also be acceptable). The decision rule is that if tobt is greater

than 1.960 or less than −1.960, the null will be rejected.

Step 4:
Step 5: Since tobt is greater than 1.960, the null is rejected. There is a statistically significant
difference between property and drug offenders’ mean fines. In other words, there appears
to be a relationship between crime type and fine amount.

19.
Step 1: H0: μ1= μ2; H1: μ1 ≠ μ2

Step 2: t distribution with df = 5 − 1 = 4
Step 3: tcrit = ±2.776. The decision rule is that if tobt is greater than 2.776 or less than −2.776, the


null will be rejected.

Step 4:
Step 5: Since tobt is not greater than 2.776 or less than −2.776, the null is retained. There is no

difference between states with high and low arrest rates in terms of officer assault. In other words,
there does not appear to be a relationship between arrest rates and officer assaults.
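The independent-samples t tests in this chapter all follow the same pooled-variance arithmetic. A sketch using hypothetical summary statistics (the means and standard deviations below are invented; only the group sizes echo answer 17):

```python
import math

# Pooled-variance t test from summary statistics. The means and
# standard deviations are hypothetical; n1 and n2 mirror answer 17.
n1, mean1, sd1 = 160, 510.0, 120.0
n2, mean2, sd2 = 181, 450.0, 110.0

df = n1 + n2 - 2                                      # 339
sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df    # pooled variance
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))               # standard error
t_obt = (mean1 - mean2) / se

reject = abs(t_obt) > 1.960   # two-tailed decision rule, alpha = .05
```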

21.
Step 1: H0: P1= P2; H1: P1 ≠ P2.

Step 2: z distribution.
Step 3: zcrit = ±1.96 (recall that .50 − .025 = .475). The decision rule is that if zobt is less than −1.96

or greater than 1.96, the null will be rejected.

Step 4:
Step 5: Since zobt is greater than 1.96, the null is rejected. There is a significant difference between

juveniles represented by public attorneys and those represented by private counsel in terms of the
time it takes for their cases to reach disposition. In other words, there appears to be a relationship
between attorney type and time-to-disposition among juvenile drug defendants.
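The two-population z test in answer 21 uses the pooled proportion to build the standard error. A sketch with invented counts (all numbers hypothetical):

```python
import math

# z test for a difference between proportions, pooled standard error.
# Counts are invented for illustration.
n1, x1 = 200, 120   # group 1: sample size and "successes"
n2, x2 = 150, 60    # group 2

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z_obt = (p1 - p2) / se

reject = abs(z_obt) > 1.96   # two-tailed, alpha = .05
```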

23.
1. Equal/pooled variances. Levene’s F = 2.810 with a p value of .095. Since .095 > .05, the F statistic

is not significant at alpha = .05 (i.e., the null of equal variances is retained).
2. tobt = .977

3. No. The p value for tobt is .330, which well exceeds .01; therefore, the null is retained.


4. There is no statistically significant difference between daytime and nighttime stops in terms of
duration. That is, there seems to be no relationship between whether a stop takes place at day or
night and the length of time the stop lasts.

25.
1. Unequal/separate variances. Levene’s F = 36.062 with a p value of .000. Since .000 < .05, the F statistic is significant at alpha = .05 (i.e., the null of equal variances is rejected).
2. tobt = 8.095.
3. Yes. The p value for tobt is .000, which is less than .01; therefore, the null is rejected.
4. There is a statistically significant difference between prosecutors’ offices that do and do not use DNA in plea negotiations and trials in the total number of felony convictions obtained each year. In other words, there is a relationship between DNA usage and total felony convictions. (Though one would suspect, of course, that this relationship is spurious and attributable to the fact that larger prosecutors’ offices process more cases and are more likely to use DNA as compared to smaller offices.)

Chapter 12

Note: Rounding, where applicable, is to two decimal places in each step of calculations and in the final answer. For numbers close to zero, decimals are extended to the first nonzero number. Calculation steps are identical to those in the text; using alternative sequences of steps might result in answers different from those presented here. These differences might or might not alter the final decision regarding the null.

1.
1. judges’ gender
2. nominal
3. sentence severity
4. ratio
5. independent-samples t test

3.
1. arrest
2. nominal
3. recidivism
4. ratio
5. ANOVA

5.
1. poverty
2. ordinal
3. crime rate
4. ratio
5. ANOVA

7. Within-groups variance measures the amount of variability present among different members of the same group. This type of variance is akin to white noise: it is the random fluctuation inevitably present in any group of people, objects, or places. Between-groups variance measures the extent to which groups differ from one another. This type of variance conveys information about whether or not there are actual differences between groups.

9.
The F statistic can never be negative because it is a measure of variance, and variance cannot be negative. Mathematically, variance is a squared measure; all negative numbers are squared during the course of calculations. The final result, then, is always positive.

11.
Step 1: H0: μ1 = μ2 = μ3 and H1: some μi ≠ some μj.

Step 2: F distribution with dfB = 3 − 1 = 2 and dfW = 21 − 3 = 18

Step 3: Fcrit = 3.55 and the decision rule is that if Fobt is greater than 3.55, the null will be rejected.

Step 4: SSB = 7(2.86 − 4.43)² + 7(9.71 − 4.43)² + 7(.71 − 4.43)² = 7(2.46) + 7(27.88) + 7(13.84) = 309.26; SSW = 1,045.14 − 309.26 = 735.88

Step 5: Since Fobt is greater than 3.55, the null is rejected. There is a statistically significant difference in the number of wiretaps authorized per crime type. In other words, wiretaps vary significantly across crime types. Since the null was rejected, it is appropriate to examine omega squared: This means that 21% of the variance in wiretap authorizations is attributable to crime type.

13.
Step 1: H0: μ1 = μ2 = μ3 = μ4 and H1: some μi ≠ some μj.

Step 2: F distribution with dfB = 4 − 1 = 3 and dfW = 23 − 4 = 19

Step 3: Fcrit = 5.01 and the decision rule is that if Fobt is greater than 5.01, the null will be rejected.

Step 4: SSB = 5(1.03 − 2.52)² + 6(2.52 − 2.52)² + 5(2.85 − 2.52)² + 7(3.35 − 2.52)² = 5(2.22) + 6(0) + 5(.11) + 7(.69) = 16.48; SSW = 55.17 − 16.48 = 38.69

Step 5: Since Fobt is less than 5.01, the null is retained. There are no statistically significant differences between regions in terms of the percentage of officer assaults committed with firearms. In other words, there is no apparent relationship between region and firearm involvement in officer assaults. Since the null was retained, it is not appropriate to calculate omega squared.

15.
Step 1: H0: μ1 = μ2 = μ3 = μ4 and H1: some μi ≠ some μj.

Step 2: F distribution with dfB = 4 − 1 = 3 and dfW = 20 − 4 = 16.
Step 3: Fcrit = 5.29 and the decision rule is that if Fobt is greater than 5.29, the null will be rejected.

Step 4: SSB = 6(11.67 − 13.15)² + 5(20.40 − 13.15)² + 5(10.60 − 13.15)² + 4(9.50 − 13.15)² = 6(2.19) + 5(52.56) + 5(6.50) + 4(13.32) = 361.72; SSW = 4,536.55 − 361.72 = 4,174.83

Step 5: Since Fobt is less than 5.29, the null is retained. There are no statistically significant differences between juveniles of different races in the length of probation sentences they receive. In other words, there is no apparent relationship between race and probation sentences among juvenile property offenders. Since the null was retained, it is not appropriate to calculate omega squared.

17.
Step 1: H0: μ1 = μ2 = μ3 = μ4 and H1: some μi ≠ some μj.

Step 2: F distribution with dfB = 4 − 1 = 3 and dfW = 20 − 4 = 16.

Step 3: Fcrit = 3.24 and the decision rule is that if Fobt is greater than 3.24, the null will be rejected.

Step 4: SStotal = 37,062 − 34,777.80 = 2,284.20
SSB = 5(44.20 − 41.70)² + 5(36.20 − 41.70)² + 5(29.00 − 41.70)² + 5(57.40 − 41.70)² = 5(2.50)² + 5(−5.50)² + 5(−12.70)² + 5(15.70)² = 5(6.25) + 5(30.25) + 5(161.29) + 5(246.49) = 31.25 + 151.25 + 806.45 + 1,232.45 = 2,221.40
SSW = 2,284.20 − 2,221.40 = 62.80

Step 5: Since Fobt is greater than 3.24, the null is rejected. There is a statistically significant difference in the percentage of patrol personnel that police managers task with responsibility for engaging in problem solving, depending on agency type. Since the null was rejected, it is appropriate to examine omega squared: This means that 97% of the variance in patrol assignment to problem-oriented tasks exists between agencies (i.e., across groups). Agency type is an important predictor of the extent to which top managers allocate patrol personnel to problem solving.

19.
1. Fobt = 9.631.
2. Yes. The p value is .000, which is less than .01, so the null is rejected.
3.
Among juvenile property defendants, there are significant differences between different racial groups in the amount of time it takes to acquire pretrial release. In other words, there is a relationship between race and time-to-release.
4. Since the null was rejected, post hoc tests can be examined. Tukey and Bonferroni tests show that there is one difference, and it lies between black and white youth. Group means reveal that black youths’ mean time-to-release is 40.36 and white youths’ is 20.80. Hispanics, with a mean of 30.30, appear to fall in the middle and are not significantly different from either of the other groups.
5. Since the null was rejected, it is correct to calculate omega squared: Only about 2.7% of the variance in time-to-release is attributable to race. (This means that important variables are missing! Knowing, for instance, juveniles’ offense types and prior records would likely improve our understanding of the timing of their release.)

21.
1. Fobt = 3.496.
2. The null would be rejected because p = .000, which is less than .05.
3. There is a relationship between shooters’ intentions and victims’ ages. In other words, victim age varies across assaults, accidents, and officer-involved shootings.
4. Since the null was rejected, post hoc tests can be examined. Tukey and Bonferroni both show that the mean age of people shot by police is significantly different from the mean ages of victims of assaults and accidental shootings. The group means are 28.99, 29.04, and 33.55 for people shot unintentionally, in assaults, and by police, respectively. This shows that people shot by police are significantly older than those shot in other circumstances.
5. It is appropriate to calculate and interpret omega squared. Using the SPSS output, this value of omega squared is nearly zero (.01%) and suggests that shooter intent explains a minuscule amount of the variance in victim age.
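The SSB/SSW decomposition used throughout these ANOVA answers, along with omega squared, can be sketched as follows (the group scores are invented for illustration):

```python
import numpy as np

# One-way ANOVA from raw scores: partition total variation into
# between-groups (SSB) and within-groups (SSW) pieces, then form F.
# The three groups below are made-up data.
groups = [np.array([2.0, 3.0, 4.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([1.0, 2.0, 3.0])]

all_x = np.concatenate(groups)
grand = all_x.mean()

ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_b = len(groups) - 1
df_w = len(all_x) - len(groups)
f_obt = (ss_b / df_b) / (ss_w / df_w)

# Omega squared: share of DV variance attributable to group membership.
ms_w = ss_w / df_w
omega_sq = (ss_b - df_b * ms_w) / (ss_b + ss_w + ms_w)
```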
Chapter 13

Note: Rounding, where applicable, is to two decimal places in each step of calculations and in the final answer. For numbers close to zero, decimals are extended to the first nonzero number. Calculation steps are identical to those in the text; using alternative sequences of steps might result in answers different from those presented here. These differences might or might not alter the final decision regarding the null.

1.
1. parental incarceration
2. nominal
3. lifetime incarceration
4. nominal
5. chi-square

3.
1. participation in community meetings
2. nominal
3. self-protective measures
4. ratio
5. two-population z test for a difference between proportions

5. A linear relationship is one in which a single-unit increase in the independent variable is associated with a constant change in the dependent variable. In other words, the magnitude and the direction of the relationship remain constant across all levels of the independent variable. When graphed, the IV–DV overlap appears as a straight line.

7. The line of best fit is the line that minimizes the distance from that line to each of the raw values in the data set. That is, it is the line that produces the smallest deviation scores (or error). No other line would come closer to all of the data points in the sample.

9. c

11.
Step 1: H0: ρ = 0 and H1: ρ ≠ 0.

Step 2: t distribution with df = 5 − 2 = 3.

Step 3: tcrit = ±3.182 and the decision rule is that if tobt is less than −3.182 or greater than 3.182, the null will be rejected.

Step 4:

Step 5: Since tobt is not greater than 3.182, the null is retained. There is no correlation between prison expenditures and violent crime rates. In other words, prison expenditures do not appear to impact violent crime rates. Because the null was retained, it is not appropriate to examine the sign, the magnitude, or the coefficient of determination.

13.
Step 1: H0: ρ = 0 and H1: ρ > 0.

Step 2: t distribution with df = 9 − 2 = 7.
Step 3: tcrit = 1.895 and the decision rule is that if tobt is greater than 1.895, the null will be rejected.

Step 4:

Step 5: Since tobt is greater than 1.895, the null is rejected. There is a positive correlation between


crime concentration and concentration of police agencies. In other words, where there is more
crime, there are apparently more police agencies. Since the null was rejected, the sign, the
magnitude, and the coefficient of determination can be examined. The sign is positive, meaning
that a one-unit increase in the IV is associated with an increase in the DV. The magnitude is very
strong, judging by the guidelines offered in the text (where values between 0 and ±.29 are weak,
from about ±.30 to ±.49 are moderate, ±.50 to ±.69 are strong, and those beyond ±.70 are very

strong). The coefficient of determination is .75² = .56. This means that 56% of the variance in
police agencies can be attributed to crime rates.
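The significance test behind answer 13 is the usual t test for Pearson's r, t = r√(n − 2)/√(1 − r²). Plugging in r = .75 and n = 9 from the answer above reproduces the decision:

```python
import math

# t test for a correlation coefficient; r = .75 and n = 9 come from
# answer 13 above.
r, n = 0.75, 9
df = n - 2                                         # 7
t_obt = r * math.sqrt(df) / math.sqrt(1 - r**2)    # 3.00

reject = t_obt > 1.895    # one-tailed critical value for df = 7
r_squared = r**2          # coefficient of determination, .5625
```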

15.
Step 1: H0: ρ = 0 and H1: ρ < 0.

Step 2: t distribution with df = 7 − 2 = 5.

Step 3: tcrit = −3.365 and the decision rule is that if tobt is less than −3.365, the null will be rejected.

Step 4:

Step 5: Since tobt is not less than −3.365, the null is retained. There is no correlation between handgun and knife murder rates. In other words, handgun murder rates do not appear to affect knife murder rates. Because the null was retained, it is not appropriate to examine the sign, the magnitude, or the coefficient of determination.

17.
Step 1: H0: ρ = 0 and H1: ρ ≠ 0.

Step 2: t distribution with df = 8 − 2 = 6.

Step 3: tcrit = ±2.447 and the decision rule is that if tobt is less than −2.447 or greater than 2.447, the null will be rejected.

Step 4:

Step 5: Since tobt is not greater than 2.447, the null is retained. There is no correlation between age and time-to-disposition among female juveniles. In other words, girls’ ages do not appear to affect the time it takes for their cases to reach adjudication. Because the null was retained, it is not appropriate to examine the sign, the magnitude, or the coefficient of determination.

19.
1. For age and contacts, r = .044; for age and length, r = −.113; for contacts and length, r = .312.
2. For age and contacts, the null is retained because .5556 > .05; for age and length, the null is
