The Psychology Institute

Understanding Case Study Method in Research: A Comprehensive Guide

Have you ever wondered how researchers uncover the nuanced layers of individual experiences or the intricate workings of a particular event? One of the keys to unlocking these mysteries lies in the case study method, a research strategy that might seem straightforward at first glance but is rich with complexity and insightful potential. Let’s dive into the world of case studies and discover why they are such a valuable tool in the arsenal of research methods.

What is a Case Study Method?

At its core, the case study method is a form of qualitative research that involves an in-depth, detailed examination of a single subject, such as an individual, group, organization, event, or phenomenon. It’s a method favored when the boundaries between phenomenon and context are not clearly evident, and where multiple sources of data are used to illuminate the case from various perspectives. This method’s strength lies in its ability to provide a comprehensive understanding of the case in its real-life context.

Historical Context and Evolution of Case Studies

Case studies have been around for centuries, with their roots in medical and psychological research. Over time, their application has spread to disciplines like sociology, anthropology, business, and education. The evolution of this method has been marked by a growing appreciation for qualitative data and the rich, contextual insights it can provide, which quantitative methods may overlook.

Characteristics of Case Study Research

What sets the case study method apart are its distinct characteristics:

  • Intensive Examination: It provides a deep understanding of the case in question, considering the complexity and uniqueness of each case.
  • Contextual Analysis: The researcher studies the case within its real-life context, recognizing that the context can significantly influence the phenomenon.
  • Multiple Data Sources: Case studies often utilize various data sources like interviews, observations, documents, and reports, which provide multiple perspectives on the subject.
  • Participant’s Perspective: This method often focuses on the perspectives of the participants within the case, giving voice to those directly involved.

Types of Case Studies

There are different types of case studies, each suited for specific research objectives:

  • Exploratory: These are conducted before large-scale research projects to help identify questions, select measurement constructs, and develop hypotheses.
  • Descriptive: These involve a detailed, in-depth description of the case, without attempting to determine cause and effect.
  • Explanatory: These are used to investigate cause-and-effect relationships and understand underlying principles of certain phenomena.
  • Intrinsic: This type is focused on the case itself because the case presents an unusual or unique issue.
  • Instrumental: Here, the case is secondary to understanding a broader issue or phenomenon.
  • Collective: These involve studying a group of cases collectively or comparatively to understand a phenomenon, population, or general condition.

The Process of Conducting a Case Study

Conducting a case study involves several well-defined steps:

  • Defining Your Case: What or who will you study? Define the case and ensure it aligns with your research objectives.
  • Selecting Participants: If studying people, careful selection is crucial to ensure they fit the case criteria and can provide the necessary insights.
  • Data Collection: Gather information through various methods like interviews, observations, and reviewing documents.
  • Data Analysis: Analyze the collected data to identify patterns, themes, and insights related to your research question.
  • Reporting Findings: Present your findings in a way that communicates the complexity and richness of the case study, often through narrative.
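The data analysis step above can be made concrete with a minimal sketch of theme tallying, a common first pass in qualitative analysis. The participants, excerpts, and theme labels below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical coded interview excerpts: during analysis, each excerpt
# has been tagged with the themes the researcher identified in it.
coded_excerpts = [
    {"participant": "P1", "themes": ["isolation", "coping"]},
    {"participant": "P2", "themes": ["coping", "support"]},
    {"participant": "P3", "themes": ["isolation", "support", "coping"]},
]

# Tally how often each theme appears across the case's data sources,
# to surface recurring patterns worth closer reading.
theme_counts = Counter(
    theme for excerpt in coded_excerpts for theme in excerpt["themes"]
)

for theme, count in theme_counts.most_common():
    print(theme, count)
```

A tally like this is only a starting point: in a real case study, the counts would be read alongside the context of each excerpt, not in place of it.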

Case Studies in Practice: Real-world Examples

Case studies are not just academic exercises; they have practical applications across many fields. For instance, in business, they can explore consumer behavior or organizational strategies. In psychology, they can provide detailed insight into individual behaviors or conditions. Education often uses case studies to explore teaching methods or learning difficulties.

Advantages of Case Study Research

While the case study method has its critics, it offers several undeniable advantages:

  • Rich, Detailed Data: It captures nuance and complexity that quantitative methods cannot easily represent.
  • Contextual Insights: It provides a deeper understanding of a phenomenon in its natural setting.
  • Contribution to Theory: It can generate and refine theory, offering a foundation for further research.

Limitations and Criticism

However, it’s important to acknowledge the limitations and criticisms:

  • Generalizability: Findings from case studies may not be widely generalizable due to the focus on a single case.
  • Subjectivity: The researcher’s perspective may influence the study, which requires careful reflection and transparency.
  • Time-Consuming: They require a significant amount of time to conduct and analyze properly.

Concluding Thoughts on the Case Study Method

The case study method is a powerful tool that allows researchers to delve into the intricacies of a subject in its real-world environment. While not without its challenges, when executed correctly, the insights garnered can be incredibly valuable, offering depth and context that other methods may miss. Robert K. Yin’s advocacy for this method underscores its potential to illuminate and explain contemporary phenomena, making it an indispensable part of the researcher’s toolkit.

Reflecting on the case study method, how do you think its application could change with the advancements in technology and data analytics? Could such a traditional method be enhanced or even replaced in the future?

What Is a Case Study? | Definition, Examples & Methods

Published on May 8, 2019 by Shona McCombes . Revised on November 20, 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyze the case
  • Other interesting articles

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.

Case study examples

  • Research question: What are the ecological effects of wolf reintroduction? Case study: Wolf reintroduction in Yellowstone National Park
  • Research question: How do populist politicians use narratives about history to gain support? Case studies: Hungarian prime minister Viktor Orbán and US president Donald Trump
  • Research question: How can teachers implement active learning strategies in mixed-level classrooms? Case study: A local school that promotes active learning
  • Research question: What are the main advantages and disadvantages of wind farms for rural communities? Case studies: Three rural wind farm development projects in different parts of the country
  • Research question: How are viral marketing strategies changing the relationship between companies and consumers? Case study: The iPhone X marketing campaign
  • Research question: How do experiences of work in the gig economy differ by gender, race and age? Case studies: Deliveroo and Uber drivers in London

Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Tip: If your research is more practical in nature and aims to simultaneously investigate an issue as you solve it, consider conducting action research instead.

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

Example of an outlying case study: In the 1960s, the town of Roseto, Pennsylvania was discovered to have extremely low rates of heart disease compared to the US average. It became an important case study for understanding previously neglected causes of heart disease.

However, you can also choose a more common or representative case to exemplify a particular category, experience or phenomenon.

Example of a representative case study: In the 1920s, two sociologists used Muncie, Indiana as a case study of a typical American city that supposedly exemplified the changing culture of the US at the time.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

Example of a mixed methods case study: For a case study of a wind farm development in a rural area, you could collect quantitative data on employment rates and business revenue, collect qualitative data on local people’s perceptions and experiences, and analyze local and national media coverage of the development.

The aim is to gain as thorough an understanding as possible of the case and its context.

Step 4: Describe and analyze the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis ).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Psychology Case Study Examples: A Deep Dive into Real-life Scenarios

Peeling back the layers of the human mind is no easy task, but psychology case studies can help us do just that. Through these detailed analyses, we’re able to gain a deeper understanding of human behavior, emotions, and cognitive processes. I’ve always found it fascinating how a single person’s experience can shed light on broader psychological principles.

Over the years, psychologists have conducted numerous case studies, each with their own unique insights and implications. These investigations range from Phineas Gage’s accidental brain injury to Genie Wiley’s tragic tale of isolation. Such examples not only enlighten us about specific disorders or occurrences but also continue to shape our overall understanding of psychology.

As we delve into some noteworthy examples, I’m sure you’ll appreciate how varied and intricate the field of psychology truly is. Whether you’re a budding psychologist or simply an eager learner, brace yourself for an intriguing exploration into the intricacies of the human psyche.

Understanding Psychology Case Studies

Dive headfirst into the world of psychology and you’ll soon come upon a valuable tool used by psychologists and researchers alike: case studies. I’m here to shed some light on these fascinating tools.

Psychology case studies, for those unfamiliar with them, are in-depth investigations carried out to gain a profound understanding of the subject – whether it’s an individual, group or phenomenon. They’re powerful because they provide detailed insights that other research methods might miss.

Let me share a few examples to clarify this concept further:

  • One notable example is Freud’s study on Little Hans. This case study explored a 5-year-old boy’s fear of horses and related it back to Freud’s theories about psychosexual stages.
  • Another classic example is Genie Wiley (a pseudonym), a feral child who was subjected to severe social isolation during her early years. Her heartbreaking story provided invaluable insights into language acquisition and critical periods in development.

You see, what sets psychology case studies apart is their focus on the ‘why’ and ‘how’. While surveys or experiments might tell us ‘what’, they often don’t dig deep enough into the inner workings behind human behavior.

It’s important though not to take these psychology case studies at face value. As enlightening as they can be, we must remember that they usually focus on one specific instance or individual. Thus, generalizing findings from single-case studies should be done cautiously.

To illustrate my point using numbers: let’s say we have 1 million people suffering from condition X worldwide; if only 20 unique cases have been studied so far (which would be quite typical for rare conditions), then our understanding is based on just 0.002% of the total cases! That’s why multiple sources and types of research are vital when trying to understand complex psychological phenomena fully.

  • Number of people with condition X: 1,000,000
  • Number of unique cases studied: 20
  • Percentage studied: 0.002%
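The arithmetic behind that percentage is easy to verify directly. A quick sketch, using the same hypothetical figures as the example above:

```python
# Hypothetical figures from the example above.
total_with_condition = 1_000_000  # people with condition X worldwide
unique_cases_studied = 20         # unique published case studies

# Share of all cases that the research literature actually covers.
coverage_pct = unique_cases_studied / total_with_condition * 100
print(f"{coverage_pct:.3f}% of cases studied")
```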

In the grand scheme of things, psychology case studies are just one piece of the puzzle – albeit an essential one. They provide rich, detailed data that can form the foundation for further research and understanding. As we delve deeper into this fascinating field, it’s crucial to appreciate all the tools at our disposal – from surveys and experiments to these insightful case studies.

Importance of Case Studies in Psychology

I’ve always been fascinated by the human mind, and if you’re here, I bet you are too. Let’s dive right into why case studies play such a pivotal role in psychology.

One of the key reasons they matter so much is because they provide detailed insights into specific psychological phenomena. Unlike other research methods that might use large samples but only offer surface-level findings, case studies allow us to study complex behaviors, disorders, and even treatments at an intimate level. They often serve as a catalyst for new theories or help refine existing ones.

To illustrate this point, let’s look at one of psychology’s most famous case studies – Phineas Gage. He was a railroad construction foreman who survived a severe brain injury when an iron rod shot through his skull during an explosion in 1848. The dramatic personality changes he experienced after his accident led to significant advancements in our understanding of the brain’s role in personality and behavior.

Moreover, it’s worth noting that some rare conditions can only be studied through individual cases due to their uncommon nature. For instance, consider Genie Wiley – a girl discovered at age 13 having spent most of her life locked away from society by her parents. Her tragic story gave psychologists valuable insights into language acquisition and critical periods for learning.

Finally, and importantly, case studies also have practical applications for clinicians and therapists. Studying real-life examples can inform treatment plans and provide guidance on how theoretical concepts might apply to actual client situations.

  • Detailed insights: Case studies offer comprehensive views on specific psychological phenomena.
  • Catalyst for new theories: Real-life scenarios help shape our understanding of psychology.
  • Study rare conditions: Unique cases can offer invaluable lessons about uncommon disorders.
  • Practical applications: Clinicians benefit from studying real-world examples.

In short, it’s clear that case studies hold immense value within psychology – they illuminate what textbooks often can’t, offering a more nuanced understanding of human behavior.

Different Types of Psychology Case Studies

Diving headfirst into the world of psychology, I can’t help but be fascinated by the myriad types of case studies that revolve around this subject. Let’s take a closer look at some of them.

Firstly, we’ve got what’s known as ‘Explanatory Case Studies’. These are often used when a researcher wants to clarify complex phenomena or concepts. For example, a psychologist might use an explanatory case study to explore the reasons behind aggressive behavior in children.

Second on our list are ‘Exploratory Case Studies’, typically utilized when new and unexplored areas of research come up. They’re like pioneers; they pave the way for future studies. In psychological terms, exploratory case studies could be conducted to investigate emerging mental health conditions or under-researched therapeutic approaches.

Next up are ‘Descriptive Case Studies’. As the name suggests, these focus on depicting comprehensive and detailed profiles about a particular individual, group, or event within its natural context. A well-known example would be Sigmund Freud’s analysis of “Anna O”, which provided unique insights into hysteria.

Then there are ‘Intrinsic Case Studies’, which delve deep into one specific case because it is intrinsically interesting or unique in some way. It’s like shining a spotlight onto an exceptional phenomenon. An instance would be studying savants: individuals with extraordinary abilities despite significant mental disabilities.

Lastly, we have ‘Instrumental Case Studies’. These aren’t focused on understanding a particular case per se but use it as an instrument to understand something else altogether—a bit like using one puzzle piece to make sense of the whole picture!

So there you have it! From explanatory to instrumental, each type serves its own unique purpose and adds another intriguing layer to our understanding of human behavior and cognition.

Exploring Real-Life Psychology Case Study Examples

Let’s roll up our sleeves and delve into some real-life psychology case study examples. By digging deep, we can glean valuable insights from these studies that have significantly contributed to our understanding of human behavior and mental processes.

First off, let me share the fascinating case of Phineas Gage. This gentleman was a 19th-century railroad construction foreman who survived an accident where a large iron rod was accidentally driven through his skull, damaging his frontal lobes. Astonishingly, he could walk and talk immediately after the accident but underwent dramatic personality changes, becoming impulsive and irresponsible. This case is often referenced in discussions about brain injury and personality change.

Next on my list is Genie Wiley’s heart-wrenching story. She was a victim of severe abuse and neglect, socially isolated until she was 13 years old. Because of this horrific experience, Genie could not acquire language skills the way other children typically do during their developmental stages. Her tragic story offers invaluable insight into the critical periods for language development in children.

Then there’s ‘Little Hans’, a classic Freudian case that delves into child psychology. At just five years old, Little Hans developed an irrational fear of horses (or so it seemed), which Sigmund Freud interpreted as symbolic anxiety stemming from suppressed sexual desires towards his mother. Quite an interpretation! The study became a cornerstone of Freud’s Oedipus complex theory.

Lastly, I’d like to mention Patient H.M., an individual who became amnesiac following surgery to control seizures by removing parts of his hippocampus bilaterally. His inability to form new memories post-operation shed light on how different areas of our brains contribute to memory formation.

Each one of these real-life psychology case studies gives us a unique window into understanding complex human behaviors better – whether it’s dissecting the role our brain plays in shaping personality or unraveling the mysteries of fear, language acquisition, and memory.

How to Analyze a Psychology Case Study

Diving headfirst into a psychology case study, I understand it can seem like an intimidating task. But don’t worry, I’m here to guide you through the process.

First off, it’s essential to go through the case study thoroughly. Read it multiple times if needed. Each reading will likely reveal new information or perspectives you may have missed initially. Look out for any patterns or inconsistencies in the subject’s behavior and make note of them.

Next on your agenda should be understanding the theoretical frameworks that might be applicable in this scenario. Is there a cognitive-behavioral approach at play? Or does psychoanalysis provide better insights? Comparing these theories with observed behavior and symptoms can help shed light on underlying psychological issues.

Now, let’s talk data interpretation. If your case study includes raw data like surveys or diagnostic test results, you’ll need to analyze them carefully. Here are some steps that could help:

  • Identify what each piece of data represents
  • Look for correlations between different pieces of data
  • Compute statistics (mean, median, mode) if necessary
  • Use graphs or charts for visual representation

Keep in mind: interpreting raw data requires both statistical knowledge and intuition about human behavior.

Finally, drafting conclusions is key in analyzing a psychology case study. Based on your observations, evaluations of theoretical approaches and interpretations of any given data – what do you conclude about the subject’s mental health status? Remember not to jump to conclusions hastily but instead base them solidly on evidence from your analysis.

In all this journey of analysis remember one thing: every person is unique and so are their experiences! So while theories and previous studies guide us, they never define an individual completely.

Applying Lessons from Psychology Case Studies

Let’s dive into how we can apply the lessons learned from psychology case studies. If you’ve ever studied psychology, you’ll know that case studies offer rich insights. They shed light on human behavior, mental health issues, and therapeutic techniques. But it’s not just about understanding theory. It’s also about implementing these valuable lessons in real-world situations.

One of the most famous psychological case studies is Phineas Gage’s story. This 19th-century railroad worker survived a severe brain injury which dramatically altered his personality. From this study, we gained crucial insight into how different brain areas are responsible for various aspects of our personality and behavior.

  • Lesson: Recognizing that damage to specific brain areas can result in personality changes, enabling us to better understand certain mental conditions.

Sigmund Freud’s work with a patient known as ‘Anna O.’ is another landmark psychology case study. Anna displayed what was then called hysteria – symptoms included hallucinations and disturbances in speech and physical coordination – which Freud linked back to repressed memories of traumatic events.

  • Lesson: The importance of exploring an individual’s history for understanding their current psychological problems – a principle at the heart of psychoanalysis.

Then there’s Genie Wiley’s case – a girl who suffered extreme neglect resulting in impaired social and linguistic development. Researchers used her tragic circumstances as an opportunity to explore theories around language acquisition and socialization.

  • Lesson: Reinforcing the critical role early childhood experiences play in shaping cognitive development.

Lastly, let’s consider the Stanford Prison Experiment led by Philip Zimbardo examining how people conform to societal roles even when they lead to immoral actions.

  • Lesson: Highlighting that situational forces can drastically impact human behavior beyond personal characteristics or morality.

These examples demonstrate that psychology case studies aren’t just academic exercises isolated from daily life. Instead, they provide profound lessons that help us make sense of complex human behaviors, mental health issues, and therapeutic strategies. By understanding these studies, we’re better equipped to apply their lessons in our own lives – whether navigating personal relationships, collaborating with diverse teams, or pursuing self-improvement.

Challenges and Critiques of Psychological Case Studies

Delving into the world of psychological case studies, it’s not all rosy. Sure, they offer an in-depth understanding of individual behavior and mental processes. Yet, they’re not without their share of challenges and criticisms.

One common critique is the lack of generalizability. Each case study is unique to its subject. We can’t always apply what we learn from one person to everyone else. I’ve come across instances where results varied dramatically between similar subjects, highlighting the inherent unpredictability in human behavior.

Another challenge lies within ethical boundaries. Often, sensitive information surfaces during these studies that could potentially harm the subject if disclosed improperly. To put it plainly, maintaining confidentiality while delivering a comprehensive account isn’t always easy.

Distortion due to subjective interpretations also poses substantial difficulties for psychologists conducting case studies. The researcher’s own bias may color their observations and conclusions – leading to skewed outcomes or misleading findings.

Moreover, there’s an ongoing debate about the scientific validity of case studies because they rely heavily on qualitative data rather than quantitative analysis. Some argue this makes them less reliable or objective when compared with other research methods such as experiments or surveys.

To summarize:

  • Lack of generalizability
  • Ethical dilemmas concerning privacy
  • Potential distortion through subjective interpretation
  • Questions about scientific validity

While these critiques present significant challenges, they do not diminish the value that psychological case studies bring to our understanding of human behavior and mental health struggles.

Conclusion: The Impact of Case Studies in Understanding Human Behavior

Case studies play a pivotal role in shedding light on human behavior. Throughout this article, I’ve discussed numerous examples that illustrate just how powerful these studies can be. Yet it’s the impact they have on our understanding of human psychology where their true value lies.

Take for instance the iconic study of Phineas Gage. It was through his tragic accident and subsequent personality change that we began to grasp the profound influence our frontal lobes have on our behavior. Without such a case study, we might still be in the dark about this crucial aspect of our neurology.

Let’s also consider Genie, the feral child who showed us the critical importance of social interaction during early development. Her heartbreaking story underscores just how vital appropriate nurturing is for healthy mental and emotional growth.

Here are some key takeaways from these case studies:

  • Our brain structure significantly influences our behavior.
  • Social interaction during formative years is vital for normal psychological development.
  • Studying individual cases can reveal universal truths about human nature.

What stands out though, is not merely what these case studies teach us individually but collectively. They remind us that each person constitutes a unique combination of various factors—biological, psychological, and environmental—that shape their behavior.

One cannot overstate the significance of case studies in psychology—they are more than mere stories or isolated incidents; they’re windows into the complexities and nuances of human nature itself.

In wrapping up, I’d say that while statistics give us patterns and trends to understand groups, it’s these detailed narratives offered by case studies that help us comprehend individuals’ unique experiences within those groups—making them an invaluable part of psychological research.


Case Study Research

  • First Online: 29 September 2022


  • Robert E. White (ORCID: 0000-0002-8045-164X)
  • Karyn Cooper


As a footnote to the previous chapter, there is such a beast known as the ethnographic case study. Ethnographic case study has found its way into this chapter rather than into the previous one because of grammatical considerations. Simply put, the “case study” part of the phrase is the noun (with “case” as an adjective defining what kind of study it is), while the “ethnographic” part of the phrase is an adjective defining the type of case study that is being conducted. As such, the case study becomes the methodology, while the ethnography part refers to a method, mode or approach relating to the development of the study.

“The experiential account that we get from a case study or qualitative research of a similar vein is just so necessary. How things happen over time and the degree to which they are subject to personality and how they are only gradually perceived as tolerable or intolerable by the communities and the groups that are involved is so important.” (Robert Stake, University of Illinois at Urbana-Champaign)


Bartlett, L., & Vavrus, F. (2017). Rethinking case study research . Routledge.

Bauman, Z. (2000). Liquid modernity . Polity Press.

Bhaskar, R., & Danermark, B. (2006). Metatheory, interdisciplinarity and disability research: A critical realist perspective. Scandinavian Journal of Disability Research, 8 (4), 278–297.

Bulmer, M. (1986). The Chicago School of sociology: Institutionalization, diversity, and the rise of sociological research . University of Chicago Press.

Campbell, D. T. (1975). Degrees of freedom and the case study. Comparative Political Studies, 8 (1), 178–191.

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research . Houghton Mifflin.

Chua, W. F. (1986). Radical developments in accounting thought. The Accounting Review, 61 (4), 601–632.

Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Sage.

Creswell, J. W., & Poth, C. N. (2017). Qualitative inquiry and research design . Sage.

Davey, L. (1991). The application of case study evaluations. Practical Assessment, Research, & Evaluation 2 (9) . Retrieved May 28, 2018, from http://PAREonline.net/getvn.asp?v=2&n=9

Demetriou, H. (2017). The case study. In E. Wilson (Ed.), School-based research: A guide for education students (pp. 124–138). Sage.

Denzin, N. K., & Lincoln, Y. S. (2005). The Sage handbook of qualitative research . Sage.

Flyvbjerg, B. (2004). Five misunderstandings about case-study research. In C. Seale, G. Gobo, J. F. Gubrium, & D. Silverman (Eds.), Qualitative research practice (pp. 420–433). Sage.

Hamel, J., Dufour, S., & Fortin, D. (1993). Case study methods . Sage.

Healy, M. E. (1947). Le Play’s contribution to sociology: His method. The American Catholic Sociological Review, 8 (2), 97–110.

Johansson, R. (2003). Case study methodology. [Keynote speech]. In International Conference “Methodologies in Housing Research.” Royal Institute of Technology, Stockholm, September 2003 (pp. 1–14).

Klonoski, R. (2013). The case for case studies: Deriving theory from evidence. Journal of Business Case Studies, 9 (31), 261–266.

McDonough, J., & McDonough, S. (1997). Research methods for English language teachers . Routledge.

Merriam, S. B. (1998). Qualitative research and case study applications in education . Jossey-Bass.

Miles, M. B. (1979). Qualitative data as an attractive nuisance: The problem of analysis. Administrative Science Quarterly, 24 (4), 590–601.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Sage.

Mills, A. J., Durepos, G., & Wiebe, E. (Eds.). (2010). What is a case study? In Encyclopedia of case study research (Vols. I–II). Sage.

National Film Board of Canada. (2012, April). Here at home: In search of the real cost of homelessness . [Web documentary]. Retrieved February 9, 2020, from http://athome.nfb.ca/#/athome/home

Popper, K. (2002). Conjectures and refutations: The growth of scientific knowledge . Routledge.

Ridder, H.-G. (2017). The theory contribution of case study research designs. Business Research, 10 (2), 281–305.

Rolls, G. (2005). Classic case studies in psychology . Hodder Education.

Seawright, J., & Gerring, J. (2008). Case selection techniques in case study research: A menu of qualitative and quantitative options. Political Research Quarterly, 61 , 294–308.

Stake, R. E. (1995). The art of case study research . Sage.

Stake, R. E. (2005). Multiple case study analysis . Guilford Press.

Swanborn, P. G. (2010). Case study research: What, why and how? Sage.

Thomas, W. I., & Znaniecki, F. (1996). The Polish peasant in Europe and America: A classic work in immigration history . University of Illinois Press.

Yin, R. K. (1981). The case study crisis: Some answers. Administrative Science Quarterly, 26 (1), 58–65.

Yin, R. K. (1991). Advancing rigorous methodologies: A review of “Towards Rigor in Reviews of Multivocal Literatures…”. Review of Educational Research, 61 (3), 299–305.

Yin, R. K. (1999). Enhancing the quality of case studies in health services research. Health Services Research, 34 (5) Part II, 1209–1224.

Yin, R. K. (2012). Applications of case study research (3rd ed.). Sage.

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Sage.

Zaretsky, E. (1996). Introduction. In W. I. Thomas & F. Znaniecki (Eds.), The Polish peasant in Europe and America: A classic work in immigration history (pp. vii–xvii). University of Illinois Press.

Download references

Author information

Authors and Affiliations

Faculty of Education, St. Francis Xavier University, Antigonish, NS, Canada

Robert E. White

OISE, University of Toronto, Toronto, ON, Canada

Karyn Cooper

Corresponding author

Correspondence to Robert E. White.

A Case in Case Study Methodology

Christine Benedichte Meyer

Norwegian School of Economics and Business Administration

Meyer, C. B. (2001). A case in case study methodology. Field Methods, 13 (4), 329–352.

The purpose of this article is to provide a comprehensive view of the case study process from the researcher’s perspective, emphasizing methodological considerations. As opposed to other qualitative or quantitative research strategies, such as grounded theory or surveys, there are virtually no specific requirements guiding case research. This is both the strength and the weakness of this approach. It is a strength because it allows tailoring the design and data collection procedures to the research questions. On the other hand, this approach has resulted in many poor case studies, leaving it open to criticism, especially from the quantitative field of research. This article argues that there is a particular need in case studies to be explicit about the methodological choices one makes. This implies discussing the wide range of decisions concerned with design requirements, data collection procedures, data analysis, and validity and reliability. The approach here is to illustrate these decisions through a particular case study of two mergers in the financial industry in Norway.

In the past few years, a number of books have been published that give useful guidance in conducting qualitative studies (Gummesson 1988; Cassell & Symon 1994; Miles & Huberman 1994; Creswell 1998; Flick 1998; Rossman & Rallis 1998; Bryman & Burgess 1999; Marshall & Rossman 1999; Denzin & Lincoln 2000). One approach often mentioned is the case study (Yin 1989). Case studies are widely used in organizational studies in the social science disciplines of sociology, industrial relations, and anthropology (Hartley 1994). Such a study consists of detailed investigation of one or more organizations, or groups within organizations, with a view to providing an analysis of the context and processes involved in the phenomenon under study.

As opposed to other qualitative or quantitative research strategies, such as grounded theory (Glaser and Strauss 1967) or surveys (Nachmias & Nachmias 1981), there are virtually no specific requirements guiding case research. Yin (1989) and Eisenhardt (1989) give useful insights into the case study as a research strategy, but leave most of the design decisions on the table. This is both the strength and the weakness of this approach. It is a strength because it allows tailoring the design and data collection procedures to the research questions. On the other hand, this approach has resulted in many poor case studies, leaving it open to criticism, especially from the quantitative field of research (Cook and Campbell 1979). The fact that the case study is a rather loose design implies that there are a number of choices that need to be addressed in a principled way.

Although case studies have become a common research strategy, the scope of methodology sections in articles published in journals is far too limited to give the readers a detailed and comprehensive view of the decisions taken in the particular studies, and, given the format of methodology sections, will remain so. The few books (Yin 1989, 1993; Hamel, Dufour, & Fortin 1993; Stake 1995) and book chapters on case studies (Hartley 1994; Silverman 2000) are, on the other hand, mainly normative and span a broad range of different kinds of case studies. One exception is Pettigrew (1990, 1992), who places the case study in the context of a research tradition (the Warwick process research).

Given the contextual nature of the case study and its strength in addressing contemporary phenomena in real-life contexts, I believe that there is a need for articles that provide a comprehensive overview of the case study process from the researcher’s perspective, emphasizing methodological considerations. This implies addressing the whole range of choices concerning specific design requirements, data collection procedures, data analysis, and validity and reliability.

WHY A CASE STUDY?

Case studies are tailor-made for exploring new processes or behaviors or ones that are little understood (Hartley 1994). Hence, the approach is particularly useful for responding to how and why questions about a contemporary set of events (Leonard-Barton 1990). Moreover, researchers have argued that certain kinds of information can be difficult or even impossible to tackle by means other than qualitative approaches such as the case study (Sykes 1990). Gummesson (1988:76) argues that an important advantage of case study research is the opportunity for a holistic view of the process: “The detailed observations entailed in the case study method enable us to study many different aspects, examine them in relation to each other, view the process within its total environment and also use the researchers’ capacity for ‘verstehen.’ ”

The contextual nature of the case study is illustrated in Yin’s (1993:59) definition of a case study as an empirical inquiry that “investigates a contemporary phenomenon within its real-life context and addresses a situation in which the boundaries between phenomenon and context are not clearly evident.”

The key difference between the case study and other qualitative designs such as grounded theory and ethnography (Glaser & Strauss 1967; Strauss & Corbin 1990; Gioia & Chittipeddi 1991) is that the case study is open to the use of theory or conceptual categories that guide the research and analysis of data. In contrast, grounded theory or ethnography presupposes that theoretical perspectives are grounded in and emerge from firsthand data. Hartley (1994) argues that without a theoretical framework, the researcher is in severe danger of providing description without meaning. Gummesson (1988) says that a lack of preunderstanding will cause the researcher to spend considerable time gathering basic information. This preunderstanding may arise from general knowledge such as theories, models, and concepts or from specific knowledge of institutional conditions and social patterns. According to Gummesson, the key is not to require researchers to have split but dual personalities: “Those who are able to balance on a razor’s edge using their pre-understanding without being its slave” (p. 58).

DESCRIPTION OF THE ILLUSTRATIVE STUDY

The study that will be used for illustrative purposes is a comparative and longitudinal case study of organizational integration in mergers and acquisitions taking place in Norway. The study had two purposes: (1) to identify contextual factors and features of integration that facilitated or impeded organizational integration, and (2) to study how the three dimensions of organizational integration (integration of tasks, unification of power, and integration of cultures and identities) interrelated and evolved over time. Examples of contextual factors were relative power, degree of friendliness, and economic climate. Integration features included factors such as participation, communication, and allocation of positions and functions.

Mergers and acquisitions are inherently complex. Researchers in the field have suggested that managers continuously underestimate the task of integrating the merging organizations in the postintegration process (Haspeslagh & Jemison 1991). The process of organizational integration can lead to sharp interorganizational conflict as the different top management styles, organizational and work unit cultures, systems, and other aspects of organizational life come into contact (Blake & Mouton 1985; Schweiger & Walsh 1990; Cartwright & Cooper 1993). Furthermore, cultural change in mergers and acquisitions is compounded by additional uncertainties, ambiguities, and stress inherent in the combination process (Buono & Bowditch 1989).

I focused on two combinations: one merger and one acquisition. The first case was a merger between two major Norwegian banks, Bergen Bank and DnC (to be named DnB), that started in the late 1980s. The second case was a major acquisition in the insurance industry, Gjensidige’s acquisition of Forenede, which started in the early 1990s. Both combinations aimed to realize operational synergies through merging the two organizations into one entity. This implied disruption of organizational boundaries and a threat to the existing power distribution and organizational cultures.

The study of integration processes in mergers and acquisitions illustrates the need to find a design that opens for exploration of sensitive issues such as power struggles between the two merging organizations. Furthermore, the inherent complexity in the integration process, involving integration of tasks, unification of power, and cultural integration stressed the need for in-depth study of the phenomenon over time. To understand the cultural integration process, the design also had to be linked to the past history of the two organizations.

DESIGN DECISIONS

In the introduction, I stressed that a case is a rather loose design that requires that a number of design choices be made. In this section, I go through the most important choices I faced in the study of organizational integration in mergers and acquisitions. These include: (1) selection of cases; (2) sampling time; (3) choosing business areas, divisions, and sites; and (4) selection of and choices regarding data collection procedures, interviews, documents, and observation.

Selection of Cases

There are several choices involved in selecting cases. First, there is the question of how many cases to include. Second, one must sample cases and decide on a unit of analysis. I will explore these issues subsequently.

Single or Multiple Cases

Case studies can involve single or multiple cases. The problem of single cases is limitations in generalizability and several information-processing biases (Eisenhardt 1989).

One way to respond to these biases is by applying a multi-case approach (Leonard-Barton 1990). Multiple cases augment external validity and help guard against observer biases. Moreover, multi-case sampling adds confidence to findings: by looking at a range of similar and contrasting cases, we can understand a single-case finding, grounding it by specifying how and where and, if possible, why it behaves as it does (Miles & Huberman 1994).

Given these limitations of the single case study, it is desirable to include more than one case in a study. However, the desire for depth, a pluralist perspective, and tracking the cases over time implies that the number of cases must be fairly few. I chose two cases, which clearly does not support generalizability any more than one case would, but which allows for comparison and contrast between the cases as well as a deeper and richer look at each.

Originally, I planned to include a third case in the study. Due to changes in management during the initial integration process, however, my access to that case became limited, and I dropped it entirely. A positive side effect was that this allowed a deeper investigation of the two original cases; in hindsight, it turned out to be a good decision.

Sampling Cases

The logic of sampling cases is fundamentally different from statistical sampling. The logic in case studies involves theoretical sampling, in which the goal is to choose cases that are likely to replicate or extend the emergent theory or to fill theoretical categories and provide examples for polar types (Eisenhardt 1989). Hence, whereas quantitative sampling concerns itself with representativeness, qualitative sampling seeks information richness and selects the cases purposefully rather than randomly (Crabtree and Miller 1992).

The choice of cases was guided by the recommendations of George (1979) and Pettigrew (1990). The aim was to find cases that matched the three dimensions of the dependent variable and provided variation in the contextual factors, thus representing polar cases.

To match the choice of outcome variable, organizational integration, I chose cases in which the purpose was to fully consolidate the merging parties’ operations. A full consolidation would imply considerable disruption in the organizational boundaries and would be expected to affect the task-related, political, and cultural features of the organizations. As for the contextual factors, the two cases varied in contextual factors such as relative power, friendliness, and economic climate. The DnB merger was a friendly combination between two equal partners in an unfriendly economic climate. Gjensidige’s acquisition of Forenede was, in contrast, an unfriendly and unbalanced acquisition in a friendly economic climate.

Unit of Analysis

Another way to respond to researchers’ and respondents’ biases is to have more than one unit of analysis in each case (Yin 1993). This implies that, in addition to developing contrasts between the cases, researchers can focus on contrasts within the cases (Hartley 1994). In case studies, there is a choice of a holistic or embedded design (Yin 1989). A holistic design examines the global nature of the phenomenon, whereas an embedded design also pays attention to subunit(s).

I used an embedded design to analyze the cases (i.e., within each case, I also gave attention to subunits and subprocesses). In both cases, I compared the combination processes in the various divisions and local networks. Moreover, I compared three distinct change processes in DnB: before the merger, during the initial combination, and two years after the merger. The overall and most important unit of analysis in the two cases was, however, the integration process.

Sampling Time

According to Pettigrew (1990), time sets a reference for what changes can be seen and how those changes are explained. When conducting a case study, there are several important issues to decide when sampling time. The first regards how many times data should be collected; the second concerns when to enter the organizations. There is also a need to decide whether to collect data on a continuous basis or in distinct periods.

Number of data collections. I studied the process by collecting real time and retrospective data at two points in time, with one-and-a-half- and two-year intervals in the two cases. Collecting data twice had some interesting implications for the interpretations of the data. During the first data collection in the DnB study, for example, I collected retrospective data about the premerger and initial combination phase and real-time data about the second step in the combination process.

Although I gained a picture of how the employees experienced the second stage of the combination process, it was too early to assess the effects of this process at that stage. I entered the organization two years later and found interesting effects that I had not anticipated the first time. Moreover, it was interesting to observe how people’s attitudes toward the merger processes changed over time to be more positive and less emotional.

When to enter the organizations. It would be desirable to have had the opportunity to collect data in the precombination processes. However, researchers are rarely given access in this period due to secrecy. The emphasis in this study was to focus on the postcombination process. As such, the precombination events were classified as contextual factors. This implied that it was most important to collect real-time data after the parties had been given government approval to merge or acquire. What would have been desirable was to gain access earlier in the postcombination process. This was not possible because access had to be negotiated. Due to the change of CEO in the middle of the merger process and the need for renegotiating access, this took longer than expected.

Regarding the second case, I was restricted by the time frame of the study. In essence, I had to choose between entering the combination process as soon as governmental approval was given, or entering the organization at a later stage. In light of the previous studies in the field that have failed to go beyond the initial two years, and given the need to collect data about the cultural integration process, I chose the latter strategy. And I decided to enter the organizations at two distinct periods of time rather than on a continuous basis.

There were several reasons for this approach, some methodological and some practical. First, data collection on a continuous basis would have required use of extensive observation that I didn’t have access to, and getting access to two data collections in DnB was difficult in itself. Second, I had a stay abroad between the first and second data collection in Gjensidige. Collecting data on a continuous basis would probably have allowed for better mapping of the ongoing integration process, but the contrasts between the two different stages in the integration process that I wanted to elaborate would probably be more difficult to detect. In Table 1 I have listed the periods of time in which I collected data in the two combinations.

Sampling Business Areas, Divisions, and Sites

Even when the cases for a study have been chosen, it is often necessary to make further choices within each case to make the cases researchable. The most important criteria that set the boundaries for the study are importance or criticality, relevance, and representativeness. At the time of the data collection, my criteria for making these decisions were not as conscious as they may appear here. Rather, being restricted by time and my own capacity as a researcher, I had to limit the sites and act instinctively. In both cases, I decided to concentrate on the core businesses (criticality criterion) and left out the business units that were only mildly affected by the integration process (relevance criterion). In the choice of regional offices, I used the representativeness criterion as the number of offices widely exceeded the number of sites possible to study. In making these choices, I relied on key informants in the organizations.

SELECTION OF DATA COLLECTION PROCEDURES

The choice of data collection procedures should be guided by the research question and the choice of design. The case study approach typically combines data collection methods such as archives, interviews, questionnaires, and observations (Yin 1989). This triangulated methodology provides stronger substantiation of constructs and hypotheses. However, the choice of data collection methods is also subject to constraints in time, financial resources, and access.

I chose a combination of interviews, archives, and observation, with the main emphasis on the first two. Conducting a survey was inappropriate due to the lack of established concepts and indicators. Observation, on the other hand, was limited because of problems in obtaining access early in the study and because of time and resource constraints. In addition to choosing among several different data collection methods, there are a number of choices to be made for each individual method.

When relying on interviews as the primary data collection method, the issue of building trust between the researcher and the interviewees becomes very important. I addressed this issue by several means. First, I established a procedure of how to approach the interviewees. In most cases, I called them first, then sent out a letter explaining the key features of the project and outlining the broad issues to be addressed in the interview. In this letter, the support from the institution’s top management was also communicated. In most cases, the top management’s support of the project was an important prerequisite for the respondent’s input. Some interviewees did, however, fear that their input would be open to the top management without disguising the information source. Hence, it became important to communicate how I intended to use and store the information.

To establish trust, I also actively used my preunderstanding of the context in the first case and the phenomenon in the second case. As I built up an understanding of the cases, I used this information to gain confidence. The active use of my preunderstanding did, however, pose important challenges in not revealing too much of the research hypotheses and in balancing between asking open-ended questions and appearing knowledgeable.

There are two sets of choices involved in conducting interviews. The first concerns the sampling of interviewees; the second concerns issues such as the structure of the interviews, the use of a tape recorder, and the involvement of other researchers.

Sampling Interviewees

Following the desire for detailed knowledge of each case and for grasping different participants’ views, the aim was, in line with Pettigrew (1990), to apply a pluralist view by describing and analyzing competing versions of reality as seen by actors in the combination processes.

I used four criteria for sampling informants. First, I drew informants from populations representing multiple perspectives. The first data collection in DnB was primarily focused on the top management level. Moreover, most middle managers in the first data collection were employed at the head offices, either in Bergen or Oslo. In the second data collection, I compensated for this skew by including eight local middle managers in the sample. The difference between the number of employees interviewed in DnB and Gjensidige was primarily due to the fact that Gjensidige has three unions, whereas DnB only has one. The distribution of interviewees is outlined in Table 2 .

The second criterion was to use multiple informants. According to Glick et al. (1990), an important advantage of using multiple informants is that the validity of information provided by one informant can be checked against that provided by other informants. Moreover, the validity of the data used by the researcher can be enhanced by resolving the discrepancies among different informants’ reports. Hence, I selected multiple respondents from each perspective.
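The logic of checking one informant’s account against others’ can be sketched in code. This is only an illustrative sketch: the informants, themes, and coded answers below are invented for the example, not the study’s actual data.

```python
# Hypothetical sketch of cross-checking multiple informants' reports.
# Each record: (informant, perspective, theme, coded answer).
from collections import defaultdict

reports = [
    ("informant_1", "top_management",    "integration_speed", "fast"),
    ("informant_2", "middle_management", "integration_speed", "slow"),
    ("informant_3", "union",             "integration_speed", "slow"),
    ("informant_1", "top_management",    "culture_clash",     "minor"),
    ("informant_2", "middle_management", "culture_clash",     "minor"),
]

def find_discrepancies(reports):
    """Group coded answers by theme and flag themes where informants disagree."""
    by_theme = defaultdict(set)
    for _, _, theme, answer in reports:
        by_theme[theme].add(answer)
    # Themes with more than one distinct answer need follow-up with informants.
    return {theme: answers for theme, answers in by_theme.items() if len(answers) > 1}

discrepant = find_discrepancies(reports)
# 'discrepant' maps each contested theme to the set of competing answers.
```

Themes on which all informants agree drop out; the remaining ones mark discrepancies to be resolved, in line with Glick et al.’s (1990) argument.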

Third, I focused on key informants who were expected to be knowledgeable about the combination process. These people included top management members, managers, and employees involved in the integration project. To validate the information from these informants, I also used a fourth criterion by selecting managers and employees who had been affected by the process but who were not involved in the project groups.

Structured versus unstructured. In line with the exploratory nature of the study, the goal of the interviews was to see the research topic from the perspective of the interviewee, and to understand why he or she came to have this particular perspective. To meet this goal, King (1994:15) recommends that one have “a low degree of structure imposed on the interviewer, a preponderance of open questions, a focus on specific situations and action sequences in the world of the interviewee rather than abstractions and general opinions.” In line with these recommendations, the collection of primary data in this study consists of unstructured interviews.

Using tape recorders and involving other researchers. The majority of the interviews were tape-recorded, which allowed me to concentrate fully on asking questions and responding to the interviewees’ answers. In the few interviews that were not tape-recorded, most of which were conducted in the first phase of the DnB study, two researchers were present. This was useful, as we could discuss the interviews afterward and give each other feedback on our interviewing.

In hindsight, however, I wish that these interviews had been tape-recorded to maintain the accuracy and richness of the data. Hence, in the next phases of data collection, I tape-recorded all interviews, with two exceptions (people who strongly opposed the use of this device). I transcribed all the tape-recorded interviews in full myself, which kept me close to the data and gave me a good grasp of them.

Documentation

When organizations merge or make acquisitions, there is often a vast number of documents to choose from in building up an understanding of what has happened and for use in the analyses. Furthermore, merging and acquiring firms often hire external consultants, each of whom produces still more documents. Due to time constraints, it is seldom possible to collect and analyze all these documents, and thus the researcher has to make a selection.

The choice of documentation was guided by my previous experience with merger and acquisition processes and the research question. Hence, obtaining information on the postintegration process was more important than gaining access to the due-diligence analysis. As I learned about the process, I obtained more documents on specific issues. I did not, however, gain access to all the documents I asked for, and, in some cases, documents had been lost or shredded.

The documents were helpful in a number of ways. First, and most important, they served as input to the interview guide and saved me time, because I did not have to ask for facts in the interviews. Second, they were useful for tracing the history of the organizations and statements made by key people in them. Third, the documents helped to counteract the biases of the interviews. A list of the documents used in writing the cases is shown in Table 3 .

Observation

The major strength of direct observation is that it is unobtrusive and does not require direct interaction with participants (Adler and Adler 1994). Observation produces rigor when it is combined with other methods. When the researcher has access to group processes, direct observation can illuminate the discrepancies between what people said in the interviews and casual conversations and what they actually do (Pettigrew 1990).

As with interviews, there are a number of choices involved in conducting observations. Although I did some observation in the study, I used interviews as the key data collection source. The discussion of observation in this article will thus be somewhat limited. Nevertheless, I faced a number of choices in conducting observations, including the type of observation, when to enter, how much observation to conduct, and which groups to observe.

There are four ways in which an observer may gather data: (1) the complete participant, who operates covertly, concealing any intention to observe the setting; (2) the participant-as-observer, who forms relationships and participates in activities, but makes no secret of his or her intentions to observe events; (3) the observer-as-participant, who maintains only superficial contact with the people being studied; and (4) the complete observer, who merely stands back and eavesdrops on the proceedings (Waddington 1994).

In this study, I used the second and third modes of observing. The participant-as-observer mode, on which much ethnographic research is based, was used only to a limited extent. There were two reasons for this. First, I had limited time available for collecting data, and in my view interviews made more effective use of that time than extensive participant observation would have. Second, people were rather reluctant to let me observe these political and sensitive processes until they knew me better and felt I could be trusted. Indeed, I had to start the data collection before I had built sufficient trust to observe key groups in the integration process. Nevertheless, Gjensidige allowed me to study two employee seminars to acquaint myself with the organization. Here I acknowledged my role as an observer but participated fully in the activities. To achieve variation, I chose two seminars representing polar groups of employees.

As observer-as-participant, I attended a top management meeting at the end of the first data collection in Gjensidige and observed the respondents during interviews and in more informal meetings, such as lunches. All these observations gave me an opportunity to validate the data from the interviews. Observing the top management group was by far the most interesting and rewarding in terms of input.

Both DnB and Gjensidige began to open up to more extensive observation when I was about to finish the data collection. By then, I had built up the trust needed for this approach. Unfortunately, this came too late for me to take advantage of it.

DATA ANALYSIS

Published studies generally describe research sites and data collection methods but devote little space to discussing the analysis (Eisenhardt 1989). Thus, one cannot follow how a researcher arrived at the final conclusions from a large volume of field notes (Miles and Huberman 1994).

In this study, the data were reduced and analyzed in several stages: establishing the chronology, coding, writing up the data according to phases and themes, introducing organizational integration into the analysis, comparing the cases, and applying the theory. I will discuss each of these steps in turn.

The first step in the analysis was to establish the chronology of each case. To do this, I used internal and external documents. I wrote up the chronologies and included them as appendices in the final report.
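As a rough illustration of this step, a chronology can be thought of as dated events extracted from documents and sorted; the events, dates, and source labels below are invented for the example.

```python
# Hypothetical sketch of establishing a chronology: events pulled from
# internal and external documents are merged and sorted by date.
from datetime import date

events = [
    {"date": date(1990, 3, 1),  "source": "press article",  "event": "Merger announced"},
    {"date": date(1990, 1, 15), "source": "internal memo",  "event": "Board approves negotiations"},
    {"date": date(1991, 6, 30), "source": "newsletter",     "event": "Head offices combined"},
]

def build_chronology(events):
    """Return events in date order, ready to be written up as an appendix."""
    return sorted(events, key=lambda e: e["date"])

chronology = build_chronology(events)
# 'chronology' now lists the events from earliest to latest.
```

Sorting by date across sources makes gaps and inconsistencies between documents visible before the write-up.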

The next step was to code the data into phases and themes reflecting the contextual factors and features of integration. For the interviews, this implied marking the text with a specific phase and a theme, and grouping the paragraphs on the same theme and phase together. I followed the same procedure in organizing the documents.

I then wrote up the cases, using the phases and themes to structure them. Before writing, I scanned the information on each theme, built up the facts, and filled in perceptions and reactions that were illustrative and representative of the data.

The documents were primarily useful in establishing the facts, but they also provided me with some perceptions and reactions that were validated in the interviews. The documents used included internal letters and newsletters as well as articles from the press. The interviews were less factual, as intended, and gave me input to assess perceptions and reactions. The limited observation was useful to validate the data from the interviews. The result of this step was two descriptive cases.

To make each case more analytical, I introduced the three dimensions of organizational integration—integration of tasks, unification of power, and cultural integration—into the analysis. This helped to focus the case and to develop a framework that could be used to compare the cases. The cases were thus structured according to phases, organizational integration, and themes reflecting the factors and features in the study.

I took all these steps to become more familiar with each case as an individual entity. According to Eisenhardt (1989:540), this is a process that “allows the unique patterns of each case to emerge before the investigators push to generalise patterns across cases. In addition it gives investigators a rich familiarity with each case which, in turn, accelerates cross-case comparison.”

The comparison between the cases constituted the next step in the analysis. Here, I used the categories from the case chapters, filled in the features and factors, and compared and contrasted the findings. The idea behind cross-case searching tactics is to force investigators to go beyond initial impressions, especially through the use of structural and diverse lenses on the data. These tactics improve the likelihood of accurate and reliable theory, that is, theory with a close fit to the data (Eisenhardt 1989).
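The cross-case step can be pictured as a factor-by-case table in which the analyst looks for agreement and difference. The sketch below uses the study’s two cases and two of its integration dimensions, but the cell values are invented for illustration.

```python
# Hypothetical sketch of cross-case comparison: factors become rows,
# cases become columns, and entries are compared side by side.
cases = {
    "DnB":        {"task_integration": "gradual", "unification_of_power": "contested"},
    "Gjensidige": {"task_integration": "rapid",   "unification_of_power": "contested"},
}

def compare_cases(cases):
    """Return, per factor, the value in each case and whether the cases differ."""
    factors = sorted({f for c in cases.values() for f in c})
    table = {}
    for f in factors:
        values = {name: c.get(f) for name, c in cases.items()}
        table[f] = {"values": values, "differs": len(set(values.values())) > 1}
    return table

comparison = compare_cases(cases)
# Factors flagged as differing prompt a search for explanations beyond
# first impressions, in the spirit of Eisenhardt's (1989) cross-case tactics.
```

Forcing every factor into the same table is what pushes the analysis beyond initial impressions: similarities and contrasts must both be accounted for.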

As a result, I had a number of overall themes, concepts, and relationships that had emerged from the within-case analysis and cross-case comparisons. The next step was to compare these emergent findings with theory from the organizational field of mergers and acquisitions, as well as other relevant perspectives.

This method of generalization is known as analytical generalization. In this approach, a previously developed theory is used as a template with which to compare the empirical results of the case study (Yin 1989). This comparison of emergent concepts, theory, or hypotheses with the extant literature involves asking what it is similar to, what it contradicts, and why. The key to this process is to consider a broad range of theory (Eisenhardt 1989). On the whole, linking emergent theory to existent literature enhances the internal validity, generalizability, and theoretical level of theory-building from case research.

According to Eisenhardt (1989), examining literature that conflicts with the emergent theory is important for two reasons. First, the chance of neglecting conflicting findings is reduced. Second, “conflicting results forces researchers into a more creative, frame-breaking mode of thinking than they might otherwise be able to achieve” (p. 544). Similarly, Eisenhardt (1989) claims that literature discussing similar findings is important because it ties together underlying similarities in phenomena not normally associated with each other. The result is often a theory with stronger internal validity, wider generalizability, and a higher conceptual level.

The analytical generalization in the study included exploring and developing the concepts and examining the relationships between the constructs. In carrying out this analytical generalization, I acted on Eisenhardt’s (1989) recommendation to use a broad range of theory. First, I compared and contrasted the findings with the organizational stream of the mergers and acquisitions literature. Then I discussed other relevant literatures, including strategic change, power and politics, social justice, and social identity theory, to explore how these perspectives could contribute to an understanding of the findings. Finally, I discussed the findings that could not be explained by either the merger and acquisition literature or the four theoretical perspectives.

In every scientific study, questions are raised about whether the study is valid and reliable. The issues of validity and reliability in case studies are just as important as for more deductive designs, but the application is fundamentally different.

VALIDITY AND RELIABILITY

The problems of validity in qualitative studies are related to the fact that most qualitative researchers work alone in the field, focus on the findings rather than on describing how those findings were reached, and are limited in their capacity to process information (Miles and Huberman 1994).

Researchers writing about qualitative methods have questioned whether the same criteria can be used for qualitative and quantitative studies (Kirk and Miller 1986; Sykes 1990; Maxwell 1992). The problem with the validity criteria suggested for qualitative research is that there is little consistency across articles, as each author suggests a new set of criteria.

One approach in examining validity and reliability is to apply the criteria used in quantitative research. Hence, the criteria to be examined here are objectivity/intersubjectivity, construct validity, internal validity, external validity, and reliability.

Objectivity/Intersubjectivity

The basic issue of objectivity can be framed as one of relative neutrality and reasonable freedom from unacknowledged research biases (Miles & Huberman 1994). In a real-time longitudinal study, the researcher is in danger of losing objectivity and of becoming too involved with the organization, the people, and the process. Hence, Leonard-Barton (1990) claims that one may be perceived as, and may even become, an advocate rather than an observer.

According to King (1994), however, qualitative research, in seeking to describe and make sense of the world, does not require researchers to strive for objectivity and distance themselves from research participants. Indeed, to do so would make good qualitative research impossible, as the interviewer’s sensitivity to subjective aspects of his or her relationship with the interviewee is an essential part of the research process (King 1994:31).

This does not imply, however, that the issue of possible research bias can be ignored. It is just as important as in a structured quantitative interview that the findings are not simply the product of the researcher’s prejudices and prior experience. One way to guard against this bias is for the researcher to explicitly recognize his or her presuppositions and to make a conscious effort to set these aside in the analysis (Gummesson 1988). Furthermore, rival conclusions should be considered (Miles & Huberman 1994).

My experience from the first phase of the DnB study was that it was difficult to focus the questions and the analysis of the data when the research questions were too vague and broad. As such, developing a framework before collecting the data for the study was useful in guiding the collection and analysis of data. Nevertheless, it was important to be open-minded and receptive to new and surprising data. In the DnB study, for example, the positive effect of the reorganization process on the integration of cultures came as a complete surprise to me and thus needed further elaboration.

I also consciously searched for negative evidence and problems by interviewing outliers (Miles & Huberman 1994) and asking problem-oriented questions. In Gjensidige, the first interviews with the top management revealed a much more positive perception of the cultural integration process than I had expected. To explore whether this was a result of overreliance on elite informants, I continued posing problem-oriented questions to outliers and people at lower levels in the organization. Moreover, I told them about the DnB study to be explicit about my presuppositions.

Another important issue when assessing objectivity is whether other researchers can trace the interpretations made in the case studies, or what is called intersubjectivity. To deal with this issue, Miles & Huberman (1994) suggest that: (1) the study’s general methods and procedures should be described in detail, (2) one should be able to follow the process of analysis, (3) conclusions should be explicitly linked with exhibits of displayed data, and (4) the data from the study should be made available for reanalysis by others.

In response to these requirements, I described the study’s data collection procedures and processing in detail. Then, the primary data were displayed in the written report in the form of quotations and extracts from documents to support and illustrate the interpretations of the data. Because the study was written up in English, I included the Norwegian text in a separate appendix. Finally, all the primary data from the study were accessible to a small group of distinguished researchers.

Construct Validity

Construct validity refers to whether there is substantial evidence that the theoretical paradigm correctly corresponds to observation (Kirk & Miller 1986). In this form of validity, the issue is the legitimacy of the application of a given concept or theory to established facts.

The strength of qualitative research lies in the flexible and responsive interaction between the interviewer and the respondents (Sykes 1990). Thus, meaning can be probed, topics covered easily from a number of angles, and questions made clear for respondents. This is an advantage for exploring the concepts (construct or theoretical validity) and the relationships between them (internal validity). Similarly, Hakim (1987) says the great strength of qualitative research is the validity of data obtained because individuals are interviewed in sufficient detail for the results to be taken as true, correct, and believable reports of their views and experiences.

Construct validity can be strengthened by applying a longitudinal multicase approach, triangulation, and the use of feedback loops. The advantage of a longitudinal approach is that one gets the opportunity to test the sensitivity of construct measures to the passage of time. Leonard-Barton (1990), for example, found that one of her main constructs, communicability, varied across time and relative to different groups of users. Thus, the longitudinal study aided in defining the construct more precisely. By using more than one case study, one can validate the stability of a construct across situations (Leonard-Barton 1990). Since my study consists of only two cases, the opportunity to test the stability of constructs across cases is somewhat limited. However, the use of more than one unit of analysis helps to overcome this limitation.

Construct validity is strengthened by the use of multiple sources of evidence to build construct measures, which define the construct and distinguish it from other constructs. These multiple sources of evidence can include multiple viewpoints within and across the data sources. My study responds to these requirements in its sampling of interviewees and uses of multiple data sources.

Use of feedback loops implies returning to interviewees with interpretations and developing theory and actively seeking contradictions in data (Crabtree & Miller 1992; King 1994). In DnB, the written report had to be approved by the bank’s top management after the first data collection. Apart from one minor correction, the bank had no objections to the established facts. In their comments on my analysis, some of the top managers expressed the view that the political process had been overemphasized, and that the CEO’s role in initiating a strategic process was undervalued. Hence, an important objective in the second data collection was to explore these comments further. Moreover, the report was not as positive as the management had hoped for, and negotiations had to be conducted to publish the report. The result of these negotiations was that publication of the report was postponed one-and-a-half years.

The experiences from the first data collection in DnB had some consequences. I was more cautious and brought up the problems of confidentiality and the need to publish at the outset of the Gjensidige study. Also, I had to struggle to get access to the DnB case for the second data collection, and some of the information I asked for was not released. At Gjensidige, I sent a preliminary draft of the case chapter to the corporation’s top management for comments, in addition to conducting second interviews with a small number of people. Besides testing the factual description, these sessions gave me the opportunity to test the theoretical categories established as a result of the within-case analysis.

Internal Validity

Internal validity concerns the validity of the postulated relationships among the concepts. The main problem of internal validity as a criterion in qualitative research is that it is often not open to scrutiny. According to Sykes (1990), the researcher can always provide a plausible account and, with careful editing, may ensure its coherence. Recognition of this problem has led to calls for better documentation of the processes of data collection, the data itself, and the interpretative contribution of the researcher. How I met these requirements was outlined in the section on objectivity/intersubjectivity above.

However, there are some advantages in using qualitative methods, too. First, the flexible and responsive methods of data collection allow cross-checking and amplification of information from individual units as it is generated. Respondents’ opinions and understandings can be thoroughly explored. The internal validity results from strategies that eliminate ambiguity and contradiction, filling in detail and establishing strong connections in data.

Second, a longitudinal study enables one to track cause and effect. Moreover, it can make one aware of intervening variables (Leonard-Barton 1990). Eisenhardt (1989:542) states, “Just as in hypothesis-testing research, an apparent relationship may simply be a spurious correlation or may reflect the impact of some third variable on each of the other two. Therefore, it is important to discover the underlying reasons for why the relationship exists.”

Generalizability

According to Mitchell (1983), case studies are not based on statistical inference. Quite the contrary, the inferring process turns exclusively on the theoretically necessary links among the features in the case study. The validity of the extrapolation depends not on the typicality or representativeness of the case but on the cogency of the theoretical reasoning. Hartley (1994:225) claims, “The detailed knowledge of the organization and especially the knowledge about the processes underlying the behaviour and its context can help to specify the conditions under which behaviour can be expected to occur. In other words, the generalisation is about theoretical propositions not about populations.”

Generalizability is normally based on the assumption that the theory may be useful in making sense of similar persons or situations (Maxwell 1992). One way to increase generalizability is to apply a multicase approach (Leonard-Barton 1990). The advantage of this approach is that one can replicate the findings from one case study in another. This replication logic is similar to that used in multiple experiments (Yin 1993).

Given the choice of two case studies, the generalizability criterion is not supported in this study. Through the discussion of my choices, I have tried to show that I had to strike a balance between the need for depth and for mapping changes over time, on the one hand, and the number of cases, on the other. In doing so, I deliberately chose to provide a deeper and richer look at each case, allowing the reader to judge the applicability of the findings rather than making a case for generalizability.

Reliability

Reliability focuses on whether the process of the study is consistent and reasonably stable over time and across researchers and methods (Miles & Huberman 1994). In the context of qualitative research, reliability is concerned with two questions (Sykes 1990): Could the same study carried out by two researchers produce the same findings? and Could a study be repeated using the same researcher and respondents to yield the same findings?

The problem of reliability in qualitative research is that differences between replicated studies using different researchers are to be expected. However, while it may not be surprising that different researchers generate different findings and reach different conclusions, controlling for reliability may still be relevant. Kirk and Miller’s (1986:311) definition takes into account the particular relationship between the researcher’s orientation, the generation of data, and its interpretation:

For reliability to be calculated, it is incumbent on the scientific investigator to document his or her procedure. This must be accomplished at such a level of abstraction that the loci of decisions internal to the project are made apparent. The curious public deserves to know how the qualitative researcher prepares him or herself for the endeavour, and how the data is collected and analysed.

The study addresses these requirements by discussing my point of departure regarding experience and framework, the sampling and data collection procedures, and data analysis.

CONCLUSIONS

Case studies are often said to lack academic rigor and are, as such, regarded as inferior to more rigorous methods for which there are more specific guidelines for collecting and analyzing data. These criticisms stress the need to be very explicit about the choices one makes and to justify them.

One reason case studies are criticized may be that researchers disagree about the definition and purpose of the case study. Case studies have been regarded as a design (Cook and Campbell 1979), as a qualitative methodology (Cassell and Symon 1994), as a particular data collection procedure (Andersen 1997), and as a research strategy (Yin 1989). Furthermore, the purpose of carrying out case studies is unclear. Some regard case studies as supplements to more rigorous quantitative studies, to be carried out in the early stage of the research process; others claim that the case study can be used for multiple purposes and as a research strategy in its own right (Gummesson 1988; Yin 1989). Given this unclear status, researchers need to be very clear about their interpretation of the case study and their purpose in carrying out the study.

This article has taken Yin’s (1989) definition of the case study as a research strategy as a starting point and argued that the choice of the case study should be guided by the research question(s). In the illustrative study, I used a case study strategy because of a need to explore sensitive, ill-defined concepts in depth, over time, taking into account the context and history of the mergers and the existing knowledge about the phenomenon. However, the choice of a case study strategy extended rather than limited the number of decisions to be made. In Schramm’s (1971, cited in Yin 1989:22–23) words, “The essence of a case study, the central tendency among all types of case study, is that it tries to illuminate a decision or set of decisions, why they were taken, how they were implemented, and with what result.”

Hence, the purpose of this article has been to illustrate the wide range of decisions that need to be made in the context of a particular case study and to discuss the methodological considerations linked to these decisions. I argue that there is a particular need in case studies to be explicit about the methodological choices one makes and that these choices can be best illustrated through a case study of the case study strategy.

As in all case studies, however, there are limitations to the generalizability of using one particular case study for illustrative purposes. As such, the strength of linking the methodological considerations to a specific context and phenomenon also becomes a weakness. However, I would argue that the questions raised in this article are applicable to many case studies, but that the answers are very likely to vary. The design choices are shown in Table 4 . Hence, researchers choosing a longitudinal, comparative case study need to address the same set of questions with regard to design, data collection procedures, and analysis, but they are likely to come up with other conclusions, given their different research questions.

Adler, P. A., and P. Adler. 1994. Observational techniques. In Handbook of qualitative research, edited by N. K. Denzin and Y. S. Lincoln, 377–92. London: Sage.

Andersen, S. S. 1997. Case-studier og generalisering: Forskningsstrategi og design (Case studies and generalization: Research strategy and design). Bergen, Norway: Fagbokforlaget.

Blake, R. R., and J. S. Mouton. 1985. How to achieve integration on the human side of the merger. Organizational Dynamics 13 (3): 41–56.

Bryman, A., and R. G. Burgess. 1999. Qualitative research. London: Sage.

Buono, A. F., and J. L. Bowditch. 1989. The human side of mergers and acquisitions. San Francisco: Jossey-Bass.

Cartwright, S., and C. L. Cooper. 1993. The psychological impact of mergers and acquisitions on the individual: A study of building society managers. Human Relations 46 (3): 327–47.

Cassell, C., and G. Symon, eds. 1994. Qualitative methods in organizational research: A practical guide. London: Sage.

Cook, T. D., and D. T. Campbell. 1979. Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin.

Crabtree, B. F., and W. L. Miller. 1992. Primary care research: A multimethod typology and qualitative road map. In Doing qualitative research: Methods for primary care, edited by B. F. Crabtree and W. L. Miller, 3–28. Vol. 3. Thousand Oaks, CA: Sage.

Creswell, J. W. 1998. Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Denzin, N. K., and Y. S. Lincoln. 2000. Handbook of qualitative research. London: Sage.

Eisenhardt, K. M. 1989. Building theories from case study research. Academy of Management Review 14 (4): 532–50.

Flick, U. 1998. An introduction to qualitative research. London: Sage.

George, A. L. 1979. Case studies and theory development: The method of structured, focused comparison. In Diplomacy: New approaches in history, theory, and policy, edited by P. G. Lauren, 43–68. New York: Free Press.

Gioia, D. A., and K. Chittipeddi. 1991. Sensemaking and sensegiving in strategic change initiation. Strategic Management Journal 12:433–48.

Glaser, B. G., and A. L. Strauss. 1967. The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

Glick, W. H., G. P. Huber, C. C. Miller, D. H. Doty, and K. M. Sutcliffe. 1990. Studying changes in organizational design and effectiveness: Retrospective event histories and periodic assessments. Organization Science 1 (3): 293–312.

Gummesson, E. 1988. Qualitative methods in management research. Lund, Norway: Studentlitteratur, Chartwell-Bratt.

Hakim, C. 1987. Research design. Strategies and choices in the design of social research. Boston: Unwin Hyman.

Hamel, J., S. Dufour, and D. Fortin. 1993. Case study methods. London: Sage.

Hartley, J. F. 1994. Case studies in organizational research. In Qualitative methods in organizational research: A practical guide, edited by C. Cassell and G. Symon, 209–29. London: Sage.

Haspeslaph, P., and D. B. Jemison. 1991. The challenge of renewal through acquisitions. Planning Review 19 (2): 27–32.

King, N. 1994. The qualitative research interview. In Qualitative methods in organizational research: A practical guide, edited by C. Cassell and G. Symon, 14–36. London: Sage.

Kirk, J., and M. L. Miller. 1986. Reliability and validity in qualitative research. Qualitative Research Methods Series 1. London: Sage.

Leonard-Barton, D. 1990.Adual methodology for case studies: Synergistic use of a longitudinal single site with replicated multiple sites. Organization Science 1 (3): 248–66.

Marshall, C., and G. B. Rossman. 1999. Designing qualitative research. London: Sage.

Maxwell, J. A. 1992. Understanding and validity in qualitative research. Harvard Educational Review 62 (3): 279–99.

Miles, M. B., and A. M. Huberman. 1994. Qualitative data analysis. 2d ed. London: Sage.

Mitchell, J. C. 1983. Case and situation analysis. Sociology Review 51 (2): 187–211.

Nachmias, C., and D. Nachmias. 1981. Research methods in the social sciences. London: Edward Arnhold.

Pettigrew, A. M. 1990. Longitudinal field research on change: Theory and practice. Organization Science 1 (3): 267–92.

___. (1992). The character and significance of strategic process research. Strategic Management Journal 13:5–16.

Rossman, G. B., and S. F. Rallis. 1998. Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Schramm, W. 1971. Notes on case studies for instructional media projects. Working paper for Academy of Educational Development, Washington DC.

Schweiger, D. M., and J. P. Walsh. 1990. Mergers and acquisitions: An interdisciplinary view. In Research in personnel and human resource management, edited by G. R. Ferris and K. M. Rowland, 41–107. Greenwich, CT: JAI.

Silverman, D. 2000. Doing qualitative research: A practical handbook. London: Sage.

Stake, R. E. 1995. The art of case study research. London: Sage.

Strauss, A. L., and J. Corbin. 1990. Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

Sykes, W. 1990. Validity and reliability in qualitative market research: A review of the literature. Journal of the Market Research Society 32 (3): 289–328.

Waddington, D. 1994. Participant observation. In Qualitative methods in organizational research, edited by C. Cassell and G. Symon, 107–22. London: Sage.

Yin, R. K. 1989. Case study research: Design and methods. Applied Social Research Series, Vol. 5. London: Sage.

___. 1993. Applications of case study research. Applied Social Research Series, Vol. 34. London: Sage.

Christine Benedichte Meyer is an associate professor in the Department of Strategy and Management in the Norwegian School of Economics and Business Administration, Bergen-Sandviken, Norway. Her research interests are mergers and acquisitions, strategic change, and qualitative research. Recent publications include: “Allocation Processes in Mergers and Acquisitions: An Organisational Justice Perspective” (British Journal of Management 2001) and “Motives for Acquisitions in the Norwegian Financial Industry” (CEMS Business Review 1997).


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

White, R.E., Cooper, K. (2022). Case Study Research. In: Qualitative Research in the Post-Modern Era. Springer, Cham. https://doi.org/10.1007/978-3-030-85124-8_7


Published: 29 September 2022

Print ISBN: 978-3-030-85126-2

Online ISBN: 978-3-030-85124-8


Ch 2: Psychological Research Methods

Children sit in front of a bank of television screens. A sign on the wall says, “Some content may not be suitable for children.”

Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

The topic of violence in the media today is contentious. Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of quickly changing technologies, questions about the effects of media continue to emerge. Is it okay to talk on a cell phone while driving? Is it safe to wear headphones while driving? What impact does text messaging have on reaction time while driving? These are the types of questions that psychologist David Strayer asks in his lab.

Watch this short video to see how Strayer utilizes the scientific method to reach important conclusions regarding technology and driving safety.

You can view the transcript for “Understanding driver distraction” here (opens in new window).

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.

Introduction to the Scientific Method

Learning Objectives

  • Explain the steps of the scientific method
  • Describe why the scientific method is important to psychology
  • Summarize the processes of informed consent and debriefing
  • Explain how research involving humans or animals is regulated

photograph of the word "research" from a dictionary with a pen pointing at the word.

Scientists are engaged in explaining and understanding how the world around them works, and they are able to do so by coming up with theories that generate hypotheses that are testable and falsifiable. Theories that stand up to their tests are retained and refined, while those that do not are discarded or modified. In this way, research enables scientists to separate fact from simple opinion. Having good information generated from research aids in making wise decisions both in public policy and in our personal lives. In this section, you’ll see how psychologists use the scientific method to study and understand behavior.

The Scientific Process

A skull has a large hole bored through the forehead.

The goal of all scientists is to better understand the world around them. Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior. In contrast to other methods that people use to understand the behavior of others, such as intuition and personal experience, the hallmark of scientific research is that there is evidence to support a claim. Scientific knowledge is empirical : It is grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing.

While behavior is observable, the mind is not. If someone is crying, we can see the behavior. However, the reason for the behavior is more difficult to determine. Is the person crying due to being sad, in pain, or happy? Sometimes we can learn the reason for someone’s behavior by simply asking a question, like “Why are you crying?” However, there are situations in which an individual is either uncomfortable or unwilling to answer the question honestly, or is incapable of answering. For example, infants would not be able to explain why they are crying. In such circumstances, the psychologist must be creative in finding ways to better understand behavior. This module explores how scientific knowledge is generated, and how important that knowledge is in forming decisions in our personal lives and in the public domain.

Process of Scientific Research

Flowchart of the scientific method. It begins with make an observation, then ask a question, form a hypothesis that answers the question, make a prediction based on the hypothesis, do an experiment to test the prediction, analyze the results, prove the hypothesis correct or incorrect, then report the results.

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

The basic steps in the scientific method are:

  • Observe a natural phenomenon and define a question about it
  • Make a hypothesis, or potential solution to the question
  • Test the hypothesis
  • If the hypothesis is supported, seek additional evidence or look for counter-evidence
  • If the hypothesis is not supported, revise it or form a new hypothesis and try again
  • Draw conclusions and repeat: the scientific method is never-ending, and no result is ever considered final

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. By making observations, a researcher can define a useful question. After finding a question to answer, the researcher can then make a prediction (a hypothesis) about what he or she thinks the answer will be. This prediction is usually a statement about the relationship between two or more variables. After making a hypothesis, the researcher will then design an experiment to test his or her hypothesis and evaluate the data gathered. These data will either support or refute the hypothesis. Based on the conclusions drawn from the data, the researcher will then find more evidence to support the hypothesis, look for counter-evidence to further strengthen the hypothesis, revise the hypothesis and create a new experiment, or continue to incorporate the information gathered to answer the research question.
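The cycle just described can be sketched in code. The following is a minimal, purely illustrative simulation (the scenario, group names, and numbers are all invented): we hypothesize that well-rested students score higher on a test than sleep-deprived students, generate simulated data, and check whether the observed difference is in the predicted direction.

```python
import random
import statistics

random.seed(42)  # fixed seed so the "study" is reproducible

def run_study(n_per_group=100):
    """Simulate one test of the hypothesis: rested students score higher."""
    # Simulated scores: rested students average 78, sleep-deprived 72,
    # both with the same spread (these numbers are invented).
    rested = [random.gauss(78, 10) for _ in range(n_per_group)]
    sleep_deprived = [random.gauss(72, 10) for _ in range(n_per_group)]
    return statistics.mean(rested) - statistics.mean(sleep_deprived)

diff = run_study()
# The hypothesis predicts a positive difference; a single supportive result
# is not proof -- the method calls for replication and attempts at refutation.
print(f"Observed difference: {diff:.1f} points")
```

Rerunning `run_study()` with a different seed or a larger sample corresponds to the "find more evidence" step: one supportive result never ends the cycle.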

Basic Principles of the Scientific Method

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. As specific hypotheses are tested, theories are modified and refined to reflect and incorporate the result of these tests.

A diagram has four boxes: the top is labeled “theory,” the right is labeled “hypothesis,” the bottom is labeled “research,” and the left is labeled “observation.” Arrows flow in the direction from top to right to bottom to left and back to the top, clockwise. The top right arrow is labeled “use the hypothesis to form a theory,” the bottom right arrow is labeled “design a study to test the hypothesis,” the bottom left arrow is labeled “perform the research,” and the top left arrow is labeled “create or modify the theory.”

Other key components in following the scientific method include verifiability, predictability, falsifiability, and fairness. Verifiability means that an experiment must be replicable by another researcher. To achieve verifiability, researchers must make sure to document their methods and clearly explain how their experiment is structured and why it produces certain results.

Predictability in a scientific theory implies that the theory should enable us to make predictions about future events. The precision of these predictions is a measure of the strength of the theory.

Falsifiability refers to whether a hypothesis can be disproved. For a hypothesis to be falsifiable, it must be logically possible to make an observation or perform an experiment that could show the hypothesis to be wrong. Falsifiability requires only that a hypothesis can be tested, not that it has been shown to be false: a hypothesis may survive every test conducted so far and still be disproved by future testing.

To determine whether a hypothesis is supported or not supported, psychological researchers must conduct hypothesis testing using statistics. Hypothesis testing is a statistical procedure that determines how likely it is that the observed results would have occurred by random chance alone. If hypothesis testing reveals that the results were “statistically significant,” this means that the probability of obtaining the results by chance is low, so the researchers can be reasonably confident that the data support the hypothesis. If the results are not statistically significant, the researchers’ hypothesis was not supported.
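To make “statistically significant” concrete, here is a small permutation test, one standard way of estimating the probability that a group difference arose by chance. The data are invented for illustration; real analyses often use a t-test instead.

```python
import random
import statistics

random.seed(0)

# Hypothetical recall scores for two groups (numbers invented).
treatment = [14, 18, 21, 19, 23, 17, 20]
control = [12, 15, 14, 16, 13, 17, 15]

observed = statistics.mean(treatment) - statistics.mean(control)

# Permutation test: if the group labels were meaningless, how often would a
# random relabeling produce a difference at least as large as the observed one?
pooled = treatment + control
n = len(treatment)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed:
        count += 1

p_value = count / trials
# A small p-value (conventionally < .05) means a difference this large would
# rarely arise by chance alone, so the result is statistically significant.
print(f"observed difference = {observed:.2f}, p ~ {p_value:.4f}")
```

Note what the p-value is: the probability of results at least this extreme under chance, not the probability that the hypothesis is true.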

Fairness implies that all data must be considered when evaluating a hypothesis. A researcher cannot pick and choose what data to keep and what to discard or focus specifically on data that support or do not support a particular hypothesis. All data must be accounted for, even if they invalidate the hypothesis.

Applying the Scientific Method

To see how this process works, let’s consider a specific theory and a hypothesis that might be generated from that theory. As you’ll learn in a later module, the James-Lange theory of emotion asserts that emotional experience relies on the physiological arousal associated with the emotional state. If you walked out of your home and discovered a very aggressive snake waiting on your doorstep, your heart would begin to race and your stomach churn. According to the James-Lange theory, these physiological changes would result in your feeling of fear. A hypothesis that could be derived from this theory might be that a person who is unaware of the physiological arousal that the sight of the snake elicits will not feel fear.

Remember that a good scientific hypothesis is falsifiable, or capable of being shown to be incorrect. Recall from the introductory module that Sigmund Freud had lots of interesting ideas to explain various human behaviors (Figure 5). However, a major criticism of Freud’s theories is that many of his ideas are not falsifiable; for example, it is impossible to imagine empirical observations that would disprove the existence of the id, the ego, and the superego—the three elements of personality described in Freud’s theories. Despite this, Freud’s theories are widely taught in introductory psychology texts because of their historical significance for personality psychology and psychotherapy, and they influenced many later forms of therapy.

(a)A photograph shows Freud holding a cigar. (b) The mind’s conscious and unconscious states are illustrated as an iceberg floating in water. Beneath the water’s surface in the “unconscious” area are the id, ego, and superego. The area just below the water’s surface is labeled “preconscious.” The area above the water’s surface is labeled “conscious.”

In contrast, the James-Lange theory does generate falsifiable hypotheses, such as the one described above. Some individuals who suffer significant injuries to their spinal columns are unable to feel the bodily changes that often accompany emotional experiences. Therefore, we could test the hypothesis by determining how emotional experiences differ between individuals who have the ability to detect these changes in their physiological arousal and those who do not. In fact, this research has been conducted and while the emotional experiences of people deprived of an awareness of their physiological arousal may be less intense, they still experience emotion (Chwalisz, Diener, & Gallagher, 1988).

Link to Learning

Why the Scientific Method Is Important for Psychology

The use of the scientific method is one of the main features that separates modern psychology from earlier philosophical inquiries about the mind. Compared to chemistry, physics, and other “natural sciences,” psychology has long been considered one of the “social sciences” because of the subjective nature of the things it seeks to study. Many of the concepts that psychologists are interested in—such as aspects of the human mind, behavior, and emotions—are subjective and cannot be directly measured. Psychologists often rely instead on behavioral observations and self-reported data, which are considered by some to be illegitimate or lacking in methodological rigor. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information.

The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers. Through replication of experiments, new generations of psychologists can reduce errors and broaden the applicability of theories. It also allows theories to be tested and validated instead of simply being conjectures that could never be verified or falsified. All of this allows psychologists to gain a stronger understanding of how the human mind works.

Scientific articles published in journals and psychology papers written in the style of the American Psychological Association (i.e., in “APA style”) are structured around the scientific method. These papers include an Introduction, which presents background information and outlines the hypotheses; a Methods section, which details how the experiment was conducted to test the hypothesis; a Results section, which reports the statistics used to test the hypothesis and states whether it was supported or not supported; and a Discussion and Conclusion, which state the implications of finding support, or no support, for the hypothesis. Writing articles and papers that adhere to the scientific method makes it easy for future researchers to repeat the study and attempt to replicate the results.

Ethics in Research

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the Tuskegee Syphilis Study, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Research Involving Human Participants

Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an institutional review board (IRB) . The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members (Figure 6). The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned above in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

A photograph shows a group of people seated around tables in a meeting room.

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent  form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing  upon conclusion of the study—complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Dig Deeper: Ethics and the Tuskegee Syphilis Study

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, poor, rural, black, male sharecroppers from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in black men (Figure 7). In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, the individuals who tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and, in some cases, their children) and eventually died because they never received treatment for the disease. The study was discontinued in 1972, when it was exposed by the national press (Tuskegee University, n.d.). The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. Why is this study unethical? How were the men who participated and their families harmed as a function of this research?

A photograph shows a person administering an injection.

Learn more about the Tuskegee Syphilis Study on the CDC website .

Research Involving Animal Subjects

A photograph shows a rat.

Many psychological studies involve animal subjects, but this does not mean that animal researchers are immune to ethical concerns. Indeed, the humane and ethical treatment of animal research subjects is a critical aspect of this type of research. Researchers must design their experiments to minimize any pain or distress experienced by animals serving as research subjects.

Whereas IRBs review research proposals that involve human participants, animal experimental proposals are reviewed by an Institutional Animal Care and Use Committee (IACUC). An IACUC consists of institutional administrators, scientists, veterinarians, and community members. This committee is charged with ensuring that all experimental protocols provide for the humane treatment of animal research subjects. It also conducts semi-annual inspections of all animal facilities to ensure that the research protocols are being followed. No animal research project can proceed without the committee’s approval.

Introduction to Approaches to Research

  • Differentiate between descriptive, correlational, and experimental research
  • Explain the strengths and weaknesses of case studies, naturalistic observation, and surveys
  • Describe the strengths and weaknesses of archival research
  • Compare longitudinal and cross-sectional approaches to research
  • Explain what a correlation coefficient tells us about the relationship between variables
  • Describe why correlation does not mean causation
  • Describe the experimental process, including ways to control for bias
  • Identify and differentiate between independent and dependent variables

Three researchers review data while talking around a microscope.

Psychologists use descriptive, experimental, and correlational methods to conduct research. Descriptive, or qualitative, methods include the case study, naturalistic observation, surveys, archival research, longitudinal research, and cross-sectional research.

Experiments are conducted in order to determine cause-and-effect relationships. In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not exposed to the experimental manipulation. The researcher then measures the changes that are produced in the dependent variable in each group. Once data are collected from both groups, they are analyzed statistically to determine if there are meaningful differences between the groups.
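The design just described can be sketched as a simulation. Everything here is hypothetical (the group sizes, the effect of the manipulation, and the scoring function are invented); the point is the structure: random assignment, one group exposed to the manipulation, and a comparison of the dependent variable across groups.

```python
import random
import statistics

random.seed(1)

# Forty hypothetical participant IDs, randomly assigned to two groups.
participants = list(range(40))
random.shuffle(participants)  # random assignment controls for pre-existing differences
experimental_group = participants[:20]  # exposed to the experimental manipulation
control_group = participants[20:]       # not exposed

def measure_dv(exposed: bool) -> float:
    """Simulated dependent-variable score: a baseline, plus a small effect
    of the manipulation, plus random individual variation."""
    effect = 3.0 if exposed else 0.0
    return 50.0 + effect + random.gauss(0, 5)

exp_scores = [measure_dv(True) for _ in experimental_group]
ctl_scores = [measure_dv(False) for _ in control_group]

# Because the only systematic difference between the groups is the
# manipulation, a reliable difference in means can be attributed to the
# independent variable.
print(statistics.mean(exp_scores) - statistics.mean(ctl_scores))
```

Random assignment is the key design choice here: shuffling before splitting means any pre-existing difference between participants is spread across both groups by chance rather than by selection.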

When scientists passively observe and measure phenomena it is called correlational research. Here, psychologists do not intervene and change behavior, as they do in experiments. In correlational research, they identify patterns of relationships, but usually cannot infer what causes what. Note that a correlation coefficient describes the relationship between exactly two variables, although a single correlational study may measure many variables and examine the correlation between each pair.
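A correlation coefficient itself is easy to compute. The sketch below calculates Pearson's r by hand for invented data (hours studied vs. exam score). A value near +1 indicates a strong positive relationship, near −1 a strong negative one, and near 0 no linear relationship; none of these values, by itself, says anything about causation.

```python
import math

# Invented data: hours studied and exam scores for eight students.
hours = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 58, 70, 74, 73, 80]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hours, scores)
# r is close to +1 here, but correlation alone cannot establish that studying
# causes higher scores: a third variable could drive both.
print(round(r, 2))  # → 0.96
```

A strong r like this one is exactly the situation where the caution above matters: only an experiment with random assignment could show that studying causes the higher scores.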

Watch It: More on Research

If you enjoy learning through lectures and want an interesting and comprehensive summary of this section, then click on the YouTube link to watch a lecture given by MIT Professor John Gabrieli. Start at the 30:45 mark and watch through the end to hear examples of actual psychological studies and how they were analyzed. Listen for references to independent and dependent variables, experimenter bias, and double-blind studies. In the lecture, you’ll learn about breaking social norms, “WEIRD” research, why expectations matter, how a warm cup of coffee might make you nicer, why you should change your answer on a multiple choice test, and why praise for intelligence won’t make you any smarter.

You can view the transcript for “Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011” here (opens in new window) .

Descriptive Research

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Other approaches involve interactions between the researcher and the individuals who are being studied—ranging from a series of simple questions to extensive, in-depth interviews—to well-controlled experiments.

The three main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called descriptive, or qualitative, studies. These studies are used to describe general or specific behaviors and attributes that are observed and measured. In the early stages of research it might be difficult to form a hypothesis, especially when there is not any existing literature in the area. In these situations designing an experiment would be premature, as the question of interest is not yet clearly defined as a hypothesis. Often a researcher will begin with a non-experimental approach, such as a descriptive study, to gather more information about the topic before designing an experiment or correlational study to address a specific hypothesis. Descriptive research is distinct from correlational research, in which psychologists formally test whether a relationship exists between two or more variables. Experimental research goes a step further beyond descriptive and correlational research and randomly assigns people to different conditions, using hypothesis testing to make inferences about how these conditions affect behavior. It aims to determine if one variable directly impacts and causes another. Correlational and experimental research both typically use hypothesis testing, whereas descriptive research does not.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions. For example, studies that rely primarily on observation produce incredible amounts of information, but the ability to apply this information to the larger population is somewhat limited because of small sample sizes. Survey research, on the other hand, allows researchers to easily collect data from relatively large samples. While this allows for results to be generalized to the larger population more easily, the information that can be collected on any given survey is somewhat limited and subject to problems associated with any type of self-reported data. Some researchers conduct archival research by using existing records. While this can be a fairly inexpensive way to collect data that can provide insight into a number of research questions, researchers using this approach have no control over how or what kind of data were collected.

Correlational research can find a relationship between two variables, but the only way a researcher can claim that the relationship between the variables is cause and effect is to perform an experiment. In experimental research, which will be discussed later in the text, there is a tremendous amount of control over variables of interest. While this is a powerful approach, experiments are often conducted in very artificial settings. This calls into question the validity of experimental findings with regard to how they would apply in real-world settings. In addition, many of the questions that psychologists would like to answer cannot be pursued through experimental research because of ethical concerns.

The three main types of descriptive studies are naturalistic observation, case studies, and surveys.

Naturalistic Observation

If you want to understand how behavior occurs, one of the best ways to gain information is to simply observe the behavior in its natural context. However, people might change their behavior in unexpected ways if they know they are being observed. How do researchers obtain accurate information when people tend to hide their natural behavior? As an example, imagine that your professor asks everyone in your class to raise their hand if they always wash their hands after using the restroom. Chances are that almost everyone in the classroom will raise their hand, but do you think hand washing after every trip to the restroom is really that universal?

This is very similar to the phenomenon mentioned earlier in this module: many individuals do not feel comfortable answering a question honestly. But if we are committed to finding out the facts about hand washing, we have other options available to us.

Suppose we send a classmate into the restroom to actually watch whether everyone washes their hands after using the restroom. Will our observer blend into the restroom environment by wearing a white lab coat, sitting with a clipboard, and staring at the sinks? We want our researcher to be inconspicuous—perhaps standing at one of the sinks pretending to put in contact lenses while secretly recording the relevant information. This type of observational study is called naturalistic observation: observing behavior in its natural setting. To better understand peer exclusion, Suzanne Fanger collaborated with colleagues at the University of Texas to observe the behavior of preschool children on a playground. How did the observers remain inconspicuous over the duration of the study? They equipped a few of the children with wireless microphones (which the children quickly forgot about) and observed while taking notes from a distance. Also, the children in that particular preschool (a “laboratory preschool”) were accustomed to having observers on the playground (Fanger, Frankel, & Hazen, 2012).

Figure 9. A photograph shows two police cars driving, one with its lights flashing.

It is critical that the observer be as unobtrusive and as inconspicuous as possible: when people know they are being watched, they are less likely to behave naturally. If you have any doubt about this, ask yourself how your driving behavior might differ in two situations: In the first situation, you are driving down a deserted highway during the middle of the day; in the second situation, you are being followed by a police car down the same deserted highway (Figure 9).

It should be pointed out that naturalistic observation is not limited to research involving humans. Indeed, some of the best-known examples of naturalistic observation involve researchers going into the field to observe various kinds of animals in their own environments. As with human studies, the researchers maintain their distance and avoid interfering with the animal subjects so as not to influence their natural behaviors. Scientists have used this technique to study social hierarchies and interactions among animals ranging from ground squirrels to gorillas. The information provided by these studies is invaluable in understanding how those animals organize socially and communicate with one another. The primatologist Jane Goodall, for example, spent nearly five decades observing the behavior of chimpanzees in Africa (Figure 10). As an illustration of the types of concerns that a researcher might encounter in naturalistic observation, some scientists criticized Goodall for giving the chimps names instead of referring to them by numbers—using names was thought to undermine the emotional detachment required for the objectivity of the study (McKie, 2010).

Figure 10. (a) A photograph shows Jane Goodall speaking from a lectern. (b) A photograph shows a chimpanzee’s face.

The greatest benefit of naturalistic observation is the validity, or accuracy, of information collected unobtrusively in a natural setting. Having individuals behave as they normally would in a given situation means that we have a higher degree of ecological validity, or realism, than we might achieve with other research approaches. Therefore, our ability to generalize  the findings of the research to real-world situations is enhanced. If done correctly, we need not worry about people or animals modifying their behavior simply because they are being observed. Sometimes, people may assume that reality programs give us a glimpse into authentic human behavior. However, the principle of inconspicuous observation is violated as reality stars are followed by camera crews and are interviewed on camera for personal confessionals. Given that environment, we must doubt how natural and realistic their behaviors are.

The major downside of naturalistic observation is that such studies are often difficult to set up and control. In our restroom study, what if you stood in the restroom all day prepared to record people’s hand washing behavior and no one came in? Or, what if you have been closely observing a troop of gorillas for weeks only to find that they migrated to a new place while you were sleeping in your tent? The benefit of realistic data comes at a cost. As a researcher you have no control over when (or if) you have behavior to observe. In addition, this type of observational research often requires significant investments of time, money, and a good dose of luck.

Sometimes studies involve structured observation. In these cases, people are observed while engaging in set, specific tasks. An excellent example of structured observation comes from the Strange Situation, a procedure developed by Mary Ainsworth (you will read more about this in the module on lifespan development). The Strange Situation is a procedure used to evaluate attachment styles that exist between an infant and caregiver. In this scenario, caregivers bring their infants into a room filled with toys. The Strange Situation involves a number of phases, including a stranger coming into the room, the caregiver leaving the room, and the caregiver’s return to the room. The infant’s behavior is closely monitored at each phase, but it is the behavior of the infant upon being reunited with the caregiver that is most telling in terms of characterizing the infant’s attachment style with the caregiver.

Another potential problem in observational research is observer bias. Generally, people who act as observers are closely involved in the research project and may unconsciously skew their observations to fit their research goals or expectations. To protect against this type of bias, researchers should have clear criteria established for the types of behaviors recorded and how those behaviors should be classified. In addition, researchers often compare observations of the same event by multiple observers, in order to test inter-rater reliability: a measure of reliability that assesses the consistency of observations by different observers.
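The chapter does not name a specific inter-rater reliability statistic, but one widely used option is Cohen’s kappa, which corrects raw agreement for the agreement two observers would reach by chance. The sketch below uses invented playground codings (“play” vs. “exclude”) purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two observers, corrected for chance agreement."""
    n = len(rater_a)
    # Raw proportion of episodes the two observers coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability that two independent raters happen
    # to pick the same category, summed over all categories.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical data: two observers classify the same ten playground
# episodes as ordinary "play" or peer "exclude" behavior.
obs1 = ["play", "play", "exclude", "play", "exclude",
        "play", "play", "exclude", "play", "play"]
obs2 = ["play", "play", "exclude", "play", "play",
        "play", "play", "exclude", "play", "play"]

print(round(cohens_kappa(obs1, obs2), 2))  # 0.74
```

The raters agree on 9 of 10 episodes, but because “play” is so common, much of that agreement could occur by chance; kappa discounts it, yielding roughly 0.74 rather than 0.90.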

Case Studies

In 2011, the New York Times published a feature story on Krista and Tatiana Hogan, Canadian twin girls. These particular twins are unique because Krista and Tatiana are conjoined twins, connected at the head. There is evidence that the two girls are connected in a part of the brain called the thalamus, which is a major sensory relay center. Most incoming sensory information is sent through the thalamus before reaching higher regions of the cerebral cortex for processing.

The implications of this potential connection mean that it might be possible for one twin to experience the sensations of the other twin. For instance, if Krista is watching a particularly funny television program, Tatiana might smile or laugh even if she is not watching the program. This particular possibility has piqued the interest of many neuroscientists who seek to understand how the brain uses sensory information.

These twins represent an enormous resource in the study of the brain, and since their condition is very rare, it is likely that as long as their family agrees, scientists will follow these girls very closely throughout their lives to gain as much information as possible (Dominus, 2011).

In observational research, scientists are conducting a clinical or case study when they focus on one person or just a few individuals. Indeed, some scientists spend their entire careers studying just 10–20 individuals. Why would they do this? Obviously, when they focus their attention on a very small number of people, they can gain a tremendous amount of insight into those cases. The richness of information that is collected in clinical or case studies is unmatched by any other single research method. This allows the researcher to have a very deep understanding of the individuals and the particular phenomenon being studied.

If clinical or case studies provide so much information, why are they not more frequent among researchers? As it turns out, the major benefit of this particular approach is also a weakness. As mentioned earlier, this approach is often used when studying individuals who are interesting to researchers because they have a rare characteristic. Therefore, the individuals who serve as the focus of case studies are not like most other people. If scientists ultimately want to explain all behavior, focusing attention on such a special group of people can make it difficult to generalize any observations to the larger population as a whole. Generalizing refers to the ability to apply the findings of a particular research project to larger segments of society. Again, case studies provide enormous amounts of information, but since the cases are so specific, the potential to apply what’s learned to the average person may be very limited.

Surveys

Often, psychologists develop surveys as a means of gathering data. Surveys are lists of questions to be answered by research participants, and can be delivered as paper-and-pencil questionnaires, administered electronically, or conducted verbally (Figure 11). Generally, the survey itself can be completed in a short time, and the ease of administering a survey makes it easy to collect data from a large number of people.

Surveys allow researchers to gather data from larger samples than may be afforded by other research methods. A sample is a subset of individuals selected from a population, which is the overall group of individuals that the researchers are interested in. Researchers study the sample and seek to generalize their findings to the population.
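The sample/population distinction can be made concrete with a short sketch. Everything below is invented for illustration (a toy population of 10,000 people with a made-up daily TV-viewing trait); the point is simply that a sufficiently large simple random sample produces an estimate close to the population value:

```python
import random

random.seed(1)  # reproducible draw for illustration

# A toy population: 10,000 people, each with a daily TV-viewing
# figure in hours (both the trait and the numbers are invented).
population = [random.gauss(4.0, 1.5) for _ in range(10_000)]

# A simple random sample: 500 people drawn so that every member of
# the population has an equal chance of being selected.
sample = random.sample(population, k=500)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(round(pop_mean, 2), round(sample_mean, 2))
```

The two means land close together, which is the statistical basis for generalizing from a well-drawn sample to the population.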

Figure 11. A sample online survey reads, “Dear visitor, your opinion is important to us. We would like to invite you to participate in a short survey to gather your opinions and feedback on your news consumption habits. The survey will take approximately 10-15 minutes. Simply click the “Yes” button below to launch the survey. Would you like to participate?” Two buttons are labeled “yes” and “no.”

Surveys have both strengths and weaknesses in comparison to case studies. By using surveys, we can collect information from a larger sample of people. A larger sample is better able to reflect the actual diversity of the population, thus allowing better generalizability. Therefore, if our sample is sufficiently large and diverse, we can assume that the data we collect from the survey can be generalized to the larger population with more certainty than the information collected through a case study. However, given the greater number of people involved, we are not able to collect the same depth of information on each person that would be collected in a case study.

Another potential weakness of surveys is something we touched on earlier in this chapter: people don’t always give accurate responses. They may lie, misremember, or answer questions in a way that they think makes them look good. For example, people may report drinking less alcohol than is actually the case.

Any number of research questions can be answered through the use of surveys. One real-world example is the research conducted by Jenkins, Ruppel, Kizer, Yehl, and Griffin (2012) about the backlash against the US Arab-American community following the terrorist attacks of September 11, 2001. Jenkins and colleagues wanted to determine to what extent these negative attitudes toward Arab-Americans still existed nearly a decade after the attacks occurred. In one study, 140 research participants filled out a survey with 10 questions, including questions asking directly about the participant’s overt prejudicial attitudes toward people of various ethnicities. The survey also asked indirect questions about how likely the participant would be to interact with a person of a given ethnicity in a variety of settings (such as, “How likely do you think it is that you would introduce yourself to a person of Arab-American descent?”). The results of the research suggested that participants were unwilling to report prejudicial attitudes toward any ethnic group. However, there were significant differences between their pattern of responses to questions about social interaction with Arab-Americans compared to other ethnic groups: they indicated less willingness for social interaction with Arab-Americans compared to the other ethnic groups. This suggested that the participants harbored subtle forms of prejudice against Arab-Americans, despite their assertions that this was not the case (Jenkins et al., 2012).


Archival Research

(a) A photograph shows stacks of paper files on shelves. (b) A photograph shows a computer.

In comparing archival research to other research methods, there are several important distinctions. For one, the researcher employing archival research never directly interacts with research participants. Therefore, the investment of time and money to collect data is considerably less with archival research. Additionally, researchers have no control over what information was originally collected. Therefore, research questions have to be tailored so they can be answered within the structure of the existing data sets. There is also no guarantee of consistency between the records from one source to another, which might make comparing and contrasting different data sets problematic.

Longitudinal and Cross-Sectional Research

Sometimes we want to see how people change over time, as in studies of human development and lifespan. When we test the same group of individuals repeatedly over an extended period of time, we are conducting longitudinal research. Longitudinal research is a research design in which data are gathered from the same participants repeatedly over an extended period of time. For example, we may survey a group of individuals about their dietary habits at age 20, retest them a decade later at age 30, and then again at age 40.

Another approach is cross-sectional research. In cross-sectional research, a researcher compares multiple segments of the population at the same time. Using the dietary habits example above, the researcher might directly compare different groups of people by age. Instead of observing a group of people for 20 years to see how their dietary habits changed from decade to decade, the researcher would study a group of 20-year-old individuals and compare them to a group of 30-year-old individuals and a group of 40-year-old individuals. While cross-sectional research requires a shorter-term investment, it is also limited by differences that exist between the different generations (or cohorts) that have nothing to do with age per se, but rather reflect the social and cultural experiences of different generations that make them different from one another.

To illustrate this concept, consider the following survey findings. In recent years there has been significant growth in the popular support of same-sex marriage. Many studies on this topic break down survey participants into different age groups. In general, younger people are more supportive of same-sex marriage than are those who are older (Jones, 2013). Does this mean that as we age we become less open to the idea of same-sex marriage, or does this mean that older individuals have different perspectives because of the social climates in which they grew up? Longitudinal research is a powerful approach because the same individuals are involved in the research project over time, which means that the researchers need to be less concerned with differences among cohorts affecting the results of their study.

Often longitudinal studies are employed when researching various diseases in an effort to understand particular risk factors. Such studies often involve tens of thousands of individuals who are followed for several decades. Given the enormous number of people involved in these studies, researchers can feel confident that their findings can be generalized to the larger population. The Cancer Prevention Study-3 (CPS-3) is one of a series of longitudinal studies sponsored by the American Cancer Society aimed at determining predictive risk factors associated with cancer. When participants enter the study, they complete a survey about their lives and family histories, providing information on factors that might cause or prevent the development of cancer. Then every few years the participants receive additional surveys to complete. In the end, hundreds of thousands of participants will be tracked over 20 years to determine which of them develop cancer and which do not.

Clearly, this type of research is important and potentially very informative. For instance, earlier longitudinal studies sponsored by the American Cancer Society provided some of the first scientific demonstrations of the now well-established links between increased rates of cancer and smoking (American Cancer Society, n.d.) (Figure 13).

Figure 13. A photograph shows a pack of cigarettes and cigarettes in an ashtray. The pack of cigarettes reads, “Surgeon general’s warning: smoking causes lung cancer, heart disease, emphysema, and may complicate pregnancy.”

As with any research strategy, longitudinal research is not without limitations. For one, these studies require an incredible time investment by the researcher and research participants. Given that some longitudinal studies take years, if not decades, to complete, the results will not be known for a considerable period of time. In addition to the time demands, these studies also require a substantial financial investment. Many researchers are unable to commit the resources necessary to see a longitudinal project through to the end.

Research participants must also be willing to continue their participation for an extended period of time, and this can be problematic. People move, get married and take new names, get ill, and eventually die. Even without significant life changes, some people may simply choose to discontinue their participation in the project. As a result, the attrition rates, or reduction in the number of research participants due to dropouts, in longitudinal studies are quite high and increase over the course of a project. For this reason, researchers using this approach typically recruit many participants fully expecting that a substantial number will drop out before the end. As the study progresses, they continually check whether the sample still represents the larger population, and make adjustments as necessary.
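A toy calculation shows how quickly attrition compounds. The starting sample size and the per-wave dropout rate below are invented for illustration; real studies report widely varying rates:

```python
# Toy attrition model: begin with 1,000 participants and lose 15% of
# the remaining sample at each of five follow-up waves.
remaining = 1000
history = [remaining]
for wave in range(5):
    remaining = remaining * 85 // 100  # keep 85% each wave (integer math)
    history.append(remaining)

print(history)  # [1000, 850, 722, 613, 521, 442]
```

After five waves, fewer than half of the original participants remain, which is why longitudinal researchers over-recruit at the start and monitor whether the shrinking sample still resembles the population.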

Correlational Research

Did you know that as ice cream sales increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime, do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur.

It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream. How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is attributable to coincidence or causation?

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
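The ice cream/crime pattern is easy to reproduce with simulated data. In the sketch below (all numbers invented), temperature drives both variables while the two never influence each other directly, yet they come out strongly correlated; the Pearson correlation coefficient is one standard measure of the strength and direction of such a relationship:

```python
import random
import statistics

random.seed(7)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 365 simulated days: temperature (the confounding variable) influences
# both ice cream sales and crime; the two never influence each other.
temperature = [random.gauss(60, 15) for _ in range(365)]
ice_cream_sales = [2.0 * t + random.gauss(0, 10) for t in temperature]
crime_rate = [0.5 * t + random.gauss(0, 5) for t in temperature]

print(round(pearson_r(ice_cream_sales, crime_rate), 2))  # strong positive r
```

A strong positive coefficient emerges even though, by construction, neither variable causes the other; the shared dependence on temperature does all the work.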

Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.

Figure 15. A photograph shows a bowl of cereal.

Unfortunately, people mistakenly make claims of causation as a function of correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations, such as the possibility that someone at a healthy weight is more likely to regularly eat a healthy breakfast than someone who is obese or someone who avoids meals in an attempt to diet (Figure 15)? While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment to answer a research question. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, which allow researchers to explore how changes in one variable cause changes in another variable.

Watch this clip from Freakonomics for an example of how correlation does  not  indicate causation.


Illusory Correlations

The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 16).

Figure 16. A photograph shows the moon.

There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.

Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias . Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).

We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about and what can be done in the future to combat it?

Experiments


In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have younger relatives who watch cartoons featuring characters using martial arts to save the world from evildoers, with an impressive array of punching, kicking, and defensive postures. You notice that after watching these programs for a while, your young relatives mimic the fighting behavior of the characters portrayed in the cartoon (Figure 17).

Figure 17. A photograph shows a child pointing a toy gun.

These sorts of personal observations are what often lead us to formulate a specific hypothesis, but we cannot use limited personal observations and anecdotal evidence to rigorously test our hypothesis. Instead, to find out if real-world data supports our hypothesis, we have to conduct an experiment.

Designing an Experiment

The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference—the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be sure that any differences between the two are due to the experimental manipulation rather than chance.

In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. We measure the violent behavior in our control group after they watch nonviolent television programming for the same amount of time. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch nonviolent television programming for the same amount of time as the experimental group.
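One standard way to form two comparable groups is random assignment: shuffle the participant list and split it, so that each child is equally likely to land in either condition. The sketch below uses invented participant IDs and group sizes purely for illustration:

```python
import random

random.seed(0)  # reproducible assignment for illustration

# Hypothetical roster of 40 children in the study.
participants = [f"child_{i:02d}" for i in range(40)]

# Shuffle, then split: random assignment means the groups differ
# only by the experimental manipulation (on average).
assigned = participants[:]
random.shuffle(assigned)
experimental = assigned[:20]  # will watch the violent program
control = assigned[20:]       # will watch the nonviolent program

print(len(experimental), len(control))  # 20 20
```

Because chance decides each child’s condition, pre-existing differences among the children tend to balance out across the two groups rather than piling up in one of them.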

We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we also may choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.

Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts is recorded.

Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This situation describes a single-blind study, meaning that one group (the participants) is unaware of its group assignment (experimental or control) while the researcher who developed the experiment knows which participants are in each group.

Figure 18. A photograph shows three glass bottles of pills labeled as placebos.

In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. If you are familiar with the phrase placebo effect, you already have some idea as to why this is an important consideration. The placebo effect occurs when people’s expectations or beliefs influence or determine their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.

The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.

Why is that? Imagine that you are a participant in this study, and you have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.

To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (Figure 18).

Independent and Dependent Variables

In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (Figure 19). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.

A box labeled “independent variable: type of television programming viewed” contains a photograph of a person shooting an automatic weapon. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: violent behavior displayed” and has a photograph of a child pointing a toy gun.

We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. A good way to think about the relationship between the independent and dependent variables is with this question: What effect does the independent variable have on the dependent variable? Returning to our example, what effect does watching a half hour of violent television programming or nonviolent television programming have on the number of incidents of physical aggression displayed on the playground?

Selecting and Assigning Experimental Participants

Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants so we need to determine who to include. Participants  are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.

Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (Figure 20). If possible, we should use a random sample   (there are other types of samples, but for the purposes of this section, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.

In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders is a very large population, so we need to be more specific; instead we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders who we want to participate in our experiment.

In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.
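Drawing such a sample can be sketched in a few lines of Python's standard library; the population size and student ID numbers below are hypothetical placeholders, not data from an actual school district:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical roster: suppose the city has 5,000 fourth graders,
# each identified by a number.
population = list(range(1, 5001))

# random.sample draws without replacement, giving every student an
# equal chance of being selected -- a simple random sample.
sample = random.sample(population, k=200)

print(len(sample))       # 200 participants
print(len(set(sample)))  # 200 distinct IDs (no student chosen twice)
```

Because every roster number is equally likely to be drawn, a sufficiently large sample selected this way should mirror the population's mix of characteristics.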

(a) A photograph shows an aerial view of crowds on a street. (b) A photograph shows s small group of children.

Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment , all participants have an equal chance of being assigned to either group. There is statistical software that will randomly assign each of the fourth graders in the sample to either the experimental or the control group.

Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
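A minimal sketch of what that statistical software does, assuming 200 hypothetical participant IDs: shuffle the list, then split it in half, so every child is equally likely to land in either group.

```python
import random

random.seed(7)  # fixed seed for reproducibility

# Hypothetical participant IDs for the 200 sampled fourth graders.
participants = [f"child_{i:03d}" for i in range(1, 201)]

# Shuffle a copy of the roster, then split it down the middle.
shuffled = participants[:]
random.shuffle(shuffled)
experimental = shuffled[:100]  # will view the violent program
control = shuffled[100:]       # will view the nonviolent program

print(len(experimental), len(control))  # 100 100
print(set(experimental) & set(control)) # set() -- no child is in both groups
```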

Issues to Consider

While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.

Experimenters are also limited by ethical constraints. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.

Introduction to Statistical Thinking

Psychologists use statistics to assist them in analyzing data, and also to give more precise measurements to describe whether something is statistically significant. Analyzing data using statistics enables researchers to find patterns, make claims, and share their results with others. In this section, you’ll learn about some of the tools that psychologists use in statistical analysis.

  • Define reliability and validity
  • Describe the importance of distributional thinking and the role of p-values in statistical inference
  • Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions
  • Describe the basic structure of a psychological research article

Interpreting Experimental Findings

Once data are collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely any difference found is due to chance (and thus not meaningful). In psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. Stated another way, if chance alone were operating, we would expect a difference this large to turn up in fewer than 5 out of 100 repetitions of the experiment.

The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement. If we find that watching a violent television program results in more violent behavior than watching a nonviolent program, we can safely say that watching violent television programs causes an increase in the display of violent behavior.

Reporting Research

When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.

A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.

Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.

Dig Deeper: The Vaccine-Autism Myth and the Retraction of Published Studies

Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, several peer-reviewed publications published research making these claims. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.

A published piece of work can be rescinded when data is called into question because of falsification, fabrication, or serious research design problems. Once rescinded, the scientific community is informed that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (Figure 21). For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.

A photograph shows a child being given an oral vaccine.

Reliability and Validity

Dig Deeper: Everyday Connection: How Valid Is the SAT?

Standardized tests like the SAT are supposed to measure an individual’s aptitude for a college education, but how reliable and valid are such tests? Research conducted by the College Board suggests that scores on the SAT have high predictive validity for first-year college students’ GPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008). In this context, predictive validity refers to the test’s ability to effectively predict the GPA of college freshmen. Given that many institutions of higher education require the SAT for admission, this high degree of predictive validity might be comforting.

However, the emphasis placed on SAT scores in college admissions has generated some controversy on a number of fronts. For one, some researchers assert that the SAT is a biased test that places minority students at a disadvantage and unfairly reduces the likelihood of being admitted into a college (Santelices & Wilson, 2010). Additionally, some research has suggested that the predictive validity of the SAT is grossly exaggerated in how well it is able to predict the GPA of first-year college students. In fact, it has been suggested that the SAT’s predictive validity may be overestimated by as much as 150% (Rothstein, 2004). Many institutions of higher education are beginning to consider de-emphasizing the significance of SAT scores in making admission decisions (Rimer, 2008).

In 2014, College Board president David Coleman expressed his awareness of these problems, recognizing that college success is more accurately predicted by high school grades than by SAT scores. To address these concerns, he has called for significant changes to the SAT exam (Lewin, 2014).

Statistical Significance

Coffee cup with heart shaped cream inside.

Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower risk of dying (women’s risk was 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit? We will explore these results in more depth in the next section about drawing conclusions from statistics. Modern society has become awash in studies such as this; you can read about several such studies in the news every day.

Conducting such a study well, and interpreting its results, requires understanding basic ideas of statistics, the science of gaining insight from data. Key components of a statistical investigation are:

  • Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals? Were changes made to the participants’ coffee habits during the course of the study?
  • Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers?
  • Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance?
  • Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that coffee drinking is the cause of the decreased risk of death?)

Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of overall statistical investigation. In this section, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about.

Distributional Thinking

When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. Let’s take a look at an example.

Example 1 : Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Figure 23.

Table showing patients' reading levels and pamphlets' reading levels.

This example illustrates two fundamental ideas of statistical thinking:

  • Data vary. More specifically, values of a variable (such as the reading level of a cancer patient or the readability level of a cancer pamphlet) vary.
  • Analyzing the pattern of variation, called the distribution of the variable, often reveals insights.

Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 24.

Bar graph showing that the reading level of pamphlets is typically higher than the reading level of the patients.

Figure 24 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
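The pitfall of comparing only centers is easy to demonstrate with made-up numbers. The two lists below are invented for illustration (they are not the study's data); both share a median of 9, yet their distributions differ sharply:

```python
from statistics import median

# Invented grade-level data: equal medians, very different spreads.
patients = [3, 5, 6, 8, 9, 9, 10, 11, 12]
pamphlets = [8, 9, 9, 9, 9, 9, 10, 10, 10]

print(median(patients), median(pamphlets))  # 9 9
print(min(patients), min(pamphlets))        # 3 8 -- the low tails differ
```

Looking only at the medians, the groups appear identical; looking at the whole distributions reveals readers far below the easiest pamphlet.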

Finding Significance in Data

Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population? Let’s take a look at another example.

Example 2 : In a study reported in the November 2007 issue of Nature , researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with.

The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy. One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand?

Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or, maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many variables as might affect the responses as possible. It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process.

Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.

If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value is the probability of obtaining results at least as extreme as those observed if chance alone were at work. Within psychology, the most common standard for p-values is “p < .05”. What this means is that, if chance alone were operating, results this extreme would occur less than 5% of the time, so we treat them as evidence of a meaningful pattern in human psychology. We call this statistical significance.

So, in the study above, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy.
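The tail probability quoted above can be checked directly from the coin-tossing model, summing the binomial probabilities of 14, 15, or 16 "helper" picks out of 16:

```python
from math import comb

n, k = 16, 14  # 16 infants; 14 chose the helper toy

# Under the chance-only model, all 2**16 outcome sequences are equally
# likely; count those with 14 or more helper picks.
p_value = sum(comb(n, x) for x in range(k, n + 1)) / 2**n

print(round(p_value, 4))  # 0.0021, matching the value reported above
```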

If we compare the p-value to a cut-off value, such as 0.05, we see that the p-value is smaller. Because the p-value falls below that cut-off, we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.

Drawing Conclusions from Statistics

Generalizability.

Photo of a diverse group of college-aged students.

One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample ) from a much larger group of individuals (the population ) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.

Example 3 : The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.

In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size); this is the margin of error. A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
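The 1-over-root-n rule of thumb is a two-line computation; note that it actually gives about 3.2 percentage points, which the discussion above rounds to 3:

```python
from math import sqrt

n = 977          # GSS respondents in 2004
p_hat = 817 / n  # sample proportion who feel rushed

margin = 1 / sqrt(n)  # rule-of-thumb margin of error

print(f"{p_hat:.1%} ± {margin:.1%}")
# 83.6% ± 3.2% -- rounded to 3 percentage points in the text
```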

The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.

Cause and Effect

In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how were the groups formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?

Example 4 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 26, where higher scores indicate more creativity.

Image showing a dot for creativity scores, which vary between 5 and 27, and the types of motivation each person was given as a motivator, either extrinsic or intrinsic.

In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?

Figure 26 reveals that both motivation groups saw considerable variability in creativity scores, and the scores of the two groups overlap considerably. In other words, it’s certainly not the case that every subject prompted with intrinsic motivations scored higher than every subject prompted with extrinsic motivations, but there may still be a statistical tendency for the intrinsic group to score higher. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)

The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores within the groups. We can measure variability with statistics such as the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores lie within about 5 points of the mean score in each group. We also see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group on average, the difference is not extremely large.
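The one-standard-deviation comparison above is simple arithmetic on the reported summary statistics:

```python
# Summary statistics reported in the study.
mean_intrinsic, sd_intrinsic = 19.88, 4.40
mean_extrinsic, sd_extrinsic = 15.74, 5.25

diff = mean_intrinsic - mean_extrinsic
print(round(diff, 2))       # 4.14
print(diff < sd_extrinsic)  # True: the gap is within one extrinsic-group SD
```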

We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment  tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.
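The balancing effect of random assignment described above can be sketched in a short simulation. The male/female split here is hypothetical (the study’s actual composition is not given); the point is only that, on average, random splits produce nearly identical proportions in the two groups.

```python
import random

# A minimal sketch of why random assignment tends to balance background
# variables. Suppose (hypothetically) the 47 subjects were 24 women and
# 23 men; we randomly split them into groups of 23 and 24 many times and
# track how unbalanced the male/female split turns out to be.
participants = [1] * 24 + [0] * 23  # 1 = woman, 0 = man (made-up split)

rng = random.Random(0)
diffs = []
for _ in range(1000):
    rng.shuffle(participants)
    extrinsic_group = participants[:23]
    intrinsic_group = participants[23:]
    # Difference in the proportion of women between the two groups.
    diffs.append(sum(intrinsic_group) / 24 - sum(extrinsic_group) / 23)

# Averaged over many random assignments, the imbalance is close to zero.
print(round(sum(diffs) / len(diffs), 3))
```

Any single assignment may be slightly unbalanced, which is exactly the concern the next paragraph takes up.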

But does this always work? No. By “luck of the draw,” the groups may be a little different prior to answering the motivation survey. So the question becomes: is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem would have received the same creativity score no matter which group they were assigned to, so that the type of motivation in no way affected their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?

We again want to apply a probability model to approximate a p-value , but this time the model will be a bit different. Think of writing everyone’s creativity scores on index cards, shuffling the cards, dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment alone leads to a difference in means at least as large as 4.14. Figure 27 shows the results from 1,000 such hypothetical random assignments for these scores.
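The “index cards” simulation can be sketched in a few lines of Python. The scores below are hypothetical stand-ins (the study’s raw data are not reproduced here), so the resulting p-value illustrates the method rather than the study’s actual result.

```python
import random

def permutation_pvalue(group_a, group_b, reps=1000, seed=1):
    """Approximate p-value: how often does re-shuffling the scores alone
    produce a difference in means at least as large as the one observed?"""
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    combined = list(group_a) + list(group_b)
    rng = random.Random(seed)
    count = 0
    for _ in range(reps):
        rng.shuffle(combined)          # "shuffle the index cards"
        new_a = combined[:len(group_a)]
        new_b = combined[len(group_a):]
        if sum(new_a) / len(new_a) - sum(new_b) / len(new_b) >= observed:
            count += 1
    return count / reps

# Hypothetical creativity scores (NOT the actual study data): 24 intrinsic
# and 23 extrinsic subjects, matching the study's group sizes.
intrinsic = [22, 20, 18, 25, 17, 21, 19, 23, 16, 24, 20, 18,
             22, 19, 21, 17, 23, 20, 18, 22, 19, 21, 20, 18]
extrinsic = [15, 17, 13, 18, 14, 16, 12, 19, 15, 17, 14, 16,
             13, 18, 15, 17, 14, 16, 15, 13, 17, 14, 16]
p = permutation_pvalue(intrinsic, extrinsic)
print(p)
```

The proportion of shuffles that match or beat the observed difference is the approximate p-value; with 1,000 repetitions the estimate is accurate enough for the purposes of this discussion.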

Figure 27: Histogram of the differences in group means from 1,000 simulated random assignments, forming a roughly bell-shaped distribution.

Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random-assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.

Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.


Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.

So where does this leave us with regard to the coffee study mentioned previously? (Freedman, Park, Abnet, Hollenbeck, and Sinha (2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying, and women a 15% lower chance, than those who drank none.) We can answer many of the questions:

  • This was a 14-year study conducted by researchers at the National Cancer Institute.
  • The results were published in the June issue of the New England Journal of Medicine , a respected, peer-reviewed journal.
  • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
  • About 52,000 people died during the course of the study.
  • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
  • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
  • Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
  • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.
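The distinction between a relative risk reduction and an absolute one, quoted in the bullets above, can be checked with a few lines of arithmetic (figures rounded from the bullet points; this is a sketch, not the study’s exact analysis):

```python
# Rough figures from the bullets above: a ~12% baseline chance of dying
# during the study, and a "10% lower chance" for heavy coffee drinkers.
baseline_risk = 0.12
relative_reduction = 0.10

reduced_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - reduced_risk
print(f"{reduced_risk:.3f}")        # 0.108, i.e., about 10.8%
print(f"{absolute_reduction:.3f}")  # 0.012, i.e., 1.2 percentage points
```

A “10% lower chance” therefore corresponds to only about a one-percentage-point change in absolute risk, which is why the reduction sounds larger in headlines than it is in practice.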

This study needs to be reviewed in the larger context of similar studies and the consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.

Explore these outside resources to learn more about applied statistics:

  • Video about p-values:  P-Value Extravaganza
  • Interactive web applets for teaching and learning statistics
  • Inter-university Consortium for Political and Social Research  where you can find and analyze data.
  • The Consortium for the Advancement of Undergraduate Statistics
  • Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented?
  • Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not.

How to Read Research

In this course and throughout your academic career, you’ll be reading journal articles (meaning they were published by experts in a peer-reviewed journal) and reports that explain psychological research. It’s important to understand the format of these articles so that you can read them strategically and understand the information presented. Scientific articles vary in content or structure, depending on the type of journal to which they will be submitted. Psychological articles and many papers in the social sciences follow the writing guidelines and format dictated by the American Psychological Association (APA). In general, the structure follows: abstract, introduction, methods, results, discussion, and references.

  • Abstract : the abstract is a concise summary of the article. It summarizes the most important features of the manuscript, giving the reader a global first impression of the article. It is generally just one paragraph that explains the experiment and gives a short synopsis of the results.
  • Introduction : this section provides background information about the origin and purpose of performing the experiment or study. It reviews previous research and presents existing theories on the topic.
  • Method : this section covers the methodologies used to investigate the research question, identifying the participants and materials as well as describing the actual procedure . It should be sufficiently detailed to allow for replication.
  • Results : the results section presents key findings of the research, including reference to indicators of statistical significance.
  • Discussion : this section provides an interpretation of the findings, states their significance for current research, and derives implications for theory and practice. Alternative interpretations of the findings are also provided, particularly when it is not possible to determine the directionality of the effects. In the discussion, authors also acknowledge the strengths and limitations/weaknesses of the study and offer concrete directions for future research.

Watch this 3-minute video for an explanation on how to read scholarly articles. Look closely at the example article shared just before the two minute mark.

https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/

Practice identifying these key components in the following experiment: Food-Induced Emotional Resonance Improves Emotion Recognition.

In this chapter, you learned to

  • define and apply the scientific method to psychology
  • describe the strengths and weaknesses of descriptive, experimental, and correlational research
  • define the basic elements of a statistical investigation

Putting It Together: Psychological Research

Psychologists use the scientific method to examine human behavior and mental processes. Some of the methods you learned about include descriptive, experimental, and correlational research designs.

Watch the CrashCourse video to review the material you learned, then read through the following examples and see if you can come up with your own design for each type of study.

You can view the transcript for “Psychological Research: Crash Course Psychology #2” here (opens in new window).

Case Study: a detailed analysis of a particular person, group, business, event, etc. This approach is commonly used to learn more about rare examples, with the goal of describing that particular thing.

  • Ted Bundy was one of America’s most notorious serial killers who murdered at least 30 women and was executed in 1989. Dr. Al Carlisle evaluated Bundy when he was first arrested and conducted a psychological analysis of Bundy’s development of his sexual fantasies merging into reality (Ramsland, 2012). Carlisle believes that there was a gradual evolution of three processes that guided his actions: fantasy, dissociation, and compartmentalization (Ramsland, 2012). Read   Imagining Ted Bundy  (http://goo.gl/rGqcUv) for more information on this case study.

Naturalistic Observation : a researcher unobtrusively collects information without the participant’s awareness.

  • Drain and Engelhardt (2013) observed the evoked and spontaneous communicative acts of six nonverbal children with autism. Each of the children attended a school for children with autism and was in a different class. They were observed for 30 minutes of each school day. By observing these children without their knowledge, the researchers were able to see true communicative acts without any external influences.

Survey : participants are asked to provide information or responses to questions on a survey or structured assessment.

  • Educational psychologists can ask students to report their grade point average and what, if anything, they eat for breakfast on an average day. A healthy breakfast has been associated with better academic performance (DiGangi, 1999).
  • Anderson (1987) examined the relationship between uncomfortably hot temperatures and aggressive behavior in two studies of violent and nonviolent crime. Based on previous research by Anderson and Anderson (1984), it was predicted that violent crimes would be more prevalent during hotter times of the year and in hotter years generally. The studies confirmed this prediction.

Longitudinal Study: researchers   recruit a sample of participants and track them for an extended period of time.

  • In a study of a representative sample of 856 children, Eron and his colleagues (1972) found that a boy’s exposure to media violence at age eight was significantly related to his aggressive behavior ten years later, after he graduated from high school.

Cross-Sectional Study:  researchers gather participants from different groups (commonly different ages) and look for differences between the groups.

  • In 1996, Russell surveyed people of varying age groups and found that people in their 20s tend to report being more lonely than people in their 70s.

Correlational Design:  two different variables are measured to determine whether there is a relationship between them.

  • Thornhill et al. (2003) had people rate how physically attractive they found other people to be. They then had them separately smell t-shirts those people had worn (without knowing which clothes belonged to whom) and rate how good or bad their body odor was. They found that the more attractive someone was, the more pleasant their body odor was rated to be.

Experimental Design:  researchers manipulate one variable and measure its effect on another, with participants randomly assigned to conditions.

  • Clinical psychologists can test a new pharmaceutical treatment for depression by giving some patients the new pill and others an already-tested one to see which is the more effective treatment.


CC licensed content, Original

  • Psychological Research Methods. Provided by : Karenna Malavanti. License : CC BY-SA: Attribution ShareAlike

CC licensed content, Shared previously

  • Psychological Research. Provided by : OpenStax College. License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-introduction .
  • Why It Matters: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/introduction-15/
  • Introduction to The Scientific Method. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-the-scientific-method/
  • Research picture. Authored by : Mediterranean Center of Medical Sciences. Provided by : Flickr. License : CC BY: Attribution   Located at : https://www.flickr.com/photos/mcmscience/17664002728 .
  • The Scientific Process. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-the-scientific-process/
  • Ethics in Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/ethics/
  • Ethics. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-4-ethics . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction .
  • Introduction to Approaches to Research. Provided by : Lumen Learning. License : CC BY-NC-SA: Attribution NonCommercial ShareAlike   Located at:   https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-approaches-to-research/
  • Lec 2 | MIT 9.00SC Introduction to Psychology, Spring 2011. Authored by : John Gabrieli. Provided by : MIT OpenCourseWare. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : https://www.youtube.com/watch?v=syXplPKQb_o .
  • Paragraph on correlation. Authored by : Christie Napa Scollon. Provided by : Singapore Management University. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike Located at : http://nobaproject.com/modules/research-designs?r=MTc0ODYsMjMzNjQ%3D . Project : The Noba Project.
  • Descriptive Research. Provided by : Lumen Learning. License : CC BY-SA: Attribution ShareAlike   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-clinical-or-case-studies/
  • Approaches to Research. Authored by : OpenStax College.  License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction. Located at : https://openstax.org/books/psychology-2e/pages/2-2-approaches-to-research
  • Analyzing Findings. Authored by : OpenStax College. Located at : https://openstax.org/books/psychology-2e/pages/2-3-analyzing-findings . License : CC BY: Attribution . License Terms : Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction.
  • Experiments. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Research Review. Authored by : Jessica Traylor for Lumen Learning. License : CC BY: Attribution Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-conducting-experiments/
  • Introduction to Statistics. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/outcome-statistical-thinking/
  • histogram. Authored by : Fisher’s Iris flower data set. Provided by : Wikipedia. License : CC BY-SA: Attribution-ShareAlike   Located at : https://en.wikipedia.org/wiki/Wikipedia:Meetup/DC/Statistics_Edit-a-thon#/media/File:Fisher_iris_versicolor_sepalwidth.svg .
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman. Provided by : California Polytechnic State University, San Luis Obispo. License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike. License Terms : http://nobaproject.com/license-agreement   Located at : http://nobaproject.com/modules/statistical-thinking . Project : The Noba Project.
  • Drawing Conclusions from Statistics. Authored by: Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution   Located at: https://pressbooks.online.ucf.edu/lumenpsychology/chapter/reading-drawing-conclusions-from-statistics/
  • The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License: CC BY: Attribution
  • How to Read Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/how-to-read-research/
  • What is a Scholarly Article? Kimbel Library First Year Experience Instructional Videos. 9. Authored by:  Joshua Vossler, John Watts, and Tim Hodge.  Provided by : Coastal Carolina University  License :  CC BY NC ND:  Attribution-NonCommercial-NoDerivatives Located at :  https://digitalcommons.coastal.edu/kimbel-library-instructional-videos/9/
  • Putting It Together: Psychological Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:  https://pressbooks.online.ucf.edu/lumenpsychology/chapter/putting-it-together-psychological-research/
  • Research. Provided by : Lumen Learning. License : CC BY: Attribution   Located at:

All rights reserved content

  • Understanding Driver Distraction. Provided by : American Psychological Association. License : Other. License Terms: Standard YouTube License Located at : https://www.youtube.com/watch?v=XToWVxS_9lA&list=PLxf85IzktYWJ9MrXwt5GGX3W-16XgrwPW&index=9 .
  • Correlation vs. Causality: Freakonomics Movie. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=lbODqslc4Tg.
  • Psychological Research – Crash Course Psychology #2. Authored by : Hank Green. Provided by : Crash Course. License : Other. License Terms : Standard YouTube License Located at : https://www.youtube.com/watch?v=hFV71QPvX2I .

Public domain content

  • Researchers review documents. Authored by : National Cancer Institute. Provided by : Wikimedia. Located at : https://commons.wikimedia.org/wiki/File:Researchers_review_documents.jpg . License : Public Domain: No Known Copyright

grounded in objective, tangible evidence that can be observed time and time again, regardless of who is observing

well-developed set of ideas that propose an explanation for observed phenomena

(plural: hypotheses) tentative and testable statement about the relationship between two or more variables

an experiment must be replicable by another researcher

implies that a theory should enable us to make predictions about future events

able to be disproven by experimental results

implies that all data must be considered when evaluating a hypothesis

committee of administrators, scientists, and community members that reviews proposals for research involving human participants

process of informing a research participant about what to expect during an experiment, any risks involved, and the implications of the research, and then obtaining the person’s consent to participate

purposely misleading experiment participants in order to maintain the integrity of the experiment

when an experiment involved deception, participants are told complete and truthful information about the experiment at its conclusion

committee of administrators, scientists, veterinarians, and community members that reviews proposals for research involving non-human animals

research studies that do not test specific relationships between variables

research investigating the relationship between two or more variables

research method that uses hypothesis testing to make inferences about how one variable impacts and causes another

observation of behavior in its natural setting

inferring that the results for a sample apply to the larger population

when observations may be skewed to align with observer expectations

measure of agreement among observers on how they record and classify a particular event

observational research study focusing on one or a few people

list of questions to be answered by research participants—given as paper-and-pencil questionnaires, administered electronically, or conducted verbally—allowing researchers to collect data from a large number of people

subset of individuals selected from the larger population

overall group of individuals that the researchers are interested in

method of research using past records or data sets to answer various research questions, or to search for interesting patterns or relationships

studies in which the same group of individuals is surveyed or measured repeatedly over an extended period of time

compares multiple segments of a population at a single time

reduction in number of research participants as some drop out of the study over time

relationship between two or more variables; when two variables are correlated, one variable changes as the other does

number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r

two variables change in the same direction, both becoming either larger or smaller

Negative correlation: two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation

Cause-and-effect relationship: changes in one variable cause the changes in the other variable; can be determined only through an experimental research design

Confounding variable: unanticipated outside factor that affects both variables of interest, often giving the false impression that changes in one variable causes changes in the other variable, when, in actuality, the outside factor causes changes in both variables

Illusory correlation: seeing relationships between two things when in reality no such relationship exists

Confirmation bias: tendency to ignore evidence that disproves ideas or beliefs

Experimental group: group designed to answer the research question; experimental manipulation is the only difference between the experimental and control groups, so any differences between the two are due to experimental manipulation rather than chance

Control group: serves as a basis for comparison and controls for chance factors that might influence the results of the study—by holding such factors constant across groups so that the experimental manipulation is the only difference between groups

Operational definition: description of what actions and operations will be used to measure the dependent variables and manipulate the independent variables

Experimenter bias: researcher expectations skew the results of the study

Single-blind study: experiment in which the researcher knows which participants are in the experimental group and which are in the control group

Double-blind study: experiment in which both the researchers and the participants are blind to group assignments

Placebo effect: people's expectations or beliefs influencing or determining their experience in a given situation

Independent variable: variable that is influenced or controlled by the experimenter; in a sound experimental study, the independent variable is the only important difference between the experimental and control group

Dependent variable: variable that the researcher measures to see how much effect the independent variable had

Participants: subjects of psychological research

Random sample: subset of a larger population in which every member of the population has an equal chance of being selected

Random assignment: method of experimental group assignment in which all participants have an equal chance of being assigned to either group

Reliability: consistency and reproducibility of a given result

Validity: accuracy of a given result in measuring what it is designed to measure

Statistical analysis: determines how likely any difference between experimental groups is due to chance

Statistical significance: statistical probability that represents the likelihood that experimental results happened by chance

Psychological Science is the scientific study of mind, brain, and behavior. We will explore what it means to be human in this class. It has never been more important for us to understand what makes people tick, how to evaluate information critically, and why history matters. Psychology can also help you in your future career; indeed, there are very few jobs out there with no human interaction!

Because psychology is a science, we analyze human behavior through the scientific method. There are several ways to investigate human phenomena, such as observation, experiments, and more. We will discuss the basics, pros and cons of each! We will also dig deeper into the important ethical guidelines that psychologists must follow in order to do research. Lastly, we will briefly introduce ourselves to statistics, the language of scientific research. While reading the content in these chapters, try to find examples of material that can fit with the themes of the course.

To get us started:

  • The study of the mind moved away from introspection toward reaction-time studies as we learned more about empiricism
  • Psychologists work in careers outside of the typical "clinician" role. We advise in human factors, education, policy, and more!
  • While completing an observational study, psychologists work to aggregate common themes to explain the behavior of the group (sample) as a whole. In doing so, we still allow for normal variation within the group!
  • The IRB and IACUC are important in ensuring ethics are maintained for both human and animal subjects

Psychological Science: Understanding Human Behavior Copyright © by Karenna Malavanti is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Understanding Methods for Research in Psychology

A Psychology Research Methods Study Guide

Types of Research in Psychology

  • Cross-Sectional vs. Longitudinal Research
  • Reliability and Validity

Glossary of Terms

Research in psychology focuses on a variety of topics , ranging from the development of infants to the behavior of social groups. Psychologists use the scientific method to investigate questions both systematically and empirically.

Research in psychology is important because it provides us with valuable information that helps to improve human lives. By learning more about the brain, cognition, behavior, and mental health conditions, researchers are able to solve real-world problems that affect our day-to-day lives.

At a Glance

Knowing more about how research in psychology is conducted can give you a better understanding of what those findings might mean to you. Psychology experiments can range from simple to complex, but there are some basic terms and concepts that all psychology students should understand.

Start your studies by learning more about the different types of research, the basics of experimental design, and the relationships between variables.

Research in Psychology: The Basics

The first step in your review should include a basic introduction to psychology research methods . Psychology research can have a variety of goals. What researchers learn can be used to describe, explain, predict, or change human behavior.

Psychologists use the scientific method to conduct studies and research in psychology. The basic process of conducting psychology research involves asking a question, designing a study, collecting data, analyzing results, reaching conclusions, and sharing the findings.

The Scientific Method in Psychology Research

The steps of the scientific method in psychology research are:

  • Make an observation
  • Ask a research question and make predictions about what you expect to find
  • Test your hypothesis and gather data
  • Examine the results and form conclusions
  • Report your findings

Research in psychology can take several different forms. It can describe a phenomenon, explore the causes of a phenomenon, or look at relationships between one or more variables. Three of the main types of psychological research focus on:

Descriptive Studies

This type of research can tell us more about what is happening in a specific population. It relies on techniques such as observation, surveys, and case studies.

Correlational Studies

Correlational research is frequently used in psychology to look for relationships between variables. While researchers look at how variables are related, they do not manipulate any of the variables.

While correlational studies can suggest a relationship between two variables, finding a correlation does not prove that one variable causes a change in another. In other words, correlation does not equal causation.

Experimental Research Methods

Experiments are a research method that can look at whether changes in one variable cause changes in another. The simple experiment is one of the most basic methods of determining if there is a cause-and-effect relationship between two variables.

A simple experiment utilizes a control group of participants who receive no treatment and an experimental group of participants who receive the treatment.

Experimenters then compare the results of the two groups to determine if the treatment had an effect.
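A minimal sketch of that comparison, using invented scores (a real analysis would also test whether the difference is statistically significant):

```python
from statistics import mean

# Hypothetical memory-test scores; all numbers are invented for illustration.
control_group = [12, 15, 11, 14, 13, 12, 16, 13]       # received no treatment
experimental_group = [17, 19, 16, 18, 20, 17, 15, 18]  # received the treatment

diff = mean(experimental_group) - mean(control_group)
print(f"Control mean:      {mean(control_group):.2f}")       # 13.25
print(f"Experimental mean: {mean(experimental_group):.2f}")  # 17.50
print(f"Mean difference:   {diff:.2f}")                      # 4.25
```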

Cross-Sectional vs. Longitudinal Research in Psychology

Research in psychology can also involve collecting data at a single point in time, or gathering information at several points over a period of time.

Cross-Sectional Research

In a cross-sectional study, researchers collect data from participants at a single point in time. This is a descriptive type of research and cannot be used to determine cause and effect because researchers do not manipulate the independent variables.

However, cross-sectional research does allow researchers to look at the characteristics of the population and explore relationships between different variables at a single point in time.

Longitudinal Research

A longitudinal study is a type of research in psychology that involves looking at the same group of participants over a period of time. Researchers start by collecting initial data that serves as a baseline, and then collect follow-up data at certain intervals. These studies can last days, months, or years. 

The longest-running longitudinal study in psychology began in 1921 and is planned to continue until the last participant dies or withdraws. As of 2003, more than 200 of the participants were still alive.

The Reliability and Validity of Research in Psychology

Reliability and validity are two concepts that are also critical in psychology research. In order to trust the results, we need to know if the findings are consistent (reliability) and that we are actually measuring what we think we are measuring (validity).

Reliability

Reliability is a vital component of a valid psychological test. What is reliability? How do we measure it? Simply put, reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly.

Validity

When determining the merits of a psychological test, validity is one of the most important factors to consider. What exactly is validity? One of the greatest concerns when creating a psychological test is whether or not it actually measures what we think it is measuring.

For example, a test might be designed to measure a stable personality trait but instead measures transitory emotions generated by situational or environmental conditions. A valid test ensures that the results accurately reflect the dimension undergoing assessment.

Review some of the key terms that you should know and understand about psychology research methods. Spend some time studying these terms and definitions before your exam. Some key terms that you should know include:

  • Correlation
  • Demand characteristic
  • Dependent variable
  • Hawthorne effect
  • Independent variable
  • Naturalistic observation
  • Placebo effect
  • Random assignment
  • Replication
  • Selective attrition


By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Explore Psychology

Psychological Research Methods: Types and Tips

Categories Research Methods

Psychological research methods are the techniques used by scientists and researchers to study human behavior and mental processes. These methods are used to gather empirical evidence.

The goal of psychological research methods is to obtain objective and verifiable data collected through scientific experimentation and observation. 

The research methods that are used in psychology are crucial for understanding how and why people behave the way they do, as well as for developing and testing theories about human behavior.

Table of Contents

Reasons to Learn More About Psychological Research Methods

One of the key goals of psychological research is to make sure that the data collected is reliable and valid.

  • Reliability means that the data is consistent and can be replicated
  • Validity refers to the accuracy of the data collected

Researchers must take great care to ensure that their research methods are reliable and valid, as this is essential for drawing accurate conclusions and making valid claims about human behavior.

High school and college students who are interested in psychology can benefit greatly from learning about research methods. Understanding how psychologists study human behavior and mental processes can help students develop critical thinking skills and a deeper appreciation for the complexity of human behavior.

Having an understanding of these research methods can prepare students for future coursework in psychology, as well as for potential careers in the field.

Quantitative vs. Qualitative Psychological Research Methods

Psychological research methods can be broadly divided into two main types: quantitative and qualitative. These two methods differ in their approach to data collection and analysis.

Quantitative Research Methods

Quantitative research methods involve collecting numerical data through controlled experiments, surveys, and other objective measures.

The goal of quantitative research is to identify patterns and relationships in the data that can be analyzed statistically.

Researchers use statistical methods to test hypotheses, identify significant differences between groups, and make predictions about future behavior.

Qualitative Research Methods

Qualitative research methods, on the other hand, involve collecting non-numerical data through open-ended interviews, observations, and other subjective measures.

Qualitative research aims to understand the subjective experiences and perspectives of individuals and groups.

Researchers use methods such as content analysis and thematic analysis to identify themes and patterns in the data and to develop rich descriptions of the phenomenon under study.

How Quantitative and Qualitative Methods Are Used

While quantitative and qualitative research methods differ in their approach to data collection and analysis, they are often used together to gain a more complete understanding of complex phenomena.

For example, a researcher studying the impact of social media on mental health might use a quantitative survey to gather numerical data on social media use and a qualitative interview to gain insight into participants’ subjective experiences with social media.

Types of Psychological Research Methods

There are several types of research methods used in psychology, including experiments, surveys, case studies, and observational studies. Each method has its strengths and weaknesses, and researchers must choose the most appropriate method based on their research question and the data they hope to collect.

Case Studies

A case study is a research method used in psychology to investigate an individual, group, or event in great detail. In a case study, the researcher gathers information from a variety of sources, including:

  • Observation
  • Document analysis

These methods allow researchers to gain an in-depth understanding of the case being studied.

Case studies are particularly useful when the phenomenon under investigation is rare or complex, and when it is difficult to replicate in a laboratory setting.

Surveys

Surveys are a commonly used research method in psychology that involve gathering data from a large number of people about their thoughts, feelings, behaviors, and attitudes.

Surveys can be conducted in a variety of ways, including:

  • In-person interviews
  • Online questionnaires
  • Paper-and-pencil surveys

Surveys are particularly useful when researchers want to study attitudes or behaviors that are difficult to observe directly or when they want to generalize their findings to a larger population.

Experimental Psychological Research Methods

Experimental studies are a research method commonly used in psychology to investigate cause-and-effect relationships between variables. In an experimental study, the researcher manipulates one or more variables to see how they affect another variable, while controlling for other factors that may influence the outcome.

Experimental studies are considered the gold standard for establishing cause-and-effect relationships, as they allow researchers to control for potential confounding variables and to manipulate variables in a systematic way.

Correlational Psychological Research Methods

Correlational research is a research method used in psychology to investigate the relationship between two or more variables without manipulating them. The goal of correlational research is to determine the extent to which changes in one variable are associated with changes in another variable.

In other words, correlational research aims to establish the direction and strength of the relationship between two or more variables.

Naturalistic Observation

Naturalistic observation is a research method used in psychology to study behavior in natural settings, without any interference or manipulation from the researcher.

The goal of naturalistic observation is to gain insight into how people or animals behave in their natural environment without the influence of laboratory conditions.

Meta-Analysis

A meta-analysis is a research method commonly used in psychology to combine and analyze the results of multiple studies on a particular topic.

The goal of a meta-analysis is to provide a comprehensive and quantitative summary of the existing research on a topic, in order to identify patterns and relationships that may not be apparent in individual studies.
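As a highly simplified sketch (real meta-analyses typically weight studies by the inverse of their variance and use dedicated software), a summary effect can be approximated by weighting each study's effect size by its sample size; every number below is invented:

```python
# Hypothetical effect sizes (Cohen's d) and sample sizes from five studies;
# all figures are invented for illustration only.
effect_sizes = [0.30, 0.45, 0.25, 0.50, 0.40]
sample_sizes = [40, 120, 60, 200, 80]

# Sample-size-weighted average of the effect sizes.
weighted_mean = sum(d * n for d, n in zip(effect_sizes, sample_sizes)) / sum(sample_sizes)
print(f"Summary effect: d = {weighted_mean:.3f}")  # 0.426
```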

Tips for Using Psychological Research Methods

Here are some tips for high school and college students who are interested in using psychological research methods:

Understand the different types of research methods: 

Before conducting any research, it is important to understand the different types of research methods that are available, such as surveys, case studies, experiments, and naturalistic observation.

Each method has its strengths and limitations, and selecting the appropriate method depends on the research question and variables being investigated.

Develop a clear research question: 

A good research question is essential for guiding the research process. It should be specific, clear, and relevant to the field of psychology. It is also important to consider ethical considerations when developing a research question.

Use proper sampling techniques: 

Sampling is the process of selecting participants for a study. It is important to use proper sampling techniques to ensure that the sample is representative of the population being studied.

Random sampling is considered the gold standard for sampling, but other techniques, such as convenience sampling, may also be used depending on the research question.
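The difference between the two techniques can be sketched in a few lines of Python (the participant names are placeholders): random sampling gives every member of the population an equal chance of selection, while convenience sampling simply takes whoever is easiest to reach.

```python
import random

# A placeholder population of 1,000 hypothetical people.
population = [f"participant_{i}" for i in range(1000)]

# Random sampling: every member has an equal chance of being selected.
random_sample = random.sample(population, k=50)

# Convenience sampling: simply take whoever is easiest to reach,
# e.g. the first 50 people on the list.
convenience_sample = population[:50]

print(len(random_sample), len(convenience_sample))  # 50 50
```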

Use reliable and valid measures:

It is important to use reliable and valid measures to ensure the data collected is accurate and meaningful. This may involve using established measures or developing new measures and testing their reliability and validity.

Consider ethical issues:

It is important to consider ethical considerations when conducting psychological research, such as obtaining informed consent from participants, maintaining confidentiality, and minimizing any potential harm to participants.

In many cases, you will need to submit your study proposal to your school’s institutional review board for approval.

Analyze and interpret the data appropriately : 

After collecting the data, it is important to analyze and interpret the data appropriately. This may involve using statistical techniques to identify patterns and relationships between variables, and using appropriate software tools for analysis.

Communicate findings clearly: 

Finally, it is important to communicate the findings clearly in a way that is understandable to others. This may involve writing a research report, giving a presentation, or publishing a paper in a scholarly journal.

Clear communication is essential for advancing the field of psychology and informing future research.

Frequently Asked Questions

What are the 5 methods of psychological research?

The five main methods of psychological research are:

  • Experimental research : This method involves manipulating one or more independent variables to observe their effect on one or more dependent variables while controlling for other variables. The goal is to establish cause-and-effect relationships between variables.
  • Correlational research : This method involves examining the relationship between two or more variables, without manipulating them. The goal is to determine whether there is a relationship between the variables and the strength and direction of that relationship.
  • Survey research : This method involves gathering information from a sample of participants using questionnaires or interviews. The goal is to collect data on attitudes, opinions, behaviors, or other variables of interest.
  • Case study research : This method involves an in-depth analysis of a single individual, group, or event. The goal is to gain insight into specific behaviors, attitudes, or phenomena.
  • Naturalistic observation research : This method involves observing and recording behavior in natural settings without any manipulation or interference from the researcher. The goal is to gain insight into how people or animals behave in their natural environment.

What is the most commonly used psychological research method?

The most common research method used in psychology varies depending on the research question and the variables being investigated. However, correlational research is one of the most frequently used methods in psychology.

This is likely because correlational research is useful in studying a wide range of psychological phenomena, and it can be used to examine the relationships between variables that cannot be manipulated or controlled, such as age, gender, and personality traits. 

Experimental research is also a widely used method in psychology, particularly in the areas of cognitive psychology , social psychology , and developmental psychology .

Other methods, such as survey research, case study research, and naturalistic observation, are also commonly used in psychology research, depending on the research question and the variables being studied.

How do you know which research method to use?

Deciding which type of research method to use depends on the research question, the variables being studied, and the practical considerations involved. Here are some general guidelines to help students decide which research method to use:

  • Identify the research question : The first step is to clearly define the research question. What are you trying to study? What is the hypothesis you want to test? Answering these questions will help you determine which research method is best suited for your study.
  • Choose your variables : Identify the independent and dependent variables involved in your research question. This will help you determine whether an experimental or correlational research method is most appropriate.
  • Consider your resources : Think about the time, resources, and ethical considerations involved in conducting the research. For example, if you are working on a tight budget, a survey or correlational research method may be more feasible than an experimental study.
  • Review existing literature : Conducting a literature review of previous studies on the topic can help you identify the most appropriate research method. This can also help you identify gaps in the literature that your study can fill.
  • Consult with a mentor or advisor : If you are still unsure which research method to use, consult with a mentor or advisor who has experience in conducting research in your area of interest. They can provide guidance and help you make an informed decision.



Research Methods in Psychology

PsycLearn Essentials

Learn how researchers in psychology conduct their studies and better appreciate and critique the research presented in news media, in other courses, or in the psychological research literature.

Presented in collaboration with

PsycLearn: Engage students, advance learning, elevate psychology

APA PsycLearn provides instructors with a complete, all-digital course curriculum to immerse students in a personalized learning experience.


Quantitative Research Methods

Principles of design and ethics for research in psychology


Data Analysis for the Behavioral Sciences

A concepts-focused introduction to basic descriptive and inferential statistics


Qualitative Research in Psychology

Basic qualitative methods like narrative inquiry and ethnography are introduced


Writing a Psychology Case Study: Mastering the Skill


Creating a case study is an exciting and challenging assignment, isn't it? Writing this kind of paper requires you to combine theoretical knowledge with practical skills, and analytical thinking is a great help as well. Case studies are widely used in the psychological, medical, educational, and social spheres.

Why are case studies important in psychology? They give a wonderful chance

  • to understand personal behavior and manners,
  • to investigate symptoms and offer effective treatment,
  • to interpret group or individual identity. 

Today we'll discuss the meaning of psychological case studies – the definition, types, and benefits – and offer a few useful tips on how to write these papers. So let's start the exploration.

Case Studies in Psychology: Making an Overview

Generally, a case study is an extensive analysis of a person, group, or event. It may concern any aspect of the subject's life. This method is especially effective when it is impossible to carry out an experiment.

So, what is a case study in psychology? A psychology case study involves focused information gathering about real life: behavior, manners, habits, and so on. Mostly, it concerns practical matters rather than theory. You may collect data through psychometric testing, observation, interviews, and archival materials. The process resembles examining the subject through a magnifying glass.

Due to their nature, psychology case studies play a crucial role in the investigation of the human mind:

  • give a meticulous description of personal or collective behavior;
  • help to examine the specificity of every unique case;
  • provide practical evidence for theoretical hypotheses;
  • bring a complete understanding of the investigated phenomenon;
  • produce a wide range of practical applications.

Case Studies in Psychology: Types and Features

The case study method in psychology is a complex matter. There are different types of studies, and each serves a distinct purpose.

Descriptive

These are conducted to formulate a detailed description of a particular case, often to support a hypothesis.

Exploratory 

Usually they are a start for further, more comprehensive investigation.

Explanatory

They are used to define the reasons for a researched matter.

Instrumental

The subject under observation serves as a tool for illustrating a broader psychological theory.

Intrinsic

These provide data about specific aspects of a particular phenomenon, where the case is studied for its own intrinsic interest.

Besides, they may be:

  • individual or collective (according to the number of target persons);
  • cross-sectional or longitudinal (marking a situation in a distinct time point or a long period correspondingly).

Benefits of Case Studies in Psychology

Psychological case studies have several advantages compared with other investigation methods in this field:

  • It provides a vivid picture of the phenomenon, showing its nuances and specificity.
  • It is relatively easy to carry out, especially in practical and ethical terms.
  • It offers a true-to-life, fairly objective context.
  • It is a good educational tool.
  • It allows flexible investigation, adaptable to current circumstances.

8 Hints on How to Write a Case Study in Psychology

If you are tasked with creating a psychological case study, be attentive, observant, and patient. It is ideal if you have both analytical and storytelling skills, as both prove helpful when writing this kind of paper. To simplify the process, we offer a few recommendations concerning composition.

  • Make up a subject profile. It should be specific enough, containing the subject's name, age, status, and other necessary personal information.
  • Choose the type of case study: single-subject or collective; cross-sectional or longitudinal; exploratory, explanatory, illustrative, instrumental, or others.
  • Gather background information, such as: personal history; various psychological factors (character traits, emotional manifestations, and similar matters); social aspects (environmental influences on the person); events that greatly influenced the subject; etc.
  • Make a meticulous description of the target issue that is the focus of the investigation. As a rule, it should comprise symptoms, problems, and behavioral specifics. It is also advisable to record the exact time and duration of the issue's manifestations, if any.
  • Analyze all the gathered data.
  • Produce a diagnosis and offer a treatment strategy (therapy, medication, lifestyle changes, etc.).
  • Comment on the treatment process and its aims.
  • Write a discussion section interpreting the results of the study and suggesting areas for further work.

Having explored the definition of a case study in psychology in detail, you should now know what data to gather and how to achieve a successful result without trouble. Moreover, at any moment you may turn to Aithor, an AI-powered generator, to get an example case study project on your topic.

7 General Tips for Writing a Psychology Case Study

There are a few additional tips on how to produce a fine case study in psychology.

  • Make sure that you can communicate with the subject and access the necessary information freely.
  • Prepare a detailed case study outline.
  • Record everything you learn in the course of the investigation.
  • Respect ethical norms.
  • Discuss the case with colleagues and professionals.
  • Analyze everything thoroughly.
  • Be precise, patient, and persistent.

To cap it all, the definition of a case study in psychology underlines the practical importance of carrying out such investigations. Studying a case in detail helps in producing adequate diagnoses and treatments. So, try to carry out your exploration in the most consistent and clear way possible. We hope the recommendations presented here will assist you in creating fine projects. Good luck!


Amanda Moses B S.Sc

Tips for Effective Case Formulations for Psychologists

Case formulation is a cornerstone of effective psychological practice.

Posted September 4, 2024 | Reviewed by Lybi Ma


Case formulation is a cornerstone of effective psychological practice, providing a deep, nuanced understanding of each client. By synthesising data from client interviews, behavioural observations, and assessment results, psychologists can develop a comprehensive picture of the challenges their clients face, along with their goals. Refining your case formulation skills can significantly enhance treatment planning and lead to better client outcomes. Here are some key considerations.

1. Gather Information from Multiple Sources

To create a robust case formulation, gathering comprehensive data from diverse sources is essential. This includes conducting thorough clinical interviews, reviewing referral information and medical history, observing behaviours in real time, and administering relevant psychological assessments. Using a multi-method approach ensures that key elements of the client’s experience are not overlooked, resulting in a richer and more accurate understanding of the client.

2. Use a Structured Framework

One well-regarded framework for case formulation is the 5P model, which includes the following elements:

  • Presenting Problem: The client's primary issue or concern.
  • Predisposing Factors: Historical or long-standing vulnerabilities that have contributed to the development of the problem.
  • Precipitating Factors: Recent events or triggers that have led to the onset of the current issue.
  • Perpetuating Factors: Factors that are maintaining the problem over time.
  • Protective Factors: Strengths, resources, or coping strategies that may aid in recovery.

This framework offers a systematic way to organise complex information, allowing for a holistic view of the client's situation. By using the 5P model, psychologists can identify not only what is driving the current problem but also what supports the client in their efforts to manage it.
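Purely as an illustration of how the 5P elements can be kept organised, here is a minimal Python sketch. The class and field names are hypothetical, not a clinical standard; the point is that a structured record makes a formulation easy to review and update.

```python
from dataclasses import dataclass, field

@dataclass
class FivePFormulation:
    """Hypothetical container mirroring the 5P model's elements."""
    presenting_problem: str
    predisposing: list = field(default_factory=list)   # long-standing vulnerabilities
    precipitating: list = field(default_factory=list)  # recent triggers
    perpetuating: list = field(default_factory=list)   # maintaining factors
    protective: list = field(default_factory=list)     # strengths and resources

    def summary(self) -> str:
        # One-line overview a clinician might scan before a session.
        return (f"Problem: {self.presenting_problem} | "
                f"{len(self.predisposing)} predisposing, "
                f"{len(self.precipitating)} precipitating, "
                f"{len(self.perpetuating)} perpetuating, "
                f"{len(self.protective)} protective factor(s)")

# Illustrative, entirely fictional case:
case = FivePFormulation(
    presenting_problem="Panic attacks at work",
    precipitating=["Recent promotion"],
    perpetuating=["Avoidance of meetings"],
    protective=["Supportive partner", "Good insight"],
)
print(case.summary())
```

Because the formulation should stay dynamic (see point 5 below), a structure like this can simply be appended to or revised as new information emerges.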

3. Consider the Broader Context

Understanding the broader context in which a client operates is crucial. Factors such as cultural background, gender, neurotype, family dynamics, social support networks, and environmental stressors can all influence the client’s presenting issues and how they experience them. Recognising and incorporating these contextual factors into the case formulation ensures that interventions are tailored to the client’s unique circumstances, which can significantly improve the effectiveness of treatment.

4. Collaborate with the Client

Case formulation should be a collaborative process. Engaging the client in developing their formulation promotes accuracy, helps ensure that their perspective is fully integrated, and increases their commitment to the therapeutic process. This collaborative approach also strengthens the therapeutic alliance, empowering the client to be an active participant in their treatment.

5. Keep the Formulation Dynamic

Case formulation is not static; over time, it should evolve as new information emerges or as the client’s circumstances and symptoms change. Regularly reviewing and refining the formulation ensures that it remains relevant and that treatment plans can be adjusted accordingly. A dynamic formulation allows the psychologist to remain responsive to the client’s shifting needs and make timely modifications to the therapeutic approach.

6. Ensure Clear Documentation

Well-documented case formulations are essential for continuity of care. Clear, concise documentation ensures that the formulation is a useful reference throughout the therapeutic process and can easily be updated when needed. Good documentation also supports collaborative care, as it allows other professionals involved in the client's treatment to quickly understand the case formulation if needed.

Effective case formulation is a fundamental aspect of psychological practice. It not only informs treatment planning but also deepens the understanding of the client's experiences, fostering a more empathetic and attuned approach to care. By gathering comprehensive information, using structured frameworks like the 5P model, considering the broader context, collaborating with clients, and refining formulations over time, psychologists—whether early in their careers or well-established—can provide more tailored, effective interventions. Ultimately, mastering case formulation strengthens both the therapeutic process and the client’s journey toward recovery, creating a solid foundation for meaningful change.


Amanda Moses, B S.Sc, is dually registered as a clinical and counseling psychologist in the UK and a psychologist and board-approved supervisor in Australia. She provides training and supervision to early career psychologists.


Ecological Momentary Assessment (EMA)

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ecological momentary assessment (EMA) is a research approach that gathers repeated, real-time data on participants’ experiences and behaviors in their natural environments.

This method, also known as experience sampling method (ESM), ambulatory assessment, or real-time data capture, aims to minimize recall bias and capture the dynamic fluctuations in thoughts, feelings, and actions as they unfold in daily life.

EMA typically involves prompting individuals to answer brief surveys or record specific events throughout the day using electronic devices or paper diaries.

This real-time data collection minimizes recall bias and offers a more accurate representation of an individual’s experience.

The repeated assessments collected in experience sampling studies allow researchers to study microprocesses that unfold over time, such as the relationship between stress and mood or the factors that trigger smoking relapse.

This makes EMA a valuable tool for researchers who want to study how people behave and feel in their natural environments.

Here are some key features of ecological momentary assessment:

  • Real-time assessment: Experience sampling involves asking participants to report on their experiences as they are happening, or shortly thereafter. This is typically done using electronic devices such as smartphones, but can also be done using paper diaries.
  • Repeated assessments: Experience sampling studies typically involve asking participants to complete multiple assessments throughout the day, over a period of several days or weeks. This allows researchers to track changes in participants’ experiences over time.
  • Focus on subjective experience: Experience sampling is often used to study subjective experiences such as moods, emotions, and thoughts. However, it can also be used to study objective behaviors such as smoking, eating, or social interaction.

How Experience Sampling Works

Participants are provided with a device.

Traditionally, EMA studies relied on preprogrammed digital wristwatches and paper assessment forms. Wristwatches could be pre-programmed to emit beeps at random or fixed intervals throughout the day, signaling participants to record their experiences.

Currently, smartphones are the dominant tool for both signaling and data collection in ESM studies.

Not all participants have equal access to or comfort with technology. Researchers need to consider the accessibility of mobile interfaces for participants with visual or hearing impairments, varying levels of technological literacy, and preferences for different input methods.

Consider the specific characteristics and needs of the target population when selecting devices and designing survey interfaces.

Sampling design.

EMA studies utilize specific sampling designs to determine when and how often participants are prompted to provide data.

Two primary sampling designs are commonly employed:

  • Time-based sampling: Participants receive prompts at predetermined times throughout the day. These times can be fixed intervals, such as every hour, or randomized within predefined time blocks. For example, a study might instruct participants to complete an assessment every 90 minutes between 7:30 a.m. and 10:30 p.m. for six consecutive days.
  • Event-based sampling: Participants are prompted to complete assessments whenever a specific event of interest occurs. This could include events like smoking a cigarette, having a social interaction, experiencing a specific symptom, or engaging in a particular activity.
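To make the two designs concrete, here is a minimal Python sketch (function names are illustrative, not from any EMA platform). It generates a fixed-interval, time-based schedule matching the 90-minute example above, and shows a participant-initiated event record of the kind event-based sampling relies on.

```python
from datetime import datetime, timedelta

def fixed_interval_schedule(start, end, interval_minutes):
    """Time-based sampling: prompts at fixed intervals, e.g., every
    90 minutes between 7:30 a.m. and 10:30 p.m. as in the example above."""
    prompts, t = [], start
    step = timedelta(minutes=interval_minutes)
    while t <= end:
        prompts.append(t)
        t += step
    return prompts

def record_event(log, event, context):
    """Event-based sampling: the participant initiates a record whenever
    the target event (e.g., smoking a cigarette) occurs."""
    log.append({"time": datetime.now(), "event": event, **context})

day = datetime(2024, 9, 4)
schedule = fixed_interval_schedule(day.replace(hour=7, minute=30),
                                   day.replace(hour=22, minute=30), 90)
print(len(schedule))  # 11 prompts across the 15-hour day

log = []
record_event(log, "cigarette", {"location": "balcony", "company": "alone"})
```

The contrast is visible in who drives data collection: the schedule is fixed by the researcher in advance, while the event log grows only when the participant notices and records an occurrence.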

Questionnaire items.

Participants receive prompts throughout the day: These prompts, often referred to as “beeps,” signal participants to answer a short questionnaire on their device.

The survey questions are carefully designed to capture information relevant to the research question. They often use validated scales to measure various psychological constructs, such as mood, stress, social connectedness, or symptoms.

Researchers should consider how long it takes to complete surveys, the frequency of assessments, and the overall burden on participants’ time and attention. Adjustments to the protocol (e.g., reducing survey length or frequency) might be necessary based on pilot participant feedback.

Researchers should assess whether survey items are clear, relevant, and appropriate for the context of participants’ daily lives.

The format of the questions can be open-ended, close-ended, or use scales, depending on the study’s aims. The questionnaires typically include questions about:

  • Current thoughts, feelings, and behaviors: This could include questions about mood or emotions, stress levels, urges, or social interactions.
  • Contextual factors: This may include questions about their physical location, company (who they are with), or activity at the time of the prompt.

Participants’ responses to these surveys are then aggregated and analyzed to identify patterns in their experiences over time.

Sensor data.

In addition to self-reported questionnaires, some EMA studies utilize sensors embedded in smartphones or wearable devices to collect passive data about the participant’s environment and behavior.

This could include data from GPS sensors, accelerometers, microphones, and other sensors that capture information about location, movement, social interactions, and physiological responses.

This sensor data can help researchers gain a richer understanding of the context surrounding participants’ experiences and potentially identify objective correlates of self-reported experiences.

Data management and analysis.

The richness of EMA data requires careful planning and specific analytic approaches to leverage its full potential.

EMA studies, particularly those using mobile devices, can generate large, complex datasets that require appropriate data management and analysis techniques.

Researchers need to plan for data cleaning, handling of missing data, and using statistical methods, such as multilevel modeling (also known as hierarchical linear modeling or mixed-effects modeling), to account for the hierarchical structure of EMA data.

  • Nested Structure: ESM studies yield data where repeated observations (Level 1) are nested within participants (Level 2). This means responses from the same individual are not independent, violating a core assumption of traditional statistical methods like ANOVA or simple regression.
  • Unequal Participation: Participants often contribute different numbers of data points due to variations in compliance, missed signals, or study design. This unequal participation further complicates analysis and necessitates approaches that can accommodate varying numbers of observations per participant.

Multilevel models explicitly account for this nested structure, allowing researchers to partition variance at both the within-person (Level 1) and between-person (Level 2) levels.

This enables accurate estimation of effects and avoids the misleading results that can occur when using traditional statistical methods that assume independence.

Various statistical software packages are available for multilevel modeling, including HLM, Mplus, R, and Stata.
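Alongside HLM, Mplus, R, and Stata, Python's statsmodels library fits the same class of models. The sketch below uses simulated data with illustrative variable names: repeated mood/stress observations nested within participants, analysed with a random-intercept model so that each person's baseline mood is allowed to differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated EMA dataset: observations (Level 1) nested in participants
# (Level 2). The true within-person slope of mood on stress is -0.3.
rng = np.random.default_rng(0)
rows = []
for pid in range(30):                      # 30 participants
    baseline = rng.normal(5, 1)            # person-specific intercept
    for _ in range(20):                    # 20 prompts each
        stress = rng.uniform(0, 10)
        mood = baseline - 0.3 * stress + rng.normal(0, 1)
        rows.append({"participant": pid, "stress": stress, "mood": mood})
df = pd.DataFrame(rows)

# Random-intercept multilevel model: responses from the same person are
# not independent, so intercepts vary by the `participant` grouping factor.
model = smf.mixedlm("mood ~ stress", df, groups=df["participant"]).fit()
print(model.params["stress"])  # estimate close to the simulated -0.3 slope
```

Fitting the same data with ordinary regression would ignore the nesting and understate the uncertainty, which is exactly the failure mode the multilevel approach avoids.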

Time-Based Sampling

Time-based sampling in Ecological Momentary Assessment (EMA) or the Experience Sampling Method (ESM) involves collecting data from participants at specific times throughout the day, as opposed to event-based sampling, which collects data when a particular event occurs.

The goal is to obtain a representative sample of a participant’s experiences over time.

There are three main types of time-based sampling schedules:

1. Fixed-interval schedules

Participants are prompted to report on their experiences at predetermined times. This could involve receiving a signal to complete a survey every hour, twice a day (e.g., morning and evening), or once a day.

Fixed-interval schedules allow researchers to study experiences that unfold predictably over time.

For instance, a study on mood changes throughout the workday might use a fixed-interval schedule to capture variations in mood at specific points during work hours.

2. Random-interval schedules

Participants are prompted to report their experiences at random intervals or based on a more complex time-based pattern.

Random interval sampling aims to minimize retrospective recall bias by obtaining a more random and representative sample of a participant’s day.

For example, a study investigating the relationship between stress and eating habits might use a random-interval schedule to prompt participants to report their stress levels and food intake at unpredictable times throughout the day, capturing a broader range of daily experiences.

3. Time-stratified sampling

This strategy offers a more structured approach to random sampling. It involves dividing the total sampling time frame into smaller, predefined time blocks or strata, and then randomly selecting assessment times within each time block.

This method ensures a more even distribution of assessments across different times of the day while still maintaining some unpredictability.

Here’s how time-stratified sampling works:

  • Define the time blocks: The researcher first divides the total sampling window, such as a day or a specific period of the day, into smaller time blocks. For example, a study investigating mood fluctuations throughout the day might divide the day into two-hour blocks.
  • Randomize within blocks: Within each time block, the assessment times are randomly selected. For instance, in the mood study example, the researcher might program the EMA device to prompt participants for an assessment at a random time within each two-hour block.
  • Ensure coverage: By randomizing within blocks, researchers can ensure that each part of the day or the sampling window is represented in the data, as at least one assessment will occur within each block. This helps reduce the likelihood of missing data for specific times of the day and provides a more comprehensive view of the participant’s experiences.

For example, a researcher studying the association between stress and alcohol cravings among college students might use a time-stratified sampling approach with the following parameters:

  • Sampling window: 8:00 PM to 12:00 AM (4 hours) for seven consecutive days.
  • Time blocks: Two-hour blocks (8:00 PM – 10:00 PM and 10:00 PM – 12:00 AM).
  • Randomization: Participants are prompted twice daily, once at a random time within each two-hour block.
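The worked example above can be sketched in a few lines of Python (function and variable names are illustrative): each evening's 8:00 PM to 12:00 AM window is split into two-hour strata, and one prompt time is drawn at random within each block.

```python
import random
from datetime import datetime, timedelta

def stratified_prompts(window_start, block_hours, n_blocks, rng):
    """Draw one random prompt time within each block (stratum)."""
    prompts = []
    for b in range(n_blocks):
        block_start = window_start + timedelta(hours=b * block_hours)
        offset = timedelta(minutes=rng.uniform(0, block_hours * 60))
        prompts.append(block_start + offset)
    return prompts

rng = random.Random(42)
# The example's parameters: 8:00 PM - 12:00 AM, two 2-hour blocks, 7 days.
schedule = []
for day in range(7):
    window_start = datetime(2024, 9, 2 + day, 20, 0)  # 8:00 PM
    schedule.extend(stratified_prompts(window_start, block_hours=2,
                                       n_blocks=2, rng=rng))

print(len(schedule))  # 14 prompts: two per evening across seven days
```

Randomising within blocks, rather than across the whole window, is what guarantees coverage: every evening contributes one early-block and one late-block assessment while prompt times remain unpredictable to the participant.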

Considerations for Time-Based Sampling:

  • Frequency and timing of assessments: The frequency and timing of assessment prompts should be carefully considered based on the research question and the nature of the phenomenon being studied. For example, studying highly variable states like anxiety might require more frequent assessments compared to studying more stable states. Studies have used assessment frequencies ranging from every 30 minutes to daily assessments, with the choice dependent on the research question and participant burden.
  • Participant burden: Frequent assessments, especially at inconvenient times, can lead to participant burden and potentially affect compliance. Researchers should carefully balance the need for frequent data collection with the potential impact on participants’ daily lives.
  • Reactivity: Participants might adjust their behavior or experiences in anticipation of the prompts, especially with fixed-interval schedules. This reactivity can be mitigated to some extent by using random-interval schedules.
  • Data analysis: Time-based sampling designs require appropriate statistical methods for analyzing data collected at multiple time points, with multilevel modeling being a commonly used approach. The choice of statistical analysis should account for the nested structure of the data (i.e., multiple assessments within participants).

Event-Based Sampling

Event-based sampling, also known as event-contingent sampling, requires participants to complete an assessment each time a predefined event occurs.

This event could be an external event (e.g., a social interaction) or an internal event (e.g., a sudden surge of anxiety).

For example, instructing participants to record details about every cigarette they smoke, including time, location, mood, and social context.

Event-based protocols offer a valuable tool for researchers interested in gaining a deeper understanding of how specific events are experienced and the factors that influence them.

Research Questions

Event-based sampling designs are particularly well-suited for studying specific events or behaviors in people’s daily lives.

Questions focusing on the frequency and nature of events, such as:

  • Social interactions exceeding a certain duration,
  • Conflicts or disagreements with colleagues or family members,
  • Instances of craving or substance use,
  • Panic attacks or other anxiety-provoking situations,
  • Headaches or other pain episodes.

Questions about how specific events are experienced:

  • What emotions are experienced during and after a social interaction?
  • What are the typical antecedents and consequences of a conflict?
  • What coping strategies are employed during a panic attack?

Questions exploring relationships between events and other variables:

  • Does engaging in a challenging work task lead to increased stress or fatigue?
  • Does receiving social support during a stressful event buffer against negative emotions?
  • Does engaging in a pleasant activity, like listening to music, improve mood?
  • Do frequent conflicts at work predict increased burnout or decreased job satisfaction?
  • Does experiencing daily positive events, such as connecting with loved ones, contribute to higher levels of happiness and life satisfaction?

Here are some key characteristics and considerations for event-based protocols:

  • Clear Event Definition: Event-based protocols require a clear definition of the target event to minimize ambiguity and ensure accurate recording. Researchers need to provide participants with specific instructions about what constitutes the event and when to initiate a recording. For example, in a study on smoking, researchers should specify whether a single puff constitutes a smoking event or if participants should only record instances when they smoke an entire cigarette.
  • Participant Initiation: In most cases, participants are responsible for recognizing the occurrence of the event and initiating the assessment. This assumes a certain level of awareness and willingness to interrupt their activity to record data.
  • Suitable Events: Target events work best when they are discrete (a clear beginning and end, making it easy to determine when to record), salient (noticeable enough for participants to recognize and remember to record), and fairly frequent (occurring often enough to provide sufficient data points for analysis, but not so often that recording becomes burdensome).
  • Compliance Challenges: Verifying compliance with event-based protocols can be challenging as there’s no way to ensure participants record every instance of the target event. Participants might forget, be unable to record at the moment, or choose not to report certain events.
  • Potential for Bias: The data collected through event-based protocols might be biased toward more memorable, intense, or consciously recognized events. Events that are less salient or occur during periods of distraction might be underreported.

Hybrid Sampling Designs

Hybrid sampling in EMA research combines elements of different sampling designs, such as event-based sampling, fixed-interval sampling, and random-interval sampling, to leverage the strengths of each approach and address a wider range of research questions within a single study.

This approach is particularly valuable when researchers want to capture both the general flow of daily experiences and specific events that might be infrequent or easily missed with purely time-based sampling.

Here are some common ways researchers combine sampling designs in hybrid EMA studies:

Adding a daily diary component to an experience sampling study

Researchers often enhance experience sampling studies with a daily diary component, typically administered in the evening.

While the experience sampling portion provides insights into momentary experiences at random intervals, the daily diary can assess global aspects of the day, such as overall mood, sleep quality, significant events, or reflections on the day’s experiences.

For instance, a study could use experience sampling to assess momentary stress and coping strategies throughout the day and then use a daily diary to measure participants’ overall perceived stress for that day and their use of specific coping strategies across the entire day.

This combination allows researchers to understand how momentary experiences relate to more global daily perceptions. Some studies incorporate both morning and evening diaries to capture experiences surrounding sleep and the transition into and out of the study’s focus time frame.

Incorporating event-based surveys into time-based designs

One limitation of purely random-interval sampling is that it might not adequately capture specific events of interest, especially if they are infrequent or unpredictable.

To address this, researchers can augment time-based protocols with event-based surveys, prompting participants to complete additional assessments whenever a predefined event occurs.

For example, a study on social anxiety could use random-interval sampling to assess participants’ general mood and anxiety levels throughout the day and then trigger an event-based survey immediately after each social interaction exceeding a certain duration, allowing for a more detailed examination of anxiety experiences in social contexts.

This hybrid approach provides a more comprehensive understanding of both the general experience of anxiety and the specific factors that influence it in real-life situations.

Combining time-based designs at different time scales

Researchers can utilize different time-based sampling designs to examine phenomena across different time scales.

For example, a study investigating the long-term effects of a stress-reduction intervention could incorporate weekly assessments using fixed-interval sampling to track changes in overall stress levels.

Additionally, random-interval sampling with end-of-day diaries could be employed to capture daily fluctuations in stress and coping.

Finally, a more intensive experience sampling protocol could be implemented for a shorter period before and after the intervention to assess changes in momentary stress responses.

This multi-level approach allows researchers to gain a comprehensive understanding of how the intervention affects experiences across different time frames, from daily fluctuations to weekly trends.

EMA Protocols

A protocol outlines the procedures for collecting data using the ecological momentary assessment.

It acts as a blueprint, guiding researchers in gathering real-time, in-the-moment experiences from participants in their natural environments.

These protocols differ primarily in how and when they prompt participants to record their experiences.

The optimal choice depends on aligning the protocol with the research question, participant burden considerations, technological capabilities, and the intended data analysis approach.

Example of an EMA Protocol

A study investigating the relationship between daily stress and alcohol cravings might involve the following EMA protocol:

  • Device: Participants are provided with a smartphone app.
  • Sampling: Participants receive prompts randomly five times a day between 5 p.m. and 10 p.m. for one week.
  • Questionnaire: Each questionnaire asks participants to rate their current stress level, alcohol craving intensity, and to indicate whether they are alone or with others.
  • Sensor data: The app also passively collects GPS data to determine the participant’s location at each assessment.

By analyzing the collected data, researchers could examine how stress levels fluctuate throughout the evening, whether being alone or with others influences craving intensity, and if certain locations are associated with higher cravings.
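A protocol like this is often encoded as a configuration the survey app consumes. The dictionary below is a hypothetical sketch of how the example protocol might be represented; the keys and values are illustrative and do not follow any particular EMA platform's format.

```python
# Hypothetical machine-readable version of the example protocol above.
ema_protocol = {
    "device": "smartphone_app",
    "sampling": {
        "design": "random-interval",
        "prompts_per_day": 5,
        "window": ("17:00", "22:00"),   # 5 p.m. to 10 p.m.
        "duration_days": 7,
    },
    "questionnaire": [
        {"item": "stress_level", "scale": (0, 10)},
        {"item": "craving_intensity", "scale": (0, 10)},
        {"item": "context", "options": ["alone", "with_others"]},
    ],
    "passive_sensors": ["gps"],       # collected without prompting
}

# A quick sanity check a researcher might run before deployment:
total_prompts = (ema_protocol["sampling"]["prompts_per_day"]
                 * ema_protocol["sampling"]["duration_days"])
print(total_prompts)  # 35 planned prompts over the week
```

Writing the protocol down in one structured object also makes the burden calculation explicit: 35 prompts in a week is a figure that can be weighed against the participant-burden considerations below.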

Considerations when choosing a protocol

  • Research Questions: The choice of protocol should be guided by the research questions. If the study aims to understand the general flow of experiences throughout the day, time-based protocols might be suitable. If the goal is to investigate experiences related to specific events, an event-contingent protocol might be more appropriate.
  • Participant Burden: The frequency and timing of assessments can influence participant burden. Researchers should consider the demands of their chosen protocol and balance data collection needs with participant well-being.
  • Feasibility and Technology: The chosen protocol should be feasible to implement with the available technology. For example, event-contingent sampling might require more sophisticated programming or the use of sensors to detect specific events.
  • Data Analysis: The chosen protocol will influence the type of data analysis that can be performed. Researchers should consider their analysis plan when selecting a protocol.

Potential Pitfalls

  • Participant Burden: Intensive assessment schedules can overwhelm participants. To mitigate this, researchers must find a balance between collecting sufficient data and minimizing participant burden, carefully considering the number of study days, the frequency of daily assessments (“beeps”), and the length and complexity of the surveys. Offering incentives can also encourage participation and completion.
  • Technical Issues: Researchers need to ensure the chosen technology is compatible with participants’ devices and operating systems. Signal delivery failures, such as notifications not appearing or calls going unanswered, need to be addressed, and contingency plans should be in place for system crashes or data loss.
  • Reactivity: Participants may alter their behavior or responses due to the awareness of being monitored. Researchers should be mindful of this and consider ways to minimize reactivity, such as using a less intrusive assessment schedule.
  • Response Bias: Participants may develop patterns of responding that do not reflect their true experiences (e.g., straightlining or acquiescence bias). Randomizing item order and offering a range of response options can help mitigate this.
  • Missing Data: Participants might miss assessments due to forgetfulness, inconvenience, or technical issues. Researchers should establish clear guidelines for handling missing data and consider using statistical techniques that account for missingness.
  • Selection Bias: Not everyone is equally willing or able to take part in intensive sampling. Researchers should consider factors that might influence participation, such as age, occupation, comfort with technology, and privacy concerns.
  • Ethical Considerations: Researchers must obtain informed consent, ensure data confidentiality, and address potential risks to participants’ privacy and well-being.
  • Data Analysis: Analyzing EMA data requires specialized statistical techniques, such as multilevel modeling, to account for the nested structure of the data (repeated measures within individuals). Researchers should be familiar with these techniques or collaborate with a statistician experienced in analyzing EMA data.
  • Formulating Research Questions: The dynamic nature of EMA data requires researchers to formulate specific research questions that differentiate between person-level and situation-level effects. Failure to do so can lead to ambiguous findings and misinterpretations.

By anticipating and addressing these potential pitfalls, EMA researchers can enhance the rigor, validity, and ethical soundness of their studies, contributing to a richer understanding of human experiences and behavior in everyday life.
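Several of these pitfalls (burden, technical failures, missing data) first show up as falling compliance during data collection. Below is a minimal sketch, with hypothetical data and a hypothetical cutoff, of how per-participant response rates could be computed from a beep log and checked against a pre-registered threshold:

```python
# Compute per-participant compliance from a beep log.
# Each record is (participant_id, answered?) for one scheduled beep.
# The log and the 0.5 threshold below are hypothetical illustrations.
from collections import defaultdict

def compliance_rates(beep_log):
    """Return {participant_id: fraction of scheduled beeps answered}."""
    scheduled = defaultdict(int)
    answered = defaultdict(int)
    for pid, was_answered in beep_log:
        scheduled[pid] += 1
        if was_answered:
            answered[pid] += 1
    return {pid: answered[pid] / scheduled[pid] for pid in scheduled}

def flag_low_compliance(rates, threshold=0.5):
    """Participants below a pre-registered response-rate threshold."""
    return sorted(pid for pid, r in rates.items() if r < threshold)

log = [("p1", True), ("p1", True), ("p1", False), ("p1", True),
       ("p2", True), ("p2", False), ("p2", False), ("p2", False)]
rates = compliance_rates(log)
print(rates)                       # {'p1': 0.75, 'p2': 0.25}
print(flag_low_compliance(rates))  # ['p2']
```

Running such a check daily, rather than after the study ends, lets researchers contact struggling participants while there is still time to resolve technical problems or reduce burden.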

Managing Missing Data

Missing data is an inherent challenge in experience sampling research. Unlike cross-sectional studies, where missingness might involve a few skipped items or participant dropouts, daily life studies often grapple with substantial missingness across various dimensions.

By understanding the nature and mechanisms of this missingness, researchers can make informed decisions about study design, data cleaning, and statistical analysis. Employing appropriate strategies to minimize, manage, and model missing data is crucial for enhancing the validity and reliability of EMA findings.

There are several strategies for handling missing data in EMA research, each with implications for data analysis and interpretation:
  • User-Friendly Design: Employing an intuitive and convenient survey system, as well as clear instructions and reminders, can enhance participant engagement and minimize avoidable missingness.
  • Strategic Sampling Schedule: Carefully considering the frequency and timing of assessments can reduce participant burden and improve response rates.
  • Incentivizing Participation: Appropriate incentives, such as monetary compensation or raffle entries, can motivate participants to respond consistently.
  • Detecting Random Responding: Identifying and addressing patterns of inconsistent or nonsensical responses, such as using standard deviations across items or examining responses to related items, can improve data quality.
  • Establishing Exclusion Criteria: Developing clear guidelines for excluding participants or assessment occasions ensures data integrity. This might involve setting thresholds for low response rates, identifying technical errors, or flagging suspicious response patterns.
  • Full-Information Maximum Likelihood (FIML) and Multiple Imputation: These advanced statistical techniques can handle missing data effectively, particularly in the context of multilevel modeling, which is commonly used in EMA research. These methods can provide relatively unbiased parameter estimates, even with complex missing data patterns.
  • Modeling Time: It is important to consider the role of time in EMA analyses. Depending on the research question, time can be treated as a predictor, an outcome, or incorporated into the model structure (e.g., autocorrelated residuals). In practice, however, time is often omitted, particularly in intensive, within-day EMA studies, where random sampling is assumed to capture a representative sample of daily experiences.
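The “standard deviations across items” check for random responding mentioned above can be sketched as follows (the items and the zero-variance cutoff are hypothetical illustrations):

```python
# Flag assessment occasions where a participant gave the same answer
# to every item ("straightlining"), using the within-occasion SD.
# Item values and the SD cutoff are hypothetical illustrations.
from statistics import pstdev

def straightlined(responses, sd_cutoff=0.0):
    """True if the SD across items is at or below the cutoff."""
    return pstdev(responses) <= sd_cutoff

occasions = {
    "beep_1": [4, 4, 4, 4, 4],   # identical answers -> suspicious
    "beep_2": [2, 5, 3, 4, 1],   # normal variation
}
flags = {beep: straightlined(vals) for beep, vals in occasions.items()}
print(flags)  # {'beep_1': True, 'beep_2': False}
```

In practice this check works best on sets of items where identical answers are implausible, such as a mix of positively and negatively worded mood items; a cutoff slightly above zero can also catch near-constant responding.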

Implications for Data Analysis and Interpretation:

  • Bias: Perhaps the most concerning implication of missing data is its potential to introduce bias into the findings, particularly if the missingness is systematically related to the variables under investigation. For example, if individuals experiencing high levels of stress are more likely to skip surveys, the results might underestimate the true relationship between stress and other variables.
  • Reduced Power: Missing data, especially if substantial, can reduce the study’s statistical power, making it more challenging to detect statistically significant effects. This means that real effects might be missed due to the reduced ability to discern them from random noise.
  • Interpretational Challenges: The often complex and multifaceted nature of missing data in EMA research can complicate the interpretation of findings. When the reasons behind the missingness are unclear, drawing firm conclusions about the relationships between variables becomes challenging. Researchers should be cautious in their interpretations and transparent about the limitations posed by missing data.
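The stress example above can be made concrete with a small simulation (all numbers are hypothetical): when high-stress moments are more likely to be skipped, the observed mean understates the true mean.

```python
# Illustrate missing-not-at-random bias: if high-stress moments are
# more often skipped, the observed data underestimate true stress.
# Distribution and response probabilities are hypothetical.
import random

random.seed(1)
true_stress = [random.gauss(5, 2) for _ in range(10_000)]  # ~0-10 scale

# Probability of answering a beep drops as momentary stress rises.
observed = [s for s in true_stress
            if random.random() < (1 - min(max(s, 0), 10) / 15)]

true_mean = sum(true_stress) / len(true_stress)
obs_mean = sum(observed) / len(observed)
print(round(true_mean, 2), round(obs_mean, 2))  # observed mean is lower
```

Because the mechanism linking stress to nonresponse is usually unobservable, such bias cannot simply be corrected after the fact, which is why the design-side strategies listed earlier matter so much.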

The Trade-off Between Ecological Validity and Reactivity

Ecological momentary assessment (EMA) research involves a delicate balancing act. Researchers aim for ecological validity by capturing experiences in their natural habitat, but must remain vigilant about reactivity and its potential to skew findings.

By understanding the factors that influence reactivity and strategically designing studies to mitigate it, researchers can harness the power of EMA to illuminate the nuances of human behavior and experience in the real world.

Ecological Validity: Capturing Life as It Happens

  • A primary goal of EMA is to achieve high ecological validity – the extent to which findings can be generalized to real-world settings.
  • Traditional research often relies on laboratory studies or retrospective self-reports, both of which can suffer from artificiality and recall bias.
  • EMA addresses these limitations by collecting data in participants’ natural environments, as they go about their daily lives. This in-the-moment assessment provides a more authentic window into people’s experiences and behaviors.
  • EMA is well-suited to studying phenomena that are context-dependent or influenced by situational factors.

Reactivity: The Observer Effect

  • Reactivity, a potential pitfall of EMA, refers to the phenomenon where the act of measurement itself influences the behavior or experience being studied.
  • Repeatedly prompting participants to reflect on their experiences might alter those experiences. For instance, asking individuals to track their mood multiple times a day could make them more self-aware and potentially change their emotional patterns.
  • Self-monitoring can be a component of behavior change interventions, further highlighting the potential for reactivity in EMA designs.

Navigating the Trade-off

Reactivity is not inevitable in EMA studies. Several factors can influence its likelihood:
  • Focus on behavior change: Reactivity is more likely when participants are actively trying to modify the target behavior. If the study focuses solely on observation and not on intervention, reactivity might be less of a concern.
  • Timing of recording: Recording a behavior before it occurs (e.g., asking participants if they intend to smoke in the next hour) can increase reactivity. Focusing on past behavior minimizes this risk.
  • Number of target behaviors: Assessing a single behavior repeatedly might heighten participants’ awareness and influence their actions. Studies tracking multiple behaviors or experiences are less likely to be reactive.
Researchers can employ strategies to minimize reactivity:
  • Ensuring anonymity and confidentiality: Assuring participants that their data will be kept private can reduce concerns about social desirability bias.
  • Framing the study objectives neutrally: Presenting the study goals in a way that does not imply a desired outcome can minimize participants’ attempts to control their responses.
  • Using a less intrusive assessment schedule: Reducing the frequency or duration of assessments can reduce participant burden and minimize self-awareness.

Ethical Considerations

Using intensive, repeated assessments in daily life research, while valuable for understanding human behavior in context, raises important ethical considerations.

Mitigating Participant Burden:

Participant burden refers to the effort and demands placed on participants due to the repeated nature of data collection, potentially impacting compliance and data quality.

Several strategies can be used to minimize the potential burden associated with frequent assessments:

  • Limiting survey length: Keeping surveys brief (ideally under 5-7 minutes) and using concise items is crucial.
  • Strategic sampling frequency: Finding a balance between data density and participant tolerance is key. While no definitive guidelines exist, 5-8 assessments per day might strike a reasonable balance for many studies. However, factors like survey length, study duration, and participant characteristics should guide these decisions.
  • Respecting participant time: Allowing participants to choose or adjust assessment windows (e.g., avoiding early mornings or late nights) can enhance compliance and minimize disruption.
  • “Livability functions”: Employing devices and apps that allow participants to mute or snooze notifications when necessary can prevent unwanted interruptions during sensitive situations.
  • Minimizing intrusiveness: Opting for familiar technologies (e.g., participants’ own smartphones) and user-friendly interfaces can reduce the burden of learning new systems and integrating them into daily routines.
  • Clear instructions and expectations: Providing comprehensive information about the study’s demands and procedures during the consent process and throughout data collection is essential. Anticipating common participant questions (e.g., regarding missed assessments, technical issues, study duration) and preparing clear answers in advance prevents confusion.
  • Regular check-ins: Maintaining contact with participants during the study (e.g., through emails or brief calls) can help identify and address potential issues, provide support, and reinforce engagement.
  • Transparency and feedback: Offering participants insights into the study’s goals and findings, as well as acknowledging their contributions, can foster a sense of collaboration and value.
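The “strategic sampling frequency” and “respecting participant time” points above are often implemented as stratified random signaling: the participant’s chosen waking window is divided into equal blocks and one beep is placed at random within each, with a minimum gap enforced between consecutive beeps. A minimal sketch (the window, beep count, and gap below are hypothetical choices, not recommendations):

```python
# Generate one day's random beep schedule: n beeps within a
# participant-chosen window, one per equal-width block, with a
# minimum spacing between consecutive beeps.
# Window, beep count, and minimum gap are hypothetical choices.
import random

def daily_schedule(start_min, end_min, n_beeps, min_gap, rng=random):
    """Return sorted beep times in minutes since midnight."""
    block = (end_min - start_min) / n_beeps
    times = []
    for i in range(n_beeps):
        lo = start_min + i * block
        hi = lo + block
        if times:  # respect the minimum gap from the previous beep
            lo = max(lo, times[-1] + min_gap)
        times.append(rng.uniform(lo, min(hi, end_min)))
    return [round(t) for t in times]

rng = random.Random(42)
# 6 beeps between 09:00 (540) and 22:00 (1320), at least 45 min apart
print(daily_schedule(540, 1320, 6, 45, rng))
```

Stratifying by block keeps beeps spread across the day (preserving representative sampling of experiences) while the random placement within blocks keeps them unpredictable, which helps limit anticipation and reactivity.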

Ensuring Informed Consent:

Intensive, repeated assessments pose unique ethical challenges, calling for robust informed consent procedures that go beyond traditional approaches:

  • Explicitly Addressing Burden: The consent process should clearly articulate the expected time commitment, frequency of assessments, and potential disruptions associated with study participation. Researchers should be transparent about the potential for burden and fatigue, even when using strategies to minimize them.
  • Flexibility and Control: Participants should be informed of their right to decline or reschedule assessments when necessary, without penalty. Emphasizing participant autonomy and control over their involvement is paramount.
  • Data Security and Privacy: Given the sensitive nature of data often collected in daily life research, the consent process must clearly outline data storage procedures, security measures, and plans for de-identification or anonymization to ensure participant confidentiality.
  • Addressing Reactivity Concerns: While reactivity to repeated assessments might be less prevalent than often assumed, the consent process should acknowledge this possibility and explain any measures taken to mitigate it.
  • Ongoing Dialogue: Informed consent should be viewed as an ongoing process rather than a one-time event. Researchers should create opportunities for participants to ask questions, express concerns, and receive clarification throughout the study.

Reading List

Hektner, J. M., Schmidt, J. A., & Csikszentmihalyi, M. (2007).  Experience sampling method: Measuring the quality of everyday life. Sage Publications.

Rintala, A., Wampers, M., Myin-Germeys, I., & Viechtbauer, W. (2019). Response compliance and predictors thereof in studies using the experience sampling method.  Psychological Assessment, 31 (2), 226–235.  https://doi.org/10.1037/pas0000662

Trull, T. J., & Ebner-Priemer, U. (2013). Ambulatory assessment.  Annual Review of Clinical Psychology, 9 (1), 151–176.

Van Berkel, N., Ferreira, D., & Kostakos, V. (2017). The experience sampling method on mobile devices.  ACM Computing Surveys, 50 (6), 1–40.

Examples of ESM Studies

Bylsma, L. M., Taylor-Clift, A., & Rottenberg, J. (2011). Emotional reactivity to daily events in major and minor depression.  Journal of Abnormal Psychology, 120 (1), 155–167.  https://doi.org/10.1037/a0021662

Geschwind, N., Peeters, F., Drukker, M., van Os, J., & Wichers, M. (2011). Mindfulness training increases momentary positive emotions and reward experience in adults vulnerable to depression: A randomized controlled trial.  Journal of Consulting and Clinical Psychology, 79 (5), 618–628.  https://doi.org/10.1037/a0024595

Hoorelbeke, K., Koster, E. H. W., Demeyer, I., Loeys, T., & Vanderhasselt, M.-A. (2016). Effects of cognitive control training on the dynamics of (mal)adaptive emotion regulation in daily life.  Emotion, 16 (7), 945–956.  https://doi.org/10.1037/emo0000169

Kim, S., Park, Y., & Headrick, L. (2018). Daily micro-breaks and job performance: General work engagement as a cross-level moderator.  Journal of Applied Psychology, 103 (7), 772–786.  https://doi.org/10.1037/apl0000308

Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment.  Annual Review of Clinical Psychology, 4 (1), 1–32.

Shoham, A., Goldstein, P., Oren, R., Spivak, D., & Bernstein, A. (2017). Decentering in the process of cultivating mindfulness: An experience-sampling study in time and context.  Journal of Consulting and Clinical Psychology, 85 (2), 123–134.  https://doi.org/10.1037/ccp0000154

Steger, M. F., & Frazier, P. (2005). Meaning in Life: One Link in the Chain From Religiousness to Well-Being.  Journal of Counseling Psychology, 52 (4), 574–582.  https://doi.org/10.1037/0022-0167.52.4.574

Sun, J., Harris, K., & Vazire, S. (2020). Is well-being associated with the quantity and quality of social interactions?  Journal of Personality and Social Psychology, 119 (6), 1478–1496.  https://doi.org/10.1037/pspp0000272

Sun, J., Schwartz, H. A., Son, Y., Kern, M. L., & Vazire, S. (2020). The language of well-being: Tracking fluctuations in emotion experience through everyday speech.  Journal of Personality and Social Psychology, 118 (2), 364–387.  https://doi.org/10.1037/pspp0000244

Thewissen, V., Bentall, R. P., Lecomte, T., van Os, J., & Myin-Germeys, I. (2008). Fluctuations in self-esteem and paranoia in the context of daily life.  Journal of Abnormal Psychology, 117 (1), 143–153.  https://doi.org/10.1037/0021-843X.117.1.143

Thompson, R. J., Mata, J., Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Gotlib, I. H. (2012). The everyday emotional experience of adults with major depressive disorder: Examining emotional instability, inertia, and reactivity.  Journal of Abnormal Psychology, 121 (4), 819–829.  https://doi.org/10.1037/a0027978

Van der Gucht, K., Dejonckheere, E., Erbas, Y., Takano, K., Vandemoortele, M., Maex, E., Raes, F., & Kuppens, P. (2019). An experience sampling study examining the potential impact of a mindfulness-based intervention on emotion differentiation.  Emotion, 19 (1), 123–131.  https://doi.org/10.1037/emo0000406


