A guide to critical appraisal of evidence

Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN

Ellen Fineout-Overholt is the Mary Coulter Dowdy Distinguished Professor of Nursing at the University of Texas at Tyler School of Nursing, Tyler, Tex.

The author has disclosed no financial relationships related to this article.

Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers successfully determine what is known about a clinical issue. Patient outcomes are improved when clinicians apply a body of evidence to daily practice.

How do nurses assess the quality of clinical research? This article outlines a stepwise approach to critical appraisal of research studies' worth to clinical practice: rapid critical appraisal, evaluation, synthesis, and recommendation. When critical care nurses apply a body of valid, reliable, and applicable evidence to daily practice, patient outcomes are improved.


Critical care nurses can best explain the reasoning for their clinical actions when they understand the worth of the research supporting their practices. In critical appraisal, clinicians assess the worth of research studies to clinical practice. Given that achieving improved patient outcomes is the reason patients enter the healthcare system, nurses must be confident their care techniques will reliably achieve best outcomes.

Nurses must verify that the information supporting their clinical care is valid, reliable, and applicable. Validity of research refers to the quality of research methods used, or how good of a job researchers did conducting a study. Reliability of research means similar outcomes can be achieved when the care techniques of a study are replicated by clinicians. Applicability of research means it was conducted in a similar sample to the patients for whom the findings will be applied. These three criteria determine a study's worth in clinical practice.

Appraising the worth of research requires a standardized approach. This approach applies to both quantitative research (research that deals with counting things and comparing those counts) and qualitative research (research that describes experiences and perceptions). The word critique has a negative connotation. In the past, some clinicians were taught that studies with flaws should be discarded. Today, it is important to consider all valid and reliable research informative to what we understand as best practice. Therefore, the author developed the critical appraisal methodology that enables clinicians to determine quickly which evidence is worth keeping and which must be discarded because of poor validity, reliability, or applicability.

Evidence-based practice process

The evidence-based practice (EBP) process is a seven-step problem-solving approach that begins with data gathering (see Seven steps to EBP). During daily practice, clinicians gather data supporting inquiry into a particular clinical issue (Step 0). The issue is then framed as an answerable question (Step 1) using the PICOT question format (Population of interest; Issue of interest or intervention; Comparison to the intervention; desired Outcome; and Time for the outcome to be achieved). 1 Consistently using the PICOT format helps ensure that all elements of the clinical issue are covered. Next, clinicians conduct a systematic search to gather data answering the PICOT question (Step 2). Using the PICOT framework, clinicians can systematically search multiple databases to find available studies to help determine the best practice to achieve the desired outcome for their patients. When the systematic search is completed, the work of critical appraisal begins (Step 3). The known group of valid and reliable studies that answers the PICOT question is called the body of evidence and is the foundation for best practice implementation (Step 4). Next, clinicians evaluate the integration of best evidence with clinical expertise and patient preferences and values to determine whether the outcomes in the studies are realized in practice (Step 5). Because healthcare is a community of practice, it is important that experiences with evidence implementation be shared, whether the outcome is what was expected or not. This enables critical care nurses concerned with similar care issues to better understand what has been successful and what has not (Step 6).
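
For readers comfortable with a little programming, the PICOT elements can be sketched as a simple data structure. The example below is purely illustrative; the clinical question, population, and values are hypothetical and not drawn from any study.

```python
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    """Holds the five elements of a PICOT-formatted clinical question."""
    population: str    # P: population of interest
    intervention: str  # I: issue of interest or intervention
    comparison: str    # C: comparison to the intervention
    outcome: str       # O: desired outcome
    time: str          # T: time for the outcome to be achieved

    def as_question(self) -> str:
        """Render the five elements as a single answerable question."""
        return (f"In {self.population}, how does {self.intervention} "
                f"compared with {self.comparison} affect {self.outcome} "
                f"within {self.time}?")

# Hypothetical example, loosely echoing the music therapy scenario used later
q = PICOTQuestion(
    population="critically ill infants in the NICU",
    intervention="music therapy",
    comparison="standard care without music",
    outcome="oxygen saturation (SaO2)",
    time="the first week of admission",
)
print(q.as_question())
```

Keeping the elements in named fields, rather than free text, makes it harder to forget a component of the question before the systematic search begins.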

Critical appraisal of evidence

The first phase of critical appraisal, rapid critical appraisal, begins with determining which studies will be kept in the body of evidence. All valid, reliable, and applicable studies on the topic should be included. This is accomplished using design-specific checklists with key markers of good research. When clinicians determine a study is one they want to keep (a “keeper” study) and that it belongs in the body of evidence, they move on to phase 2, evaluation. 2

In the evaluation phase, the keeper studies are put together in a table so that they can be compared as a body of evidence, rather than individual studies. This phase of critical appraisal helps clinicians identify what is already known about a clinical issue. In the third phase, synthesis, certain data that provide a snapshot of a particular aspect of the clinical issue are pulled out of the evaluation table to showcase what is known. These snapshots of information underpin clinicians' decision-making and lead to phase 4, recommendation. A recommendation is a specific statement based on the body of evidence indicating what should be done—best practice. Critical appraisal is not complete without a specific recommendation. Each of the phases is explained in more detail below.

Phase 1: Rapid critical appraisal. Rapid critical appraisal involves using two tools that help clinicians determine if a research study is worthy of keeping in the body of evidence. The first tool, General Appraisal Overview for All Studies (GAO), covers the basics of all research studies (see Elements of the General Appraisal Overview for All Studies). Sometimes, clinicians find gaps in knowledge about certain elements of research studies (for example, sampling or statistics) and need to review some content. Conducting an internet search for resources that explain how to read a research paper, such as an instructional video or step-by-step guide, can be helpful. Finding basic definitions of research methods often helps resolve identified gaps.

To accomplish the GAO, it is best to begin by finding out why the study was conducted and how it answers the PICOT question (for example, does it provide information critical care nurses want to know from the literature). If the study purpose helps answer the PICOT question, then the type of study design is evaluated. The study design is compared with the hierarchy of evidence for the type of PICOT question. The higher the design falls within the hierarchy or levels of evidence, the more confidence nurses can have in its findings, if the study was conducted well. 3,4 Next, find out what the researchers wanted to learn from their study. These are called the research questions or hypotheses. Research questions are just what they imply: insufficient information from theories or the literature is available to guide an educated guess, so a question is asked. Hypotheses are reasonable expectations guided by understanding from theory and other research that predict what will be found when the research is conducted. The research questions or hypotheses provide the purpose of the study.
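
The idea of ranking designs can be made concrete with a lookup table. The sketch below uses a common seven-level hierarchy for intervention questions, adapted from Melnyk and Fineout-Overholt's levels; treat the exact wording and numbering as an illustrative assumption rather than a definitive scale.

```python
# A common seven-level hierarchy of evidence for intervention (therapy)
# questions; Level 1 carries the most confidence. The labels below are an
# illustrative simplification, not an authoritative scale.
EVIDENCE_LEVELS = {
    "systematic review of RCTs": 1,
    "randomized controlled trial": 2,
    "controlled trial without randomization": 3,
    "case-control or cohort study": 4,
    "systematic review of descriptive/qualitative studies": 5,
    "single descriptive or qualitative study": 6,
    "expert opinion": 7,
}

def compare_designs(design_a: str, design_b: str) -> str:
    """Return the design that sits higher in the hierarchy (lower level number)."""
    return min(design_a, design_b, key=lambda d: EVIDENCE_LEVELS[d])

print(compare_designs("randomized controlled trial", "expert opinion"))
```

Remember the caveat from the text: a design high in the hierarchy earns confidence only if the study was conducted well.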

Next, the sample size is evaluated. Expectations of sample size exist for every study design. As a rule, quantitative study designs operate best with a sample large enough to establish that observed relationships are unlikely to have occurred by chance. In general, the more participants in a study, the more confidence in the findings. Qualitative designs operate best with fewer people in the sample because these designs represent a deeper dive into the understanding or experience of each person in the study. 5 It is always important to describe the sample, as clinicians need to know if the study sample resembles their patients. It is equally important to identify the major variables in the study and how they are defined because this helps clinicians best understand what the study is about.

The final step in the GAO is to consider the analyses that answer the study research questions or confirm the study hypothesis. This is another opportunity for clinicians to learn, as learning about statistics in healthcare education has traditionally focused on conducting statistical tests as opposed to interpreting statistical tests. Understanding what the statistics indicate about the study findings is an imperative of critical appraisal of quantitative evidence.
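
To illustrate interpreting rather than merely running a statistic, the sketch below computes a pooled two-sample t-test in plain Python on made-up outcome scores and compares |t| with the two-tailed critical value for 18 degrees of freedom at the 0.05 level (about 2.101, from a standard t table). All values are illustrative assumptions, not data from any study.

```python
import math
from statistics import mean, variance

def pooled_t_test(group_a, group_b):
    """Two-sample t-test with pooled variance; returns (t, degrees of freedom)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    t = (mean(group_a) - mean(group_b)) / se
    return t, na + nb - 2

# Made-up outcome scores for two hypothetical study groups
control   = [10, 12, 11, 13, 12, 11, 10, 12, 13, 11]
treatment = [15, 17, 16, 18, 17, 16, 15, 17, 18, 16]

t, df = pooled_t_test(treatment, control)
T_CRIT = 2.101  # two-tailed critical value for df=18, alpha=0.05 (t table)
print(f"t({df}) = {t:.2f}; significant at the 0.05 level: {abs(t) > T_CRIT}")
```

Interpreting the output means asking what the group difference implies for patients, not just whether |t| crossed the threshold.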

The second tool is one of a variety of rapid critical appraisal checklists that speak to the validity, reliability, and applicability of specific study designs, available from several sources (see Critical appraisal resources). When choosing a checklist to implement with a group of critical care nurses, it is important to verify that the checklist is complete and simple to use. Be sure to check that the checklist answers three key questions. The first question is: Are the results of the study valid? Related subquestions should help nurses discern if certain markers of good research design are present within the study. For example, identifying that study participants were randomly assigned to study groups is an essential marker of good research for a randomized controlled trial. Checking these essential markers helps clinicians quickly review a study to check off these important requirements. Clinical judgment is required when the study lacks any of the identified quality markers. Clinicians must discern whether the absence of any of the essential markers negates the usefulness of the study findings. 6-9


The second question is: What are the study results? This is answered by reviewing whether the study found what it was expecting to and if those findings were meaningful to clinical practice. Basic knowledge of how to interpret statistics is important for understanding quantitative studies, and basic knowledge of qualitative analysis greatly facilitates understanding those results. 6-9

The third question is: Are the results applicable to my patients? Answering this question involves consideration of the feasibility of implementing the study findings into the clinicians' environment as well as any contraindication within the clinicians' patient populations. Consider issues such as organizational politics, financial feasibility, and patient preferences. 6-9

When these questions have been answered, clinicians must decide whether to keep the particular study in the body of evidence. Once the final group of keeper studies is identified, clinicians are ready to move into the next phase of critical appraisal: evaluation. 6-9

Phase 2: Evaluation. The goal of evaluation is to determine how studies within the body of evidence agree or disagree by identifying common patterns of information across studies. For example, an evaluator may compare whether the same intervention is used or if the outcomes are measured in the same way across all studies. A useful tool to help clinicians accomplish this is an evaluation table. This table serves two purposes: first, it enables clinicians to extract data from the studies and place the information in one table for easy comparison with other studies; and second, it eliminates the need for further searching through piles of periodicals for the information. (See Bonus Content: Evaluation table headings.) Although the information for each of the columns may not be what clinicians consider as part of their daily work, the information is important for them to understand about the body of evidence so that they can explain the patterns of agreement or disagreement they identify across studies. Further, the in-depth understanding of the body of evidence from the evaluation table helps with discussing the relevant clinical issue to facilitate best practice. Their discussion comes from a place of knowledge and experience, which affords the most confidence. The patterns and in-depth understanding are what lead to the synthesis phase of critical appraisal.

The key to a successful evaluation table is simplicity. Entering data into the table in a simple, consistent manner offers more opportunity for comparing studies. 6-9 For example, using abbreviations rather than complete sentences in all columns except the final one allows for ease of comparison. An example might be the dependent variable of depression defined as “feelings of severe despondency and dejection” in one study and as “feeling sad and lonely” in another study. 10 Because these are two different definitions, they need to be treated as different dependent variables. Clinicians must use their clinical judgment to give these different dependent variables different names and abbreviations and to note how the distinction affects comparison across studies.


Sample and theoretical or conceptual underpinnings are important to understanding how studies compare. Similar samples and settings across studies increase agreement. Several studies with the same conceptual framework increase the likelihood of common independent and dependent variables. The findings of a study depend on the analyses conducted. That is why an analysis column is dedicated to recording the kind of analysis used (for example, the name of the statistical analyses for quantitative studies). Only statistics that help answer the clinical question belong in this column. The findings column must have a result for each of the analyses listed, recorded as the actual results, not in words. For example, if a clinician lists a t-test as a statistic in the analysis column, the findings column should contain the t-value reflecting whether the groups differ, as well as the probability (P-value or confidence interval) reflecting statistical significance. The explanation of these results goes in the last column, which describes the worth of the research to practice. This column is much more flexible and contains other information, such as the level of evidence, the study's strengths and limitations, any caveats about the methodology, or other aspects of the study that would be helpful to its use in practice. The final piece of information in this column is a recommendation for how this study would be used in practice. Each of the studies in the body of evidence that addresses the clinical question is placed in one evaluation table to facilitate comparison across the studies. This comparison sets the stage for synthesis.
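
One lightweight way to keep an evaluation table simple and comparable is to store each keeper study as a record with identical fields, so any column can be read across studies at a glance. The studies, abbreviations, and values below are entirely hypothetical.

```python
# Each keeper study becomes one row with the same columns, so the body of
# evidence can be compared side by side. All entries are hypothetical.
evaluation_table = [
    {"study": "Smith 2019", "design": "RCT",    "n": 60, "intervention": "MT",
     "outcome": "SaO2", "analysis": "t-test", "finding": "t(58)=2.40, P=.02"},
    {"study": "Jones 2020", "design": "RCT",    "n": 45, "intervention": "MT",
     "outcome": "SaO2", "analysis": "t-test", "finding": "t(43)=2.10, P=.04"},
    {"study": "Lee 2021",   "design": "cohort", "n": 30, "intervention": "MT",
     "outcome": "RR",   "analysis": "t-test", "finding": "t(28)=1.20, P=.24"},
]

def column(table, name):
    """Pull one column out of the table for quick cross-study comparison."""
    return [row[name] for row in table]

print(column(evaluation_table, "design"))  # ['RCT', 'RCT', 'cohort']
```

Reading the "design" or "finding" column in isolation is exactly the kind of pattern-spotting the evaluation phase calls for, and it feeds directly into synthesis.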

Phase 3: Synthesis. In the synthesis phase, clinicians pull key information out of the evaluation table to produce a snapshot of the body of evidence. A table also is used here to feature what is known and to help all those viewing the synthesis table come to the same conclusion. A hypothetical example table included here demonstrates that a music therapy intervention is effective in improving the outcome of oxygen saturation (SaO2) in six of the eight studies in the body of evidence that evaluated that outcome (see Sample synthesis table: Impact on outcomes). Simply using arrows to indicate effect offers readers a collective view of the agreement across studies that prompts action. Action may be to change practice, affirm current practice, or conduct research to strengthen the body of evidence by collaborating with nurse scientists.

When synthesizing evidence, at least two synthesis tables are recommended: a level-of-evidence table, and either an impact-on-outcomes table for quantitative questions, such as therapy questions, or a relevant-themes table for “meaning” questions about human experience. (See Bonus Content: Level of evidence for intervention studies: Synthesis of type.) The sample synthesis table also demonstrates a final column labeled synthesis that indicates agreement across the studies. Of the three outcomes, the one clinicians can most reliably expect to see with music therapy is SaO2, with positive results in six out of eight studies. The second most reliable outcome would be reducing increased respiratory rate (RR). Parental engagement has the least support as a reliable outcome, with only two of five studies showing positive results. Synthesis tables make the recommendation clear to all those who are involved in caring for that patient population. Although the two synthesis tables mentioned are a great start, the evidence may require more synthesis tables to adequately explain what is known. These tables are the foundation that supports clinically meaningful recommendations.
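
The arrow-style snapshot can be mimicked with plus and minus marks, and the agreement across studies counted mechanically. The table below is hypothetical: SaO2 (6 of 8 positive) and parental engagement (2 of 5) echo the article's example, while the RR results are invented to fit its ranking.

```python
# Hypothetical synthesis table: "+" means a study showed a positive effect on
# that outcome, "-" no effect, and None means the outcome was not measured.
synthesis = {
    "SaO2":                ["+", "+", "+", "-", "+", "+", "-", "+"],
    "RR":                  ["+", "+", "-", "+", "-", None, None, None],
    "parental engagement": ["+", "-", "-", "+", "-", None, None, None],
}

def support(results):
    """Return (positive studies, studies that measured the outcome)."""
    measured = [r for r in results if r is not None]
    return measured.count("+"), len(measured)

# Rank outcomes by the share of measuring studies that found an effect
ranked = sorted(
    synthesis,
    key=lambda o: support(synthesis[o])[0] / support(synthesis[o])[1],
    reverse=True,
)
print(ranked)
```

The ranking is only a prompt for clinical judgment; the synthesis table itself, visible to the whole team, is what drives the shared conclusion.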

Phase 4: Recommendation. Recommendations are definitive statements based on what is known from the body of evidence. For example, with an intervention question, clinicians should be able to discern from the evidence whether they will reliably get the desired outcome when they deliver the intervention as it was delivered in the studies. In the sample synthesis table, the recommendation would be to implement the music therapy intervention across all settings with the population and measure SaO2 and RR, with the expectation that both would be optimally improved by the intervention. When the synthesis demonstrates that studies consistently verify an outcome occurs as a result of an intervention, but that intervention is not currently practiced, care is not best practice. Therefore, a firm recommendation to deliver the intervention and measure the appropriate outcomes must be made, which concludes critical appraisal of the evidence.

A recommendation that is off limits is conducting more research, as this is not the focus of clinicians' critical appraisal. In the case of insufficient evidence to make a recommendation for practice change, the recommendation would be to continue current practice and monitor outcomes and processes until more reliable studies can be added to the body of evidence. Researchers who use the critical appraisal process may identify gaps in knowledge, research methods, or analyses, for example, and may then recommend studies that would fill those gaps. In this way, clinicians and nurse scientists work together to build relevant, efficient bodies of evidence that guide clinical practice.

Evidence into action

Critical appraisal helps clinicians understand the literature so they can implement it. Critical care nurses have a professional and ethical responsibility to make sure their care is based on a solid foundation of available evidence that is carefully appraised using the phases outlined here. Critical appraisal allows for decision-making based on evidence that demonstrates reliable outcomes. Any other approach to the literature is likely haphazard and may lead to misguided care and unreliable outcomes. 11 Evidence translated into practice should have the desired outcomes and their measurement defined from the body of evidence. It is also imperative that all critical care nurses carefully monitor care delivery outcomes to establish that best outcomes are sustained. With the EBP paradigm as the basis for decision-making and the EBP process as the basis for addressing clinical issues, critical care nurses can improve patient, provider, and system outcomes by providing best care.

Seven steps to EBP

Step 0–A spirit of inquiry to notice internal data that indicate an opportunity for positive change.

Step 1–Ask a clinical question using the PICOT question format.

Step 2–Conduct a systematic search to find out what is already known about a clinical issue.

Step 3–Conduct a critical appraisal (rapid critical appraisal, evaluation, synthesis, and recommendation).

Step 4–Implement best practices by blending external evidence with clinician expertise and patient preferences and values.

Step 5–Evaluate evidence implementation to see if study outcomes happened in practice and if the implementation went well.

Step 6–Share project results, good or bad, with others in healthcare.

Adapted from: Steps of the evidence-based practice (EBP) process leading to high-quality healthcare and best patient outcomes. © Melnyk & Fineout-Overholt, 2017. Used with permission.

Critical appraisal resources

  • The Joanna Briggs Institute http://joannabriggs.org/research/critical-appraisal-tools.html
  • Critical Appraisal Skills Programme (CASP) www.casp-uk.net/casp-tools-checklists
  • Center for Evidence-Based Medicine www.cebm.net/critical-appraisal
  • Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. 3rd ed. Philadelphia, PA: Wolters Kluwer; 2015.

A full set of critical appraisal checklists is available in the appendices.

Bonus content!

This article includes supplementary online-exclusive material. Visit the online version of this article at www.nursingcriticalcare.com to access this content.

Keywords: critical appraisal; decision-making; evaluation of research; evidence-based practice; synthesis



Reading and critiquing a research article

Nurses use research to answer questions about their practice, solve problems, improve the quality of patient care, generate new research questions, and shape health policy. Nurses who confront questions about practice and policy need strong, high-quality, evidence-based research. Research articles in peer-reviewed journals typically undergo a rigorous review process to ensure scholarly standards are met. Nonetheless, standards vary among reviewers and journals. This article presents a framework nurses can use to read and critique a research article.

When deciding to read an article, determine if it’s about a question you have an interest in or if it can be of use in your practice. You may want to have a research article available to read and critique as you consider the following questions.

Does the title accurately describe the article?

A good title piques your interest, but typically you will not know whether it is an accurate description until you have finished reading the article. An informative title conveys the article's key concepts, methods, and variables.

Is the abstract representative of the article?

The abstract provides a brief overview of the purpose of the study, research questions, methods, results, and conclusions. This helps you decide if it’s an article you want to read. Some people use the abstract to discuss a study and never read further. This is unwise because the abstract is just a preview of the article and may be misleading.

Does the introduction make the purpose of the article clear?

A good introduction provides the basis for the article. It includes a statement of the problem, a rationale for the study, and the research questions. When a hypothesis is being tested, it should be clearly stated and include the expected results.

Is a theoretical framework described?

When a theoretical framework is used, it should inform the study and provide a rationale. The concepts of the theoretical framework should relate to the topic and serve as a basis for interpreting the results. Some research doesn’t use a theoretical framework, such as health services research, which examines issues such as access to care, healthcare costs, and healthcare delivery. Clinical research such as comparing the effectiveness of two drugs won’t include a theoretical framework.

Is the literature review relevant to the study and comprehensive? Does it include recent research?

The literature review provides a context for the study. It establishes what is, and is not, known about the research problem. Publication dates are important, but there are caveats. Most literature reviews include articles published within the last 3 to 5 years. It can take more than a year for an article to be reviewed, revised, accepted, and published, which can make some references seem outdated.

Literature reviews may include older studies to demonstrate important changes in knowledge over time. In an area of study where little or no research has been conducted, there may be only a few relevant articles that are a decade or more old. In an emerging area of study there may be no published research, in which case related research should be referenced. If you are familiar with the area of research, review the references to determine if well-known and highly regarded studies are included.

Does the methods section explain how a research question was addressed?

The methods section provides enough information to allow the study to be replicated. Components of this section indicate if the design is appropriate to answer the research question(s).

  • Did the researcher select the correct sample to answer the research questions and was the size sufficient to obtain valid results?
  • If a data collection instrument was used, how was it created and validated?
  • If any materials were used, such as written guides or equipment, were they described?
  • How were data collected?
  • Were reliability and validity accounted for?
  • Were the procedures listed in a step-by-step manner?

Independent and dependent variables should be described and terms defined. For example, if patient falls in the hospital are the dependent variable, or outcome, what are the independent variables, or factors, being investigated that may influence the rate at which falls occur? In this example, independent variables might include nurse staffing, registered nurse composition (such as education and certification), and hospital Magnet® status.

Is the analytical approach consistent with the study questions and research design?

The analytical approach relates to the study questions and research design. A quantitative study may use descriptive statistics to summarize the data and other tests, such as chi-square tests, t-tests, or regression analysis, to compare or evaluate the data. A qualitative study may use approaches such as coding, content analysis, or grounded theory analysis. A reader who is unfamiliar with the analytical approach may choose to rely on the expertise of the journal's peer reviewers, who assessed whether the analytical approach was correct.
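
To show what one such test actually computes, the sketch below works out a chi-square statistic for a hypothetical 2x2 table by hand and compares it with the critical value for 1 degree of freedom at the 0.05 level (3.841, from a standard chi-square table). All counts are invented for illustration.

```python
def chi_square(observed):
    """Chi-square statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (o - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = fell / did not fall, columns = unit A / unit B
table = [[30, 10],
         [20, 40]]
chi2 = chi_square(table)
CRIT = 3.841  # critical value for df=1, alpha=0.05 (chi-square table)
print(f"chi-square = {chi2:.2f}; significant: {chi2 > CRIT}")
```

A reader does not need to reproduce such a calculation, but seeing it once clarifies what "the chi-square was significant" means in a results section.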

Are the results presented clearly in the text and in tables and figures?

Results should be clearly summarized in the text, tables, and figures. Tables and figures are only a partial representation of the results and critical information may be only in the text. In a quantitative study, the significance of the statistical tests is important. The presentation of qualitative results should avoid interpretation, which is reserved for the discussion.

Are the limitations presented and their implications discussed?

It is essential that the limitations of the study be presented. These are the factors that explain why the results may need to be carefully interpreted, may only be generalized to certain situations, or may provide less robust results than anticipated. Examples of limitations include a low response rate to a survey, not being able to establish causality when a cross-sectional study design was used, and having key stakeholders refuse to be interviewed.

Does the discussion explain the results in relation to the theoretical framework, research questions, and significance of the study?

The discussion serves as an opportunity to explain the results in respect to the research questions and the theoretical framework. Authors use the discussion to interpret the results and explain the meaning and significance of the study. It’s also important to distinguish the study from others that preceded it and provide recommendations for future research.

Depending on the research, it may be equally important for the investigators to present the clinical and/or practical significance of the results. Relevant policy recommendations are also important. Evaluate if the recommendations are supported by the data or seem to be more of an opinion. A succinct conclusion typically completes the article.

Once you’re done reading the article, how do you decide if the research is something you want to use?

Determine the scientific merit of the study by evaluating the level and quality of the evidence. There are many scales to use, several of which can be found in the Research Toolkit on the American Nurses Association's website http://www.nursingworld.org/research-toolkit.aspx. Consider what you learned and decide whether the study is relevant to your practice, whether it answered your question, and whether you can implement the findings.

A new skill

A systematic approach to reading and critiquing a research article serves as a foundation for translating evidence into practice and policy. Every nurse can acquire this skill.

Louise Kaplan is director of the nursing program at Saint Martin’s University in Lacey, Washington. At the end of this article is a checklist for evaluating an article.


1. Critiquing the research article

  a. Title accurately describes the article
  b. Abstract summarizes the article
  c. Introduction makes the purpose clear
  d. Problem is properly introduced
  e. Purpose of the study is explained
  f. Research question(s) are clearly presented
  g. Theoretical framework informs the research
  h. Literature review is relevant, comprehensive, and includes recent research
  i. Methods section details how the research questions were addressed or hypotheses were tested
  j. Analysis is consistent with the study questions and research design
  k. Results are clearly presented and statistics clearly explained
  l. Discussion explains the results in relation to the theoretical framework, research questions, and significance to nursing
  m. Limitations are presented and their implications discussed
  n. Conclusion includes recommendations for nursing practice, future research, and policymakers

2. Determine the level and quality of the evidence using a scale (several can be found in ANA's Research Toolkit http://www.nursingworld.org/Research-Toolkit/Appraising-the-Evidence).

3. Decide if the study is applicable to your practice.



Back to Journals » Nursing: Research and Reviews » Volume 3


Conducting an article critique for a quantitative research study: perspectives for doctoral students and other novice readers


Authors Vance DE   , Talley M , Azuero A , Pearce PF , Christian BJ

Received 29 January 2013

Accepted for publication 12 March 2013

Published 22 April 2013 Volume 2013:3 Pages 67—75

DOI https://doi.org/10.2147/NRR.S43374

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

David E Vance,1 Michele Talley,1 Andres Azuero,1 Patricia F Pearce,2 Becky J Christian1

1School of Nursing, University of Alabama at Birmingham, Birmingham, AL, USA; 2Loyola University School of Nursing, New Orleans, LA, USA

Abstract: The ability to critically evaluate the merits of a quantitative design research article is a necessary skill for practitioners and researchers of all disciplines, including nursing, in order to judge the integrity and usefulness of the evidence and conclusions made in an article. In general, this skill is automatic for many practitioners and researchers who already possess a good working knowledge of research methodology, including: hypothesis development, sampling techniques, study design, testing procedures and instrumentation, data collection and data management, statistics, and interpretation of findings. For graduate students and junior faculty who have yet to master these skills, completing a formally written article critique can be a useful process to hone such skills. However, a fundamental knowledge of research methods is still needed in order to be successful. Because there are few published examples of critiques, this article provides the practical points of conducting a formally written quantitative research article critique while providing a brief example to demonstrate the principles and form.

Keywords: quantitative article critique, statistics, methodology, graduate students



Validity and reliability in quantitative studies

Volume 18, Issue 3

  • Roberta Heale 1 ,
  • Alison Twycross 2
  • 1 School of Nursing, Laurentian University , Sudbury, Ontario , Canada
  • 2 Faculty of Health and Social Care , London South Bank University , London , UK
  • Correspondence to : Dr Roberta Heale, School of Nursing, Laurentian University, Ramsey Lake Road, Sudbury, Ontario, Canada P3E2C6; rheale{at}laurentian.ca

https://doi.org/10.1136/eb-2015-102129


Evidence-based practice includes, in part, implementation of the findings of well-conducted, quality research studies, so the ability to critique quantitative research is an important skill for nurses. Consideration must be given not only to the results of the study but also to the rigour of the research. Rigour refers to the extent to which the researchers worked to enhance the quality of the studies. In quantitative research, this is achieved through measurement of validity and reliability. 1


Types of validity

The first category is content validity. This category looks at whether the instrument adequately covers all the content that it should with respect to the variable. In other words, does the instrument cover the entire domain related to the variable, or construct, it was designed to measure? In an undergraduate nursing course with instruction about public health, an examination with content validity would cover all the content in the course, with greater emphasis on the topics that had received greater coverage or more depth. A subset of content validity is face validity, where experts are asked their opinion about whether an instrument measures the concept intended.

Construct validity refers to whether you can draw inferences about test scores related to the concept being studied. For example, if a person has a high score on a survey that measures anxiety, does this person truly have a high degree of anxiety? In another example, a test of knowledge of medications that requires dosage calculations may instead be testing maths knowledge.

There are three types of evidence that can be used to demonstrate a research instrument has construct validity:

Homogeneity—meaning that the instrument measures one construct.

Convergence—this occurs when the instrument measures concepts similar to those measured by other instruments. If no similar instruments are available, this comparison will not be possible.

Theory evidence—this is evident when behaviour is similar to theoretical propositions of the construct measured in the instrument. For example, when an instrument measures anxiety, one would expect to see that participants who score high on the instrument for anxiety also demonstrate symptoms of anxiety in their day-to-day lives. 2

The final measure of validity is criterion validity . A criterion is any other instrument that measures the same variable. Correlations can be conducted to determine the extent to which the different instruments measure the same variable. Criterion validity is measured in three ways:

Convergent validity—shows that an instrument is highly correlated with instruments measuring similar variables.

Divergent validity—shows that an instrument is poorly correlated to instruments that measure different variables. In this case, for example, there should be a low correlation between an instrument that measures motivation and one that measures self-efficacy.

Predictive validity—means that the instrument should have high correlations with future criteria. 2 For example, a high self-efficacy score related to performing a task should predict the likelihood of a participant completing the task.
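Each of these criterion-validity checks comes down to computing a correlation coefficient between two sets of scores. As an illustration only (the scores and instrument names below are invented), a Pearson correlation can be computed directly:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented scores: two instruments completed by the same five participants.
motivation = [12, 15, 9, 20, 17]
self_efficacy = [30, 33, 28, 41, 38]

r = pearson(motivation, self_efficacy)
print(round(r, 2))  # prints 0.98
```

Whether a high r here counts as evidence for convergent validity or as a problem for divergent validity depends on whether the two instruments are meant to measure similar or different constructs.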

Reliability

Reliability relates to the consistency of a measure. A participant completing an instrument meant to measure motivation should have approximately the same responses each time the test is completed. Although it is not possible to give an exact calculation of reliability, an estimate of reliability can be achieved through different measures. The three attributes of reliability are outlined in table 2 . How each attribute is tested for is described below.

Attributes of reliability

Homogeneity (internal consistency) is assessed using item-to-total correlation, split-half reliability, the Kuder-Richardson coefficient, and Cronbach's α. In split-half reliability, the results of a test, or instrument, are divided in half. Correlations are calculated comparing both halves. Strong correlations indicate high reliability, while weak correlations indicate the instrument may not be reliable. The Kuder-Richardson test is a more complicated version of the split-half test. In this process the average of all possible split-half combinations is determined and a correlation between 0 and 1 is generated. This test is more accurate than the split-half test, but can only be completed on questions with two answers (eg, yes or no, 0 or 1). 3
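The split-half procedure can be sketched in a few lines. This is a minimal illustration with invented yes/no item data, using an odd/even item split and the Spearman-Brown correction (not named in the text, but standard for split-half estimates) to adjust for the halved test length:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(items):
    """items: one list of item scores per participant.
    Correlates odd-item totals with even-item totals, then applies
    the Spearman-Brown correction for the halved test length."""
    odd = [sum(row[0::2]) for row in items]
    even = [sum(row[1::2]) for row in items]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

# Invented yes/no (1/0) answers: 6 participants x 6 items.
scores = [
    [1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
]
print(round(split_half_reliability(scores), 2))  # prints 0.87
```

A strong corrected correlation like this suggests the two halves of the instrument behave consistently.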

Cronbach's α is the most commonly used test to determine the internal consistency of an instrument. In this test, the average of all correlations in every combination of split-halves is determined. Instruments with questions that have more than two responses can be used in this test. The Cronbach's α result is a number between 0 and 1. An acceptable reliability score is one that is 0.7 and higher. 1 , 3
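Cronbach's α has a simple closed form, α = k/(k−1) × (1 − Σ item variances / variance of total scores) for k items, so it is easy to compute directly. A minimal sketch with invented Likert-style responses:

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of item scores per participant.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items[0])
    item_vars = [variance([row[i] for row in items]) for i in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 1-5 Likert responses: 5 participants x 4 items.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # prints 0.93
```

For this sample, α ≈ 0.93, above the commonly cited 0.7 threshold for acceptable internal consistency.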

Stability is tested using test–retest and parallel or alternate-form reliability testing. Test–retest reliability is assessed when an instrument is given to the same participants more than once under similar circumstances. A statistical comparison is made between participants' test scores for each of the times they have completed it. This provides an indication of the reliability of the instrument. Parallel-form reliability (or alternate-form reliability) is similar to test–retest reliability except that a different form of the original instrument is given to participants in subsequent tests. The domain, or concepts, being tested are the same in both versions of the instrument, but the wording of items is different. 2 For an instrument to demonstrate stability, there should be a high correlation between the scores each time a participant completes the test. Generally speaking, a correlation coefficient of less than 0.3 signifies a weak correlation, 0.3–0.5 is moderate, and greater than 0.5 is strong. 4
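A test–retest check is just a correlation between the two administrations, interpreted against the thresholds above (below 0.3 weak, 0.3–0.5 moderate, above 0.5 strong). A sketch with invented scores from two administrations of the same instrument:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_strength(r):
    """Bucket a correlation using the thresholds cited in the text."""
    r = abs(r)
    if r < 0.3:
        return "weak"
    if r <= 0.5:
        return "moderate"
    return "strong"

# Invented scores: same instrument given twice to five participants.
time1 = [10, 14, 8, 16, 12]
time2 = [11, 15, 9, 15, 13]

r = pearson(time1, time2)
print(correlation_strength(r))  # prints strong
```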

Equivalence is assessed through inter-rater reliability. This test includes a process for qualitatively determining the level of agreement between two or more observers. A good example of the process used in assessing inter-rater reliability is the scores of judges for a skating competition. The level of consistency across all judges in the scores given to skating participants is the measure of inter-rater reliability. An example in research is when researchers are asked to give a score for the relevancy of each item on an instrument. Consistency in their scores relates to the level of inter-rater reliability of the instrument.
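The text describes inter-rater agreement qualitatively; one common statistic for quantifying it (not named in the article) is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A sketch with invented relevancy ratings from two hypothetical reviewers:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical scores."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum((c1[c] / n) * (c2[c] / n) for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Invented relevancy ratings (1 = relevant, 0 = not) from two reviewers
# judging the same ten instrument items.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))  # prints 0.52
```

Perfect agreement gives kappa = 1; agreement no better than chance gives kappa ≈ 0.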

Determining how rigorously the issues of reliability and validity have been addressed in a study is an essential component of the critique of research, and it influences the decision about whether to implement the study findings in nursing practice. In quantitative studies, rigour is determined through an evaluation of the validity and reliability of the tools or instruments utilised in the study. A good quality research study will provide evidence of how all these factors have been addressed. This will help you to assess the validity and reliability of the research and help you decide whether or not you should apply the findings in your area of clinical practice.

References

  • Lobiondo-Wood G
  • Shuttleworth M
  • Laerd Statistics. Determining the correlation coefficient. 2013. https://statistics.laerd.com/premium/pc/pearson-correlation-in-spss-8.php


Competing interests None declared.


Step-by-step guide to critiquing research. Part 1: quantitative research

Affiliation.

  • 1 School of Nursing and Midwifery, University of Dublin, Trinity College, Dublin.
  • PMID: 17577184
  • DOI: 10.12968/bjon.2007.16.11.23681

When caring for patients, it is essential that nurses are using the current best practice. To determine what this is, nurses must be able to read research critically. But for many qualified and student nurses, the terminology used in research can be difficult to understand, thus making critical reading even more daunting. It is imperative in nursing that care has its foundations in sound research, and it is essential that all nurses have the ability to critically appraise research to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology.


British Journal of Nursing

Barker J, Linsley P, Kane R, 3rd edn. London: Sage; 2016

Ethical guidelines for educational research. 2018; https://tinyurl.com/c84jm5rt

Bowling A Research methods in health, 4th edn. Maidenhead: Open University Press/McGraw-Hill Education; 2014

Gliner JA, Morgan GA. Mahwah (NJ): Lawrence Erlbaum Associates; 2000

Critical Skills Appraisal Programme checklists. 2021; https://casp-uk.net/casp-tools-checklists

Cresswell J, 4th edn. London: Sage; 2013

Grainger A Principles of temperature monitoring. Nurs Stand. 2013; 27:(50)48-55 https://doi.org/10.7748/ns2013.08.27.50.48.e7242

Jupp V. London: Sage; 2006

Continuing professional development (CPD). 2021; http://www.hcpc-uk.org/cpd

London: NHS England; 2017 http://www.hee.nhs.uk/our-work/advanced-clinical-practice

Kennedy M, Burnett E Hand hygiene knowledge and attitudes: comparisons between student nurses. Journal of Infection Prevention. 2011; 12:(6)246-250 https://doi.org/10.1177/1757177411411124

Lindsay-Smith G, O'Sullivan G, Eime R, Harvey J, van Ufflen JGZ A mixed methods case study exploring the impact of membership of a multi-activity, multi-centre community group on the social wellbeing of older adults. BMC Geriatrics. 2018; 18 https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-018-0913-1.pdf

Morse JM, Pooler C, Vann-Ward T Awaiting diagnosis of breast cancer: strategies of enduring for preserving self. Oncology Nursing Forum. 2014; 41:(4)350-359 https://doi.org/10.1188/14.ONF.350-359

Revalidation. 2019; http://revalidation.nmc.org.uk

Parahoo K Nursing research, principles, processes and issues, 3rd edn. Basingstoke: Palgrave Macmillan; 2014

Polit DF, Beck CT Nursing research, 10th edn. Philadelphia (PA): Wolters Kluwer; 2017

Critiquing a published healthcare research paper

Angela Grainger

Nurse Lecturer/Scholarship Lead, BPP University, and editorial board member


Research is defined as a ‘systematic inquiry using orderly disciplined methods to answer questions or to solve problems' ( Polit and Beck, 2017 :743). Research requires academic discipline coupled with specific research competencies so that an appropriate study is designed and conducted, leading to the drawing of relevant conclusions relating to the explicit aim/s of the study.

For those embarking on a higher degree such as a master's, taught doctorate, or a doctor of philosophy, the relationship between research, knowledge production and knowledge utilisation becomes clear during their research tuition and guidance from their research supervisor. But why should other busy practitioners juggling a work/home life balance find time to be interested in healthcare research? The answer lies in the relationship between the outcomes of research and its relationship to the determination of evidence-based practice (EBP).

The Health and Care Professions Council (HCPC) and the Nursing and Midwifery Council (NMC) require registered practitioners to keep their knowledge and skills up to date. This requirement incorporates being aware of the current EBP relevant to the registrant's field of practice, and to consider its application in relation to the decisions made in the delivery of patient care.


Nursing Research Critiques

A Model for Excellence


Karen Bauce, DNP, RN, MPA, NEA-BC

Joyce J. Fitzpatrick, PhD, MBA, RN, FAAN

This is the first resource to provide APRN students and practicing clinicians with a step-by-step guide to critically analyze evidence from research studies. As part of a profession that relies on best evidence, nurses need to be able to effectively assess research articles. Equipped with these skills, nurses will lead an informed practice and improve patient care.

With 14 qualitative and quantitative studies, chapters use previously published research articles to demonstrate the actual critique process. This text delves past outlining the elements of critique to teach by example, walking the reader through every part of a research article, from the title to the conclusion, and highlighting specific queries that need to be answered to craft a strong critique. The research articles in this book offer a broad range of clinical areas and diverse methodologies to highlight the fundamental differences between qualitative and quantitative studies, their underlying paradigms, and relative strengths and weaknesses. With a consistent, robust critiquing template, this content can easily be applied to countless additional research studies.

  • Comprises the only text to offer research critiques in nursing
  • Provides actual examples of critiques of published research papers by experienced nurse researchers and educators
  • Showcases a diverse range of research studies
  • Structures critiques consistently to enable replication of the process
  • Useful to hospitals, especially those with Magnet® certification

Contributors

Foreword Barbara Patterson, PhD, RN, ANEF

Introduction Karen Bauce and Joyce J. Fitzpatrick

Share Nursing Research Critique: A Model For Excellence

PART I: QUANTITATIVE STUDIES

1. Maternal and Paternal Knowledge and Perceptions Regarding Infant Pain in the NICU

Critique: Linda Cook, Anita Ayrandjian Volpe, and Karen Bauce

2. Cultural Competence and Psychological Empowerment Among Acute Care Nurses

Critique: Emerson E. Ea and Salena A. Gilles

3. Palauans Who Chew Betel Nut: Social Impact of Oral Disease

Critique: Anne Folte Fish

4. A Randomised Clinical Trial of the Effectiveness of Home-Based Health Care With Telemonitoring in Patients With COPD

Critique: Rebecca Witten Grizzle

5. Using Text Reminder to Improve Childhood Immunization Adherence in the Philippines

Critique: Margaret A. Harris and Karen Bauce

6. Nurse Caring Behaviors Following Implementation of a Relationship-Centered Care Professional Practice Model

Critique: Annette Peacock-Johnson and Patricia Keresztes

7. Impact of Health Care Information Technology on Nursing Practice

Critique: Elizabeth A. Madigan

8. Geriatric Nursing Home Falls: A Single Institution Cross-Sectional Study

Critique: Margaret McCarthy

9. Resilience and Professional Quality of Life Among Military Health Care Providers

Critique: Andrew P. Reimer

10. Evaluation of a Meditation Intervention to Reduce the Effects of Stressors Associated With Compassion Fatigue Among Nurses

Critique: Jacqueline Rhoads and Joyce J. Fitzpatrick

11. Patient Safety Culture and Nurse-Reported Adverse Events in Outpatient Hemodialysis Units

Critique: Julie Schexnayder, Mary A. Dolansky, and Karen Bauce

PART II: QUALITATIVE STUDIES

12. Hypertensive Black Men’s Perceptions of a Nurse Protocol for Medication Self-Administration

Critique: Deborah B. Fahs

13. Primary Care Experiences of People Who Live With Chronic Pain and Receive Opioids to Manage Pain: A Qualitative Methodology

Critique: Nadine M. Marchi

14. Older Adults’ Perceptions of Using iPads for Improving Fruit and Vegetable Intake: An Exploratory Study

Critique: Joseph D. Perazzo

15. Summary and Future Directions

Joyce J. Fitzpatrick

Karen Bauce, DNP, MPA, RN, NEA-BC, is associate dean for Graduate Online Programs and assistant clinical professor in the MSN Online Program at Sacred Heart University in Fairfield, Connecticut, where she teaches a variety of courses, including research.

Joyce J. Fitzpatrick , PhD, MBA, RN, FAAN, is Elizabeth Brooks Ford Professor of Nursing, Frances Payne Bolton School of Nursing, Case Western Reserve University, Cleveland, OH, where she was Dean from 1982 through 1997. She is also Professor, Department of Geriatrics, Mount Sinai School of Medicine, New York, NY. In 1990, Dr. Fitzpatrick received an honorary doctorate, Doctor of Humane Letters, from her alma mater, Georgetown University. In 2011 she received an honorary doctorate, Doctor of Humane Letters, from the Frontier University of Nursing. She has received numerous honors and awards; she was elected a Fellow in the American Academy of Nursing in 1981 and a Fellow in the National Academies of Practice in 1996. She received the American Journal of Nursing Book of the Year Award 18 times. Dr. Fitzpatrick is widely published in nursing and health care literature with over 300 publications. She served as co-editor of the Annual Review of Nursing Research series , vols. 1-26; she edits the journals Applied Nursing Research, Archives in Psychiatric Nursing, and Nursing Education Perspectives , the official journal of the National League for Nursing. She edited three editions of the classic Encyclopedia of Nursing Research (ENR) , and a series of nursing research digests


  • Release Date: February 16, 2018
  • Paperback / softback
  • Trim Size: 8.5in x 11in
  • ISBN: 9780826175090
  • eBook ISBN: 9780826175410


COMMENTS

  1. PDF Step-by-step guide to critiquing research. Part 1: quantitative research

    in nursing that care has its foundations in sound research and it is essential that all nurses have the ability to critically appraise research to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology. Key words: Quantitative ...

  2. How to appraise quantitative research

    Title, keywords and the authors. The title of a paper should be clear and give a good idea of the subject area. The title should not normally exceed 15 words 2 and should attract the attention of the reader. 3 The next step is to review the key words. These should provide information on both the ideas or concepts discussed in the paper and the ...

  3. PDF A Quantitative Research Critique Patricia Miller and Emily Gullena

    A Quantitative Research Critique. The purpose of this paper is to critique the research article, "The Use of Personal Digital Assistants at the Point of Care in an Undergraduate Nursing Program", published in CIN: Computers, Informatics, Nursing (Goldsworthy, Lawrence, and Goodman, 2006) ... utilizing Nieswiadomy's research critique ...

  4. Conducting an article critique for a quantitative research study

    Nursing: Research and Reviews 2013:3:67-75 ... examples of critique examples, this article provides the practical points of conducting a formally written quantitative research article ...

  5. A guide to critical appraisal of evidence : Nursing2020 Critical Care

    Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers ...

  6. Reading and critiquing a research article

    Reading and critiquing a research article. October 11, 2012. Nurses use research to answer questions about their practice, solve problems, improve the quality of patient care, generate new research questions, and shape health policy. Nurses who confront questions about practice and policy need strong, high-quality, evidence-based research.

  7. Critiquing Quantitative Research Reports: Key Points for the Beginner

    The first step in the critique process is for the reader to browse the abstract and article for an overview. During this initial review a great deal of information can be obtained. The abstract should provide a clear, concise overview of the study. During this review it should be noted if the title, problem statement, and research question (or ...

  8. PDF Framework for How to Read and Critique a Research Study

    1. Critiquing the research article a. Title - Does it accurately describe the article? b. Abstract - Is it representative of the article? c. Introduction - Does it make the purpose of the article clear? d. Statement of the problem - Is the problem properly introduced? e. Purpose of the study - Has the reason for conducting the ...

  9. Conducting an article critique for a quantitative research study

    Nursing: Research and Reviews 2013:3 ... a formally written quantitative article critique may be structured as a guide for a future critique. In this way, practical "realistic" points are provided. Balanced perspective on critiquing an article: In general, a formally written article critique is usually only conducted as a didactic exercise for graduate ...

  10. Conducting an article critique for a quantitative research study

    However, a fundamental knowledge of research methods is still needed in order to be successful. Because there are few published examples of critiques, this article provides the practical points of conducting a formally written quantitative research article critique while providing a brief example to demonstrate the principles and form.

  11. (PDF) Critiquing Nursing Research

    Defined: Critical evaluation/appraisal of research studies through using specific criteria in which the evaluator makes precise and objective judgments about the research study. The word ...

  12. A guide to critiquing a research paper. Methodological appraisal of a

    Introduction. Developing and maintaining proficiency in critiquing research has become a core skill in today's evidence-based nursing. In addition, understanding, synthesising and critiquing research are fundamental parts of all nursing curricula at both pre- and post-registration levels (NMC, 2011). This paper presents a guide, which has potential utility in both practice and when undertaking ...

  13. NUR 440 Quantitative Critique

    NUR-440 Research and Evidence Based Practice. Professor Andrea Kaluza, June 4, 2023. Quantitative Critique. Quantitative research is an important part of research due to its design. The design of quantitative data gathers information based on factual evidence, statistics, and numerical data on a specific population in the study.

  14. Validity and reliability in quantitative studies

    Evidence-based practice includes, in part, implementation of the findings of well-conducted quality research studies. So being able to critique quantitative research is an important skill for nurses. Consideration must be given not only to the results of the study but also the rigour of the research. Rigour refers to the extent to which the researchers worked to enhance the quality of the studies.

  15. Step-by-step guide to critiquing research. Part 1: quantitative

    It is imperative in nursing that care has its foundations in sound research, and it is essential that all nurses have the ability to critically appraise research to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology.

  16. British Journal of Nursing

    Critiquing a published healthcare research paper. 25 March 2021. Advanced Clinical Practice. Angela Grainger. Nurse Lecturer/Scholarship Lead, BPP University, and editorial board member. 02 March 2021. Volume 30 · Issue 6. ISSN (print): 0966-0461. ISSN (online): 2052-2819.

  17. Nursing Research Critiques

    With 14 qualitative and quantitative studies, chapters use previously published research articles to demonstrate the actual critique process. This text delves past outlining the elements of critique to teach by example, walking the reader through every part of a research article, from the title to the conclusion, and highlighting specific ...

  18. PDF A Quantitative Study Research Critique

    Quantitative research is "concerned with objectivity, tight controls over the research situation, and the ability to generalize findings" (Nieswiadomy, 2007, pg. 10). Qualitative research is "concerned with the subjective meaning of an experience to an individual" (Nieswiadomy, 2007, pg. 11). A research critique "involves a thorough ...

  19. Nursing Research Critique #1 Sample Paper

    This is an example of how a critique for a peer-reviewed quantitative article should be done. It is very detailed, as each section answers the questions ... Nursing Research Critique #1 Sample Paper. ... The theoretical framework presented in the research article states that the Nursing Education Simulation Framework aided in guiding and ...

  20. PDF Step-by-step guide to critiquing research. Part 2: qualitative research

    Research methods generate discrete ways of reasoning and distinct terminology; however, there are also many similarities within these methods. Because of this and its subjective nature, qualitative research is often regarded as more difficult to critique. Nevertheless, an evidence-based profession such as nursing cannot accept research at face value ...

  21. A Quantitative Research Critique Paper

    A Quantitative Research Critique Paper. Laura Bautista-Gomez, Nova Southeastern University, NUR 3050, Dr. Dalesandro, March 08, 2021. When researchers have a certain problem, they will use quantitative research that will enable them to come to a solution. It uses an objective method that is designed to control the question with the ultimate goal of maximizing ...

  22. Nursing Research

    This video shows how to critique an article (not including the statistical analysis). https://patheyman.com/nursing/courses

  23. Quantitative Research Critique

    Quantitative Research Critique Quantitative research is an examination of data using methods to both identify and solve problems. Empirical evidence is collected and analyzed in a systematic fashion to produce resolutions to gaps in knowledge and practice (Polit & Beck, 2018, p. 41).

  24. Critiquing Quantitative vs. Qualitative Research Articles

    Communications document from Southern New Hampshire University, 7 pages. Week 3 Assignment: Critiquing Quantitative or Qualitative Research. Overview: This week's assignment asks the student to critique examples of published research. During a professional career, one will often be presented with research on a specific topic. ...