
The 4 Types of Reliability in Research | Definitions & Examples

Published on August 8, 2019 by Fiona Middleton. Revised on June 22, 2023.

Reliability tells you how consistently a method measures something. When you apply the same method to the same sample under the same conditions, you should get the same results. If not, the method of measurement may be unreliable or bias may have crept into your research.

There are four main types of reliability. Each can be estimated by comparing different sets of results produced by the same method.

Type of reliability Measures the consistency of…
Test-retest: The same test over time.
Interrater: The same test conducted by different people.
Parallel forms: Different versions of a test which are designed to be equivalent.
Internal consistency: The individual items of a test.

Table of contents

  • Test-retest reliability
  • Interrater reliability
  • Parallel forms reliability
  • Internal consistency
  • Which type of reliability applies to my research?
  • Other interesting articles
  • Frequently asked questions about types of reliability

Test-retest reliability measures the consistency of results when you repeat the same test on the same sample at a different point in time. You use it when you are measuring something that you expect to stay constant in your sample.

Why it’s important

Many factors can influence your results at different points in time: for example, respondents might experience different moods, or external conditions might affect their ability to respond accurately.

Test-retest reliability can be used to assess how well a method resists these factors over time. The smaller the difference between the two sets of results, the higher the test-retest reliability.

How to measure it

To measure test-retest reliability, you conduct the same test on the same group of people at two different points in time. Then you calculate the correlation between the two sets of results.

Test-retest reliability example

You devise a questionnaire to measure the IQ of a group of participants (a property that is unlikely to change significantly over time). You administer the test two months apart to the same group of people, but the results are significantly different, so the test-retest reliability of the IQ questionnaire is low.

Improving test-retest reliability

  • When designing tests or questionnaires, try to formulate questions, statements, and tasks in a way that won’t be influenced by the mood or concentration of participants.
  • When planning your methods of data collection, try to minimize the influence of external factors, and make sure all samples are tested under the same conditions.
  • Remember that changes or recall bias can be expected to occur in the participants over time, and take these into account.


Interrater reliability (also called interobserver reliability) measures the degree of agreement between different people observing or assessing the same thing. You use it when data is collected by researchers assigning ratings, scores or categories to one or more variables, and it can help mitigate observer bias.

People are subjective, so different observers’ perceptions of situations and phenomena naturally differ. Reliable research aims to minimize subjectivity as much as possible so that a different researcher could replicate the same results.

When designing the scale and criteria for data collection, it’s important to make sure that different people will rate the same variable consistently with minimal bias . This is especially important when there are multiple researchers involved in data collection or analysis.

To measure interrater reliability, different researchers conduct the same measurement or observation on the same sample. Then you calculate the correlation between their different sets of results. If all the researchers give similar ratings, the test has high interrater reliability.
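For categorical ratings, agreement between two raters is often summarized with a chance-corrected statistic such as Cohen's kappa rather than a plain correlation. A minimal sketch, using hypothetical healing-stage labels from two observers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters assigned categories independently
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical wound-healing stages assigned by two observers
rater_1 = ["stage1", "stage2", "stage2", "stage3", "stage1", "stage3"]
rater_2 = ["stage1", "stage2", "stage2", "stage3", "stage2", "stage3"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # prints "kappa = 0.75"
```

Kappa ranges from below 0 (agreement worse than chance) to 1 (perfect agreement), so it is easier to interpret than raw percent agreement when some categories are much more common than others.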

Interrater reliability example

A team of researchers observe the progress of wound healing in patients. To record the stages of healing, rating scales are used, with a set of criteria to assess various aspects of wounds. The results of different researchers assessing the same set of patients are compared, and there is a strong correlation between all sets of results, so the test has high interrater reliability.

Improving interrater reliability

  • Clearly define your variables and the methods that will be used to measure them.
  • Develop detailed, objective criteria for how the variables will be rated, counted or categorized.
  • If multiple researchers are involved, ensure that they all have exactly the same information and training.

Parallel forms reliability measures the correlation between two equivalent versions of a test. You use it when you have two different assessment tools or sets of questions designed to measure the same thing.

If you want to use multiple different versions of a test (for example, to avoid respondents repeating the same answers from memory), you first need to make sure that all the sets of questions or measurements give reliable results.

The most common way to measure parallel forms reliability is to produce a large set of questions to evaluate the same thing, then divide these randomly into two question sets.

The same group of respondents answers both sets, and you calculate the correlation between the results. High correlation between the two indicates high parallel forms reliability.

Parallel forms reliability example

A set of questions is formulated to measure financial risk aversion in a group of respondents. The questions are randomly divided into two sets, and the respondents are randomly divided into two groups. Both groups take both tests: group A takes test A first, and group B takes test B first. The results of the two tests are compared, and the results are almost identical, indicating high parallel forms reliability.

Improving parallel forms reliability

  • Ensure that all questions or test items are based on the same theory and formulated to measure the same thing.

Internal consistency assesses the correlation between multiple items in a test that are intended to measure the same construct.

You can calculate internal consistency without repeating the test or involving other researchers, so it’s a good way of assessing reliability when you only have one data set.

When you devise a set of questions or ratings that will be combined into an overall score, you have to make sure that all of the items really do reflect the same thing. If responses to different items contradict one another, the test might be unreliable.

Two common methods are used to measure internal consistency.

  • Average inter-item correlation : For a set of measures designed to assess the same construct, you calculate the correlation between the results of all possible pairs of items and then calculate the average.
  • Split-half reliability : You randomly split a set of measures into two sets. After testing the entire set on the respondents, you calculate the correlation between the two sets of responses.

Internal consistency example

A group of respondents are presented with a set of statements designed to measure optimistic and pessimistic mindsets. They must rate their agreement with each statement on a scale from 1 to 5. If the test is internally consistent, an optimistic respondent should generally give high ratings to optimism indicators and low ratings to pessimism indicators. The correlation is calculated between all the responses to the “optimistic” statements, but the correlation is very weak. This suggests that the test has low internal consistency.

Improving internal consistency

  • Take care when devising questions or measures: those intended to reflect the same concept should be based on the same theory and carefully formulated.

It’s important to consider reliability when planning your research design, collecting and analyzing your data, and writing up your research. The type of reliability you should calculate depends on the type of research and your methodology.

What is my methodology? Which form of reliability is relevant?
Measuring a property that you expect to stay the same over time. Test-retest
Multiple researchers making observations or ratings about the same topic. Interrater
Using two different tests to measure the same thing. Parallel forms
Using a multi-item test where all the items are intended to measure the same variable. Internal consistency

If possible and relevant, you should statistically calculate reliability and state this alongside your results.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

You can use several tactics to minimize observer bias.

  • Use masking (blinding) to hide the purpose of your study from all observers.
  • Triangulate your data with different data collection methods or sources.
  • Use multiple observers and ensure interrater reliability.
  • Train your observers to make sure data is consistently recorded between them.
  • Standardize your observation procedures to make sure they are structured and clear.

Reproducibility and replicability are related terms.

  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Research bias affects the validity and reliability of your research findings , leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.


Middleton, F. (2023, June 22). The 4 Types of Reliability in Research | Definitions & Examples. Scribbr. Retrieved September 16, 2024, from https://www.scribbr.com/methodology/types-of-reliability/


Issues of validity and reliability in qualitative research

Volume 18, Issue 2
  • Helen Noble, School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK
  • Joanna Smith, School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
  • Correspondence to Dr Helen Noble, School of Nursing and Midwifery, Queen's University Belfast, Medical Biology Centre, 97 Lisburn Rd, Belfast BT9 7BL, UK; helen.noble{at}qub.ac.uk

https://doi.org/10.1136/eb-2015-102054


Evaluating the quality of research is essential if findings are to be utilised in practice and incorporated into care delivery. In a previous article we explored ‘bias’ across research designs and outlined strategies to minimise bias. 1 The aim of this article is to further outline rigour, or the integrity with which a study is conducted, and to ensure the credibility of findings in relation to qualitative research. Concepts such as reliability, validity and generalisability, typically associated with quantitative research, and alternative terminology will be compared in relation to their application to qualitative research. In addition, some of the strategies adopted by qualitative researchers to enhance the credibility of their research are outlined.

Are the terms reliability and validity relevant to ensuring credibility in qualitative research?

Although the tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research, there are ongoing debates about whether terms such as validity, reliability and generalisability are appropriate to evaluate qualitative research. 2–4 In the broadest context these terms are applicable, with validity referring to the integrity and application of the methods undertaken and the precision with which the findings accurately reflect the data, while reliability describes consistency within the employed analytical procedures. 4 However, if qualitative methods are inherently different from quantitative methods in terms of philosophical positions and purpose, then alternative frameworks for establishing rigour are appropriate. 3 Lincoln and Guba 5 offer alternative criteria for demonstrating rigour within qualitative research, namely truth value, consistency, neutrality and applicability. Table 1 outlines the differences in terminology and criteria used to evaluate qualitative research.


Table 1: Terminology and criteria used to evaluate the credibility of research findings

What strategies can qualitative researchers adopt to ensure the credibility of the study findings?

Unlike quantitative researchers, who apply statistical methods for establishing validity and reliability of research findings, qualitative researchers aim to design and incorporate methodological strategies to ensure the ‘trustworthiness’ of the findings. Such strategies include:

  • Accounting for personal biases which may have influenced findings; 6
  • Acknowledging biases in sampling and ongoing critical reflection of methods to ensure sufficient depth and relevance of data collection and analysis; 3
  • Meticulous record keeping, demonstrating a clear decision trail and ensuring interpretations of data are consistent and transparent; 3, 4
  • Establishing a comparison case/seeking out similarities and differences across accounts to ensure different perspectives are represented; 6, 7
  • Including rich and thick verbatim descriptions of participants’ accounts to support findings; 7
  • Demonstrating clarity in terms of thought processes during data analysis and subsequent interpretations; 3
  • Engaging with other researchers to reduce research bias; 3
  • Respondent validation: includes inviting participants to comment on the interview transcript and whether the final themes and concepts created adequately reflect the phenomena being investigated; 4
  • Data triangulation, 3, 4 whereby different methods and perspectives help produce a more comprehensive set of findings. 8, 9

Table 2 provides some specific examples of how some of these strategies were utilised to ensure rigour in a study that explored the impact of being a family carer to patients with stage 5 chronic kidney disease managed without dialysis. 10

Table 2: Strategies for enhancing the credibility of qualitative research

In summary, it is imperative that all qualitative researchers incorporate strategies to enhance the credibility of a study during research design and implementation. Although there is no universally accepted terminology and criteria used to evaluate qualitative research, we have briefly outlined some of the strategies that can enhance the credibility of study findings.

Twitter Follow Joanna Smith at @josmith175 and Helen Noble at @helnoble

Competing interests None.


  • Eur J Gen Pract
  • v.24(1); 2018

Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing

Irene Korstjens

a Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands;

Albine Moser

b Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands;

c Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands

In the course of our supervisory work over the years we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The first article provides an introduction to this series. The second article focused on context, research questions and designs. The third article focused on sampling, data collection and analysis. This fourth article addresses FAQs about trustworthiness and publishing. Quality criteria for all qualitative research are credibility, transferability, dependability, and confirmability. Reflexivity is an integral part of ensuring the transparency and quality of qualitative research. Writing a qualitative research article reflects the iterative nature of the qualitative research process: data analysis continues while writing. A qualitative research article is mostly narrative and tends to be longer than a quantitative paper, and sometimes requires a different structure. Editors essentially use the criteria: is it new, is it true, is it relevant? An effective cover letter enhances confidence in the newness, trueness and relevance, and explains why your study required a qualitative design. It provides information about the way you applied quality criteria or a checklist, and you can attach the checklist to the manuscript.

Key points on trustworthiness and publishing

  • The quality criteria for all qualitative research are credibility, transferability, dependability, and confirmability.
  • In addition, reflexivity is an integral part of ensuring the transparency and quality of qualitative research.
  • Writing a qualitative article reflects the iterative nature of the qualitative research process: data analysis continues while writing, with simultaneous fine-tuning of the findings.
  • Editors essentially use the criteria: is it new, is it true, and is it relevant?
  • An effective cover letter enhances confidence in the newness, trueness and relevance, and explains why your study required a qualitative design.

Introduction

This article is the fourth and last in a series of four articles aiming to provide practical guidance for qualitative research. In an introductory paper, we have described the objective, nature and outline of the series [ 1 ]. Part 2 of the series focused on context, research questions and design of qualitative research [ 2 ], whereas Part 3 concerned sampling, data collection and analysis [ 3 ]. In this paper Part 4, we address frequently asked questions (FAQs) about two overarching themes: trustworthiness and publishing.

Trustworthiness

What are the quality criteria for qualitative research?

The same quality criteria apply to all qualitative designs, including the ‘big three’ approaches. Quality criteria used in quantitative research, e.g. internal validity, generalizability, reliability, and objectivity, are not suitable to judge the quality of qualitative research. Qualitative researchers speak of trustworthiness, which simply poses the question ‘Can the findings be trusted?’ [ 4 ]. Several definitions and criteria of trustworthiness exist (see Box 1 ) [ 2 ], but the best-known criteria are credibility, transferability, dependability, and confirmability as defined by Lincoln and Guba [ 4 ].

Credibility: The confidence that can be placed in the truth of the research findings. Credibility establishes whether the research findings represent plausible information drawn from the participants’ original data and is a correct interpretation of the participants’ original views.

Transferability: The degree to which the results of qualitative research can be transferred to other contexts or settings with other respondents. The researcher facilitates the transferability judgment by a potential user through thick description.

Dependability: The stability of findings over time. Dependability involves participants’ evaluation of the findings, interpretation and recommendations of the study such that all are supported by the data as received from participants of the study.

Confirmability: The degree to which the findings of the research study could be confirmed by other researchers. Confirmability is concerned with establishing that data and interpretations of the findings are not figments of the inquirer’s imagination, but clearly derived from the data.

Reflexivity: The process of critical self-reflection about oneself as researcher (own biases, preferences, preconceptions), and the research relationship (relationship to the respondent, and how the relationship affects participant’s answers to questions).

Box 1: Trustworthiness: definitions of quality criteria in qualitative research. Based on Lincoln and Guba [ 4 ].

What is credibility and what strategies can be used to ensure it?

Credibility is the equivalent of internal validity in quantitative research and is concerned with the aspect of truth-value [ 4 ]. Strategies to ensure credibility are prolonged engagement, persistent observation, triangulation and member check ( Box 2 ). When you design your study, you also determine which of these strategies you will use, because not all strategies might be suitable. For example, a member check of written findings might not be possible for study participants with a low level of literacy. Let us give an example of the possible use of strategies to ensure credibility. A team of primary care researchers studied the process by which people with type 2 diabetes mellitus try to master diabetes self-management [ 6 ]. They used the grounded theory approach, and their main finding was an explanatory theory. The researchers ensured credibility by using the following strategies.

Credibility
  • Prolonged engagement: Lasting presence during observation of long interviews or long-lasting engagement in the field with participants. Investing sufficient time to become familiar with the setting and context, to test for misinformation, to build trust, and to get to know the data to get rich data.
  • Persistent observation: Identifying those characteristics and elements that are most relevant to the problem or issue under study, on which you will focus in detail.
  • Triangulation: Using different data sources, investigators and methods of data collection.
    – Data triangulation refers to using multiple data sources in time (gathering data at different times of the day or at different times in a year), space (collecting data on the same phenomenon in multiple sites or testing for cross-site consistency) and person (gathering data from different types or levels of people, e.g. individuals, their family members and clinicians).
    – Investigator triangulation is concerned with using two or more researchers to make coding, analysis and interpretation decisions.
    – Method triangulation means using multiple methods of data collection.
  • Member check: Feeding back data, analytical categories, interpretations and conclusions to members of those groups from whom the data were originally obtained. It strengthens the data, especially because researcher and respondents look at the data with different eyes.

Transferability
  • Thick description: Describing not just the behaviour and experiences, but their context as well, so that the behaviour and experiences become meaningful to an outsider.

Dependability and confirmability
  • Audit trail: Transparently describing the research steps taken from the start of a research project to the development and reporting of the findings. The records of the research path are kept throughout the study.

Reflexivity
  • Diary: Examining one’s own conceptual lens, explicit and implicit assumptions, preconceptions and values, and how these affect research decisions in all phases of qualitative studies.

Box 2: Definition of strategies to ensure trustworthiness in qualitative research. Based on Lincoln and Guba [ 4 ]; Sim and Sharp [ 5 ].

Prolonged engagement . Several distinct questions were asked regarding topics related to mastery. Participants were encouraged to support their statements with examples, and the interviewer asked follow-up questions. The researchers studied the data from their raw interview material until a theory emerged to provide them with the scope of the phenomenon under study.

Triangulation . Triangulation aims to enhance the process of qualitative research by using multiple approaches [ 7 ]. Methodological triangulation was used by gathering data by means of different data collection methods such as in-depth interviews, focus group discussions and field notes. Investigator triangulation was applied by involving several researchers as research team members, and involving them in addressing the organizational aspects of the study and the process of analysis. Data were analysed by two different researchers. The first six interviews were analysed by them independently, after which the interpretations were compared. If their interpretations differed, they discussed them until the most suitable interpretation was found, which best represented the meaning of the data. The two researchers held regular meetings during the process of analysis (after analysing every third data set). In addition, regular analytical sessions were held with the research team. Data triangulation was secured by using the various data sets that emerged throughout the analysis process: raw material, codes, concepts and theoretical saturation.

Persistent observation . Developing the codes, the concepts and the core category helped to examine the characteristics of the data. The researchers constantly read and reread the data, analysed them, theorized about them and revised the concepts accordingly. They recoded and relabelled codes, concepts and the core category. The researchers studied the data until the final theory provided the intended depth of insight.

Member check . All transcripts of the interviews and focus group discussions were sent to the participants for feedback. In addition, halfway through the study period, a meeting was held with those who had participated in either the interviews or the focus group discussions, enabling them to correct the interpretation and challenge what they perceived to be ‘wrong’ interpretations. Finally, the findings were presented to the participants in another meeting to confirm the theory.

What does transferability mean and who makes a ‘transferability judgement’?

Transferability concerns the aspect of applicability [ 4 ]. Your responsibility as a researcher is to provide a ‘thick description’ of the participants and the research process, to enable the reader to assess whether your findings are transferable to their own setting; this is the so-called transferability judgement. This implies that the reader, not you, makes the transferability judgment because you do not know their specific settings.

In the aforementioned study on self-management of diabetes, the researchers provided a rich account of descriptive data, such as the context in which the research was carried out, its setting, sample, sample size, sample strategy, demographic, socio-economic, and clinical characteristics, inclusion and exclusion criteria, interview procedure and topics, changes in interview questions based on the iterative research process, and excerpts from the interview guide.

What is the difference between dependability and confirmability and why is an audit trail needed?

Dependability includes the aspect of consistency [ 4 ]. You need to check whether the analysis process is in line with the accepted standards for a particular design. Confirmability concerns the aspect of neutrality [ 4 ]. You need to secure the inter-subjectivity of the data. The interpretation should not be based on your own particular preferences and viewpoints but needs to be grounded in the data. Here, the focus is on the interpretation process embedded in the process of analysis. The strategy needed to ensure dependability and confirmability is known as an audit trail. You are responsible for providing a complete set of notes on decisions made during the research process, research team meetings, reflective thoughts, sampling, research materials adopted, emergence of the findings and information about the data management. This enables the auditor to study the transparency of the research path.

In the aforementioned study of diabetes self-management, a university-based auditor examined the analytical process, the records and the minutes of meetings for accuracy, and assessed whether all analytical techniques of the grounded theory methodology had been used accordingly. This auditor also reviewed the analysis, i.e. the descriptive, axial and selective codes, to see whether they followed from the data (raw data, analysis notes, coding notes, process notes, and report) and grounded in the data. The auditor who performed the dependability and confirmability audit was not part of the research team but an expert in grounded theory. The audit report was shared with all members of the research team.

Why is reflexivity an important quality criterion?

As a qualitative researcher, you have to acknowledge the importance of being self-aware and reflexive about your own role in the process of collecting, analysing and interpreting the data, and about the preconceived assumptions you bring to your research [ 8 ]. Therefore, your interviews, observations, focus group discussions and all analytical data need to be supplemented with your reflexive notes. In the aforementioned study of diabetes self-management, the reflexive notes for an interview described the setting and aspects of the interview that were noted during the interview itself and while transcribing the audio tape and analysing the transcript. Reflexive notes also included the researcher’s subjective responses to the setting and the relationship with the interviewees.

How do I report my qualitative study?

The process of writing up your qualitative study reflects the iterative process of performing qualitative research. As you start your study, you make choices about the design, and as your study proceeds, you develop your design further. The same applies to writing your manuscript. First, you decide its structure, and during the process of writing, you adapt certain aspects. Moreover, while writing you are still analysing and fine-tuning your findings.

The usual structure of articles is a structured abstract with subheadings, followed by the main text, structured in sections labelled Introduction-Methods-Results-Discussion. You might apply this structure loosely, for example renaming Results as Findings, but sometimes your specific study design requires a different structure. For example, an ethnographic study might use a narrative abstract and then start by describing a specific case, or combine the Findings and Discussion sections.

A qualitative article is usually much longer (5000–7000 words) than a quantitative article, which often presents its results in tables. You might present quantified characteristics of your participants in tables or running text, and you are likely to use boxes to present your interview guide or questioning route, or an overview of the main findings in categories, subcategories and themes. Most of your article is running text, providing a balanced presentation. You provide a thick description of the participants and the context, transparently describe and reflect on your methods, and do justice to the richness of your qualitative findings in reporting, interpreting and discussing them. Thus, the Methods and Findings sections will be much longer than in a quantitative paper.

The difference between reporting quantitative and qualitative research becomes most visible in the Results section. Quantitative articles have a strict division between the Results section, which presents the evidence, and the Discussion section. In contrast, the Findings section in qualitative papers consists mostly of synthesis and interpretation, often with links to empirical data. Quantitative and qualitative researchers alike, however, need to be concise in presenting the main findings to answer the research question, and avoid distractions. Therefore, you need to make choices to provide a comprehensive and balanced representation of your findings. Your main findings may consist, for example, of interpretations, relationships and themes, and your Findings section might include the development of a theory or model, or integration with earlier research or theory. You present evidence to substantiate your analytic findings. You use quotes or citations in the text, or field notes, text excerpts or photographs in boxes to illustrate and visualize the variety and richness of the findings.

Before you start preparing your article, it is wise to examine first the journal of your choice. You need to check its guidelines for authors and recommended sources for reference style, ethics, etc., as well as recently accepted qualitative manuscripts. More and more journals also refer to quality criteria lists for reporting qualitative research, and ask you to upload the checklist with your submission. Two of these checklists are available at http://www.equator-network.org/reporting-guidelines .

How do I select a potential journal for publishing my research?

Selecting a potential journal for publishing qualitative articles is not much different from the procedure used for quantitative articles. First, you consider your potential audience and the healthcare settings, health problems, field, or research methodology you are focusing on. Next, you look for journals in the Journal Citation Index of Web of Science, consult other researchers, and study the potential journals’ aims, scopes, and author guidelines. This also enables you to find out how open these journals are to publishing qualitative research and to accepting articles with different designs, structures and lengths. If you are unsure whether the journal of your choice would accept qualitative research, you might contact the Editor-in-Chief. Lastly, you might look in your top three journals for qualitative articles and try to decide how your manuscript would fit in. The author guidelines and examples of manuscripts will support you during your writing, and your top three offer alternatives in case you need to turn to another journal.

What are the journal editors’ considerations in accepting a qualitative manuscript?

Your article should effectively present high-quality research and should adhere to the journal’s guidelines. Editors essentially use the same criteria for qualitative articles as for quantitative articles: is it new, is it true, is it relevant? However, editors may use—implicitly or explicitly—the level-of-evidence pyramid, with qualitative research positioned in the lower ranks. Moreover, many medical journal editors will be more familiar with quantitative designs than with qualitative work.

Therefore, you need to put some extra effort into your cover letter to the editor, to enhance their confidence in the newness, trueness, relevance, and quality of your work. It is of the utmost importance that you explain in your cover letter why your study required a qualitative design, and probably more words than usual. If you need to deviate from the usual structure, you have to explain why. To enhance confidence in the quality of your work, you should explain how you applied quality criteria, or refer to the checklist you used (Boxes 2 and 3). You might even attach the checklist as additional information to the manuscript. You might also request that the Editor-in-Chief invite at least one reviewer who is familiar with qualitative research.

Standards for Reporting Qualitative Research (SRQR)
  • Scope: all aspects of qualitative studies.
  • 21 items covering: title, abstract, introduction, methods, results/findings, discussion, conflicts of interest, and funding.

Consolidated Criteria for Reporting Qualitative Research (COREQ)
  • Scope: qualitative studies focusing on in-depth interviews and focus groups.
  • 32 items covering: research team and reflexivity, study design, data analysis, and reporting.

Quality criteria checklists for reporting qualitative research. Based on O’Brien et al. [ 9 ]; Tong et al. [ 10 ].

Acknowledgements

The authors wish to thank the following junior researchers who have been participating for the last few years in the so-called ‘Think tank on qualitative research’ project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the ‘novice perspective’.

Disclosure statement

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.


Internal Validity vs. External Validity in Research

What they tell us about the meaningfulness and trustworthiness of research


How do you determine whether a psychology study is trustworthy and meaningful? Two characteristics that can help you assess research findings are internal and external validity.

  • Internal validity measures how well a study is conducted (its structure) and how accurately its results reflect the studied group.
  • External validity relates to how applicable the findings are in the real world.

These two concepts help researchers gauge if the results of a research study are trustworthy and meaningful.

Internal validity:
  • Conclusions are warranted
  • Controls extraneous variables
  • Eliminates alternative explanations
  • Focuses on accuracy and strong research methods

External validity:
  • Findings can be generalized
  • Outcomes apply to practical situations
  • Results apply to the world at large
  • Results can be translated into another context

What Is Internal Validity in Research?

Internal validity is the extent to which a research study establishes a trustworthy cause-and-effect relationship. This type of validity depends largely on the study's procedures and how rigorously it is performed.

Internal validity is important because once established, it makes it possible to eliminate alternative explanations for a finding. If you implement a smoking cessation program, for instance, internal validity ensures that any improvement in the subjects is due to the treatment administered and not something else.

Internal validity is not a "yes or no" concept. Instead, we consider how confident we can be with study findings based on whether the research avoids traps that may make those findings questionable. The less chance there is for "confounding," the higher the internal validity and the more confident we can be.

Confounding refers to uncontrollable variables that come into play and can confuse the outcome of a study, making us unsure of whether we can trust that we have identified the cause-and-effect relationship.

In short, you can only be confident that a study is internally valid if you can rule out alternative explanations for the findings. Three criteria are required to assume cause and effect in a research study:

  • The cause preceded the effect in terms of time.
  • The cause and effect vary together.
  • There are no other likely explanations for the relationship observed.

Factors That Improve Internal Validity

To ensure the internal validity of a study, you want to consider aspects of the research design that will increase the likelihood that you can reject alternative hypotheses. Many factors can improve internal validity in research, including:

  • Blinding : Participants—and sometimes researchers—are unaware of what intervention they are receiving (such as using a placebo on some subjects in a medication study) to avoid having this knowledge bias their perceptions and behaviors, thus impacting the study's outcome
  • Experimental manipulation : Manipulating an independent variable in a study (for instance, giving smokers a cessation program) instead of just observing an association without conducting any intervention (examining the relationship between exercise and smoking behavior)
  • Random selection : Choosing participants at random or in a manner in which they are representative of the population that you wish to study
  • Randomization or random assignment : Randomly assigning participants to treatment and control groups, ensuring that there is no systematic bias between the research groups
  • Strict study protocol : Following specific procedures during the study so as not to introduce any unintended effects; for example, doing things differently with one group of study participants than you do with another group
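The balancing effect of random assignment can be illustrated with a short, hypothetical simulation (the age range, group sizes, seed, and tolerance below are illustrative assumptions, not drawn from the article): after shuffling and splitting a participant pool at random, a covariate such as age ends up nearly equal across groups, so it cannot systematically bias the comparison.

```python
import random
import statistics

random.seed(7)

# Hypothetical participant pool with a covariate (age) that could
# confound results if the groups were assembled non-randomly.
ages = [random.randint(18, 70) for _ in range(1000)]

# Random assignment: shuffle the pool, then split it into
# treatment and control groups of equal size.
random.shuffle(ages)
treatment, control = ages[:500], ages[500:]

# In expectation, random assignment balances the covariate across
# groups, so any remaining difference is small chance variation.
diff = statistics.mean(treatment) - statistics.mean(control)
print(f"mean age difference between groups: {diff:.2f} years")
assert abs(diff) < 5
```

With non-random assignment (say, assigning volunteers who sign up early to the treatment group), such balance is not guaranteed, which is exactly the systematic bias the bullet above warns about.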

Internal Validity Threats

Just as there are many ways to ensure internal validity, a list of potential threats should be considered when planning a study.

  • Attrition : Participants dropping out or leaving a study, which means that the results are based on a biased sample of only the people who did not choose to leave (and possibly who all have something in common, such as higher motivation)
  • Confounding : A situation in which changes in an outcome variable can be thought to have resulted from some type of outside variable not measured or manipulated in the study
  • Diffusion : The effects of one group spreading to another when the groups interact, talk with, or observe one another; this can also lead to a related issue called resentful demoralization, in which a control group tries less hard because its members resent the group they were assigned to
  • Experimenter bias : An experimenter behaving in a different way with different groups in a study, which can impact the results (and is eliminated through blinding)
  • Historical events : May influence the outcome of studies that occur over a period of time, such as a change in the political leader or a natural disaster that occurs, influencing how study participants feel and act
  • Instrumentation : This involves "priming" participants in a study in certain ways with the measures used, causing them to react in a way that is different than they would have otherwise reacted
  • Maturation : The impact of time as a variable in a study; for example, if a study takes place over a period of time in which it is possible that participants naturally change in some way (i.e., they grew older or became tired), it may be impossible to rule out whether effects seen in the study were simply due to the impact of time
  • Statistical regression : The tendency of participants selected for scoring at the extreme ends of a measure to score closer to the mean on retesting, so that an apparent change reflects regression to the mean rather than a direct effect of an intervention
  • Testing : Repeatedly testing participants using the same measures influences outcomes; for example, if you give someone the same test three times, it is likely that they will do better as they learn the test or become used to the testing process, causing them to answer differently
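The statistical-regression threat above can be made concrete with a small simulation (the normal-noise measurement model, seed, and top-10% cutoff are illustrative assumptions): selecting participants for extreme first-test scores guarantees that their retest mean drifts back toward the population mean, even with no intervention at all.

```python
import random

random.seed(42)

# Simulate test/retest scores: each person has a stable "true" ability,
# and each observed score adds independent measurement noise.
n = 10_000
true_ability = [random.gauss(0, 1) for _ in range(n)]
test1 = [t + random.gauss(0, 1) for t in true_ability]
test2 = [t + random.gauss(0, 1) for t in true_ability]

# Select the top 10% of scorers on the first test (an "extreme" group).
cutoff = sorted(test1)[int(0.9 * n)]
extreme = [(s1, s2) for s1, s2 in zip(test1, test2) if s1 >= cutoff]

mean1 = sum(s1 for s1, _ in extreme) / len(extreme)
mean2 = sum(s2 for _, s2 in extreme) / len(extreme)

# With no intervention, the extreme group's retest mean falls back
# toward the population mean of 0: pure regression to the mean.
print(f"extreme group, test 1 mean: {mean1:.2f}")
print(f"extreme group, test 2 mean: {mean2:.2f}")
assert mean2 < mean1
```

If an intervention were applied between the two tests, this built-in drop would masquerade as (or mask) a treatment effect, which is why designs that enroll participants for extreme baseline scores need a control group.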

What Is External Validity in Research?

External validity refers to how well the outcome of a research study can be expected to apply to other settings. This is important because, if external validity is established, it means that the findings can be generalizable to similar individuals or populations.

External validity addresses the question: Do the findings apply to similar people, settings, situations, and time periods?

Population validity and ecological validity are two types of external validity. Population validity refers to whether you can generalize the research outcomes to other populations or groups. Ecological validity refers to whether a study's findings can be generalized to additional situations or settings.

A related term, transferability, refers to whether results transfer to situations with similar characteristics. Transferability is the counterpart of external validity in qualitative research designs.

Factors That Improve External Validity

If you want to improve the external validity of your study, there are many ways to achieve this goal. Factors that can enhance external validity include:

  • Field experiments : Conducting a study outside the laboratory, in a natural setting
  • Inclusion and exclusion criteria : Setting criteria as to who can be involved in the research, ensuring that the population being studied is clearly defined
  • Psychological realism : Making sure participants experience the events of the study as being real by telling them a "cover story," or a different story about the aim of the study so they don't behave differently than they would in real life based on knowing what to expect or knowing the study's goal
  • Replication : Conducting the study again with different samples or in different settings to see if you get the same results; when many studies have been conducted on the same topic, a meta-analysis can also be used to determine if the effect of an independent variable can be replicated, therefore making it more reliable
  • Reprocessing or calibration : Using statistical methods to adjust for external validity issues, such as reweighting groups if a study had uneven groups for a particular characteristic (such as age)

External Validity Threats

External validity is threatened when a study does not take into account the interaction of variables in the real world. Threats to external validity include:

  • Pre- and post-test effects : When the pre- or post-test is in some way related to the effect seen in the study, such that the cause-and-effect relationship disappears without these added tests
  • Sample features : When some feature of the sample used was responsible for the effect (or partially responsible), leading to limited generalizability of the findings
  • Selection bias : Also considered a threat to internal validity, selection bias describes differences between groups in a study that may relate to the independent variable—like motivation or willingness to take part in the study, or specific demographics of individuals being more likely to take part in an online survey
  • Situational factors : Factors such as the time of day of the study, its location, noise, researcher characteristics, and the number of measures used may affect the generalizability of findings

While rigorous research methods can ensure internal validity, external validity may be limited by these methods.

Internal Validity vs. External Validity

Internal validity and external validity are two research concepts that share a few similarities while also having several differences.

Similarities

One of the similarities between internal validity and external validity is that both factors should be considered when designing a study. This is because both have implications in terms of whether the results of a study have meaning.

Both internal validity and external validity are not "either/or" concepts. Therefore, you always need to decide to what degree a study performs in terms of each type of validity.

Each of these concepts is also typically reported in research articles published in scholarly journals . This is so that other researchers can evaluate the study and make decisions about whether the results are useful and valid.

Differences

The essential difference between internal validity and external validity is that internal validity refers to the structure of a study (and its variables) while external validity refers to the universality of the results. But there are further differences between the two as well.

For instance, internal validity focuses on showing that a difference is due to the independent variable alone, whereas external validity focuses on whether the results can be translated to the world at large.

Internal validity and external validity aren't mutually exclusive. You can have a study with good internal validity that is nevertheless irrelevant to the real world. You could also conduct a field study that is highly relevant to the real world but whose results are not trustworthy in terms of knowing which variables caused the outcomes.

Examples of Validity

Perhaps the best way to understand internal validity and external validity is with examples.

Internal Validity Example

An example of a study with good internal validity would be if a researcher hypothesizes that using a particular mindfulness app will reduce negative mood. To test this hypothesis, the researcher randomly assigns a sample of participants to one of two groups: those who will use the app over a defined period and those who engage in a control task.

The researcher ensures that there is no systematic bias in how participants are assigned to the groups. They do this by blinding the research assistants so they don't know which groups the subjects are in during the experiment.

A strict study protocol is also used to outline the procedures of the study. Potential confounding variables are measured along with mood , such as the participants' socioeconomic status, gender, age, and other factors. If participants drop out of the study, their characteristics are examined to make sure there is no systematic bias in terms of who stays in.

External Validity Example

An example of a study with good external validity would be if, in the above example, the participants used the mindfulness app at home rather than in the laboratory. This shows that results appear in a real-world setting.

To further ensure external validity, the researcher clearly defines the population of interest and chooses a representative sample . They might also replicate the study's results using different technological devices.

Setting up an experiment so that it has both sound internal validity and external validity involves being mindful from the start about factors that can influence each aspect of your research.

It's best to spend extra time designing a structurally sound study that has far-reaching implications rather than to quickly rush through the design phase only to discover problems later on. Only when both internal validity and external validity are high can strong conclusions be made about your results.


By Arlin Cuncic, MA Arlin Cuncic, MA, is the author of The Anxiety Workbook and founder of the website About Social Anxiety. She has a Master's degree in clinical psychology.



Rigor or Reliability and Validity in Qualitative Research: Perspectives, Strategies, Reconceptualization, and Recommendations

Cypress, Brigitte S. EdD, RN, CCRN

Brigitte S. Cypress, EdD, RN, CCRN , is an assistant professor of nursing, Lehman College and The Graduate Center, City University of New York.

The author has disclosed that she has no significant relationships with, or financial interest in, any commercial companies pertaining to this article.

Address correspondence and reprint requests to: Brigitte S. Cypress, EdD, RN, CCRN, Lehman College and The Graduate Center, City University of New York, PO Box 2205, Pocono Summit, PA 18346 ( [email protected] ).

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site ( www.dccnjournal.com ).

Issues are still raised, even now in the 21st century, by the persistent concern with achieving rigor in qualitative research. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. This article presents the concept of rigor in qualitative research using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with the qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature over the years is explored. Recommendations are made to use the term rigor instead of trustworthiness, to reconceptualize and renew the use of the concepts of reliability and validity in qualitative research, to build strategies for ensuring rigor into the qualitative research process rather than evaluating rigor only after the inquiry, and for qualitative researchers and students alike to be proactive and take responsibility for ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students to a better conceptual grasp of the complexity of reliability and validity and their ramifications for qualitative inquiry.

Conducting a naturalistic inquiry in general is not an easy task. Qualitative studies are in many ways more complex than a traditional investigation. Quantitative research follows a structured, rigid, preset design with the methods all prescribed. In naturalistic inquiries, planning and implementation are simultaneous, and the research design can change or is emergent. Preliminary steps must be accomplished before the design is fully implemented: making initial contact and gaining entry to the site, negotiating consent, building and maintaining trust, and identifying participants. The steps of a qualitative inquiry are also repeated multiple times during the process. As the design unfolds, the elements of this design are put into place; the inquirer has minimal control and should be flexible. There is continuous reassessment and reiteration. Data collection is carried out using multiple techniques, and whatever the source may be, it is the researcher who is the sole instrument of the study and the primary mode of collecting the information. All the while during these processes, the qualitative inquirer must be concerned with rigor. 1 Appropriate activities must be conducted to ensure that rigor has been attended to throughout the research process, rather than only adhering to set criteria for rigor after the completion of the study. 1-4

Reliability and validity are 2 key aspects of all research. Researchers assert that the rigor of qualitative research equates to the concepts of reliability and validity, and that all are necessary components of quality. 5,6 However, the precise definition of quality has created debates among naturalistic inquirers. Other scholars consider different criteria to describe rigor in the qualitative research process. 7 The 2 concepts of reliability and validity have been operationalized eloquently in quantitative texts but at the same time were deemed not pertinent to qualitative inquiries in the 1990s. Meticulous attention to the reliability and validity of research studies is particularly vital in qualitative work, where the researcher's subjectivity can so readily cloud the interpretation of the data and where research findings are often questioned or viewed with skepticism by the scientific community (Brink, 1993).

This article will discuss the issue of rigor in relation to qualitative research and further illustrate the process using a phenomenological study as an exemplar based on Lincoln and Guba's 1 (1985) techniques. This approach will clarify and define some of these complex concepts. There are numerous articles about trustworthiness in the literature that are too complex, confusing, and full of jargon. Some of these published articles also discuss rigor vis-à-vis reliability and validity in a very complicated way. Rigor will be first defined followed by how “reliability and validity” should be applied to qualitative research methods during the inquiry (constructive) rather than only post hoc evaluation. Strategies to attain reliability and validity will be described including the criteria and techniques for ensuring its attainment in a study. This discussion will critically focus on the misuse or nonuse of the concept of reliability and validity in qualitative inquiries, reestablish its importance, and relate both to the concept of rigor. Reflecting on my own research experience, recommendations for the renewed use of the concept of reliability and validity in qualitative research will be presented.

RIGOR VERSUS TRUSTWORTHINESS

The rigor of qualitative research continues to be challenged even now in the 21st century: the very idea of qualitative research is open to question, as are the terms rigor and trustworthiness . It is critical to understand rigor in research. Rigor is simply defined as the quality or state of being very exact, careful, or with strict precision, 8 or the quality of being thorough and accurate. 9 The term qualitative rigor itself is an oxymoron, considering that qualitative research is a journey of explanation and discovery that does not lend itself to stiff boundaries. 10

Rigor and truth are always of concern for qualitative research. 11 Rigor has also been used to express attributes related to the qualitative research process. 12,13 Per Morse et al 4 (2002), without rigor, research is worthless, becomes fiction, and loses its utility. The authors further defined rigor as the strength of the research design and the appropriateness of the method to answer the questions. It is expected that qualitative studies be conducted with extreme rigor because of the potential for subjectivity that is inherent in this type of research. This is a more difficult task when dealing with narratives and people than with numbers and statistics. 14 Davies and Dodd 13 (2002) relate rigor to the reliability and validity of research and note that, as conceived, the concept carries an inherent quantitative bias. Several researchers have argued that reliability and validity pertain to quantitative research and are unrelated or not pertinent to qualitative inquiry because they are aligned with the positivist view. 15 It has also been suggested that a new way of looking at reliability and validity will ensure rigor in qualitative inquiry. 1,16 Following Lincoln and Guba's crucial work in the 1980s, reliability and validity were replaced with the concept of "trustworthiness." Lincoln and Guba 1 (1985) were the first to address rigor in their model of trustworthiness of qualitative research. Trustworthiness is used as the central concept in their framework to appraise the rigor of a qualitative study.

Trustworthiness is described in different ways by researchers. Trustworthiness refers to the quality, authenticity, and truthfulness of the findings of qualitative research. It relates to the degree of trust, or confidence, that readers have in the results. 14 Yin 17 (1994) describes trustworthiness as a criterion to judge the quality of a research design. Trustworthiness addresses methods that can ensure one has carried out the research process correctly. 18 Manning 19 (1997) considered trustworthiness as parallel to the empiricist concepts of internal and external validity, reliability, and objectivity. Seale 20 (1999) asserted that the trustworthiness of a research study is based on the concepts of reliability and validity. Guba 2 (1981), Guba and Lincoln 3 (1982), and Lincoln and Guba 1 (1985) refer to trustworthiness as something that evolved from 4 major concerns, on which their set of criteria was based. Trustworthiness is a goal of the study and, at the same time, something to be judged during the study and after the research is conducted. The 4 major traditional criteria are summarized into 4 questions about truth value, applicability, consistency, and neutrality. From these, they proposed 4 analogous terms within the naturalistic paradigm to replace the rationalistic terms: credibility, transferability, dependability, and confirmability. 1 For each of these 4 naturalistic terms, there are research activities or steps that the inquirer should engage in to safeguard or satisfy each of the previously mentioned criteria and thus attain trustworthiness (Supplemental Digital Content 1, https://links.lww.com/DCCN/A18 ). Lincoln and Guba 1 (1985) stated:

The criteria aid inquirers in monitoring themselves and in guiding activities in the field, as a way of determining whether or not various stages in the research are meeting standards for quality and rigor. Finally, the same criteria may be used to render ex-post facto judgments on the products of research, including reports, case studies, or proposed publications.

Standards and checklists were developed in the 1990s based on Lincoln and Guba's 1 (1985) established criteria, which were then discarded in favor of principles. 21 These standards and checklists consisted of long lists of strategies used by qualitative researchers, which were thought to cause harm because of confusion about which strategies were appropriate for certain designs or for the type of naturalistic inquiry being evaluated. Thus, researchers interpreted missing data as faults and flaws. 21 Morse 21 (2012) further claimed that these standards became the qualitative researchers' “worst enemies” and that such an approach was not appropriate. Guba and Lincoln 18 (1989) later proposed a set of guidelines for post hoc evaluation of a naturalistic inquiry to ensure trustworthiness based on the framework of naturalism and constructivism and beyond the conventional methodological ideas. The aspects of their criteria have been fundamental to the development of standards used to evaluate the quality of qualitative inquiry. 4

THE RIGOR DEBATES: TRUSTWORTHINESS OR RELIABILITY AND VALIDITY?

A research endeavor, whether quantitative or qualitative, is always evaluated for its worth and merits by peers, experts, reviewers, and readers. Does this mean that a study is differentiated as “good” or “bad”? What distinguishes a “good” inquiry from a “bad” one? For a quantitative study, this would mean determining the reliability and validity, and for qualitative inquiries, this would mean determining rigor and trustworthiness. According to Golafshani 22 (2003), if the issues of reliability, validity, trustworthiness, and rigor are meant to differentiate “good” from “bad” research, then testing and increasing the reliability, validity, trustworthiness, and rigor will be important to research in any paradigm. However, do reliability and validity in quantitative research equate totally to rigor and trustworthiness in qualitative research? There are many ways to assess the “goodness” of a naturalistic inquiry. Guba and Lincoln 18 (1989) asked, “‘What standards ought apply?’… goodness criteria like paradigms are rooted in certain assumptions. Thus, it is not appropriate to judge constructivist evaluations by positivistic criteria or standards or vice versa. To each its proper and appropriate set.”

Reliability and validity are analogues and are determined differently than in quantitative inquiry. 21 The nature and purpose of the quantitative and qualitative traditions are also so different that it is erroneous to apply the same criteria of worthiness or merit. 23,24 The qualitative researcher should not focus on quantitatively defined indicators of reliability and validity, but that does not mean that rigorous standards are inappropriate for evaluating findings. 11 Evaluation, like democracy, is a process that, to be at its best, depends on the application of enlightened and informed self-interest. 18 Agar 24 (1986), on the other hand, suggested that terms such as reliability and validity are tied to the quantitative view and do not fit the details of qualitative research. A different language is needed to fit the qualitative view. Drawing on Leininger 25 (1985), Krefting 23 (1991) asserted that addressing reliability and validity in qualitative research is such a different process that quantitative labels should not be used. The incorrect application of qualitative criteria of rigor to studies is as problematic as the application of inappropriate quantitative criteria. 23 Smith 26 (1989) argued that, for qualitative research, this means that the basis of truth or trustworthiness becomes a social agreement. He emphasizes that what is judged true or trustworthy is what we can agree, conditioned by time and place, is true or trustworthy. Validity standards in qualitative research are even more challenging because of the necessity to incorporate rigor and subjectivity, as well as creativity, into the scientific process. 27 Furthermore, Leininger 25 (1985) claimed that the issue is not whether the data are reliable or valid but how the terms reliability and validity are defined.
Aside from the debate about whether reliability and validity criteria should be applied similarly in qualitative inquiries, there is also the issue of not using the concepts at all in naturalistic studies.

Designing a naturalistic inquiry is very different from the traditional quantitative notion of design, and defining a “good” qualitative inquiry is controversial and has gone through many changes. 21 First is the confusion over the terminologies “rigor” and “trustworthiness.” Morse 28 (2015) suggested that it is time to return to the terminology of mainstream social science and to use “rigor” rather than “trustworthiness.” Debates also continue about why some qualitative researchers do not use the concepts of reliability and validity in their studies, referring instead to Lincoln and Guba's 1 (1985) criteria for trustworthiness, namely, transferability, dependability, confirmability, and credibility. Morse 28 (2015) further suggested replacing these criteria with reliability, validity, and generalizability. The importance and centrality of reliability and validity to qualitative inquiries have in some ways been disregarded even in current times. Researchers from the United Kingdom and Europe continue to disregard them, although this is less the case in North America. 4 According to Morse 21 (2012), this gives the impression that these concepts are of no concern to qualitative research. Morse 29 (1999) asked, “Is the terminology worth making a fuss about?” when Lincoln and Guba 1 (1985) described trustworthiness and reliability and validity as analogs. Morse 29 (1999) further articulated that:

To state that reliability and validity are not pertinent to qualitative inquiry places qualitative research in the realm of being not reliable and not valid. Science is concerned with rigor, and by definition, good rigorous research must be reliable and valid. If qualitative research is unreliable and invalid, then it must not be science. If it is not science, then why should it be funded, published, implemented, or taken seriously?

RELIABILITY AND VALIDITY IN QUALITATIVE RESEARCH

Reliability and validity should be taken into consideration by qualitative inquirers while designing a study, analyzing results, and judging the quality of the study, 30 but for too long, the criteria used for evaluating rigor have been applied after a study is completed, a considerably flawed tactic. 4 Morse and colleagues 4 (2002) argued that, for reliability and validity to be actively attained, strategies for ensuring rigor must be built into the qualitative research process itself, not merely proclaimed at the end of the inquiry. The authors suggest that focusing on strategies to establish rigor at the completion of the study (post hoc), rather than during the inquiry, exposes investigators to the risk of not recognizing and addressing serious threats to reliability and validity until it is too late to correct them. They further asserted that the interface between reliability and validity is important, especially for the direction of the analysis process and the development of the study itself.

Reliability

In the social sciences, the whole notion of reliability in and of itself is problematic. 31 The scientific aspect of reliability assumes that repeated measures of a phenomenon (with the same results) using objective methods establish the truth of the findings. 32-35 Merriam 36 (1995) stated that, “The more times the findings of a study can be replicated, the more stable or reliable the phenomenon is thought to be.” In other words, it is the idea of replicability, 22,34,37 repeatability, 21,22,26,30,31,36,38-40 and stability of results or observations. 25,39,41 The issue is that human behaviors and interactions are never static or the same. Measurements and observations can also be repeatedly wrong. Furthermore, researchers have argued that the concept of reliability is misleading and has no relevance in qualitative research because it is tied to the notion of a “measurement method,” as in quantitative studies. 40,42 It is a fact that quantitative research is supported by the positivist or scientific paradigm that regards the world as made up of observable, measurable facts. Qualitative research, on the other hand, produces findings not arrived at by means of statistical procedures or other means of quantification. On the basis of the constructivist paradigm, it is a naturalistic inquiry that seeks to understand phenomena in context-specific settings in which the researcher does not attempt to manipulate the phenomenon of interest. 23 If reliability is used as a criterion in qualitative research, the consequence would be that the study is judged “not good.” A thorough description of the entire research process that allows for intersubjectivity is what indicates good quality when using qualitative methodology. Reliability is based on consistency and care in the application of research practices, which are reflected in the visibility of research practices, analysis, and conclusions, reflected in an open account that remains mindful of the partiality and limits of the research findings. 13 Reliability and similar terms are presented in Supplemental Digital Content 2 (see Supplemental Digital Content 2, https://links.lww.com/DCCN/A19 ).

Validity

Validity is broadly defined as the state of being well grounded or justifiable, relevant, meaningful, logical, and conforming to accepted principles, or the quality of being sound, just, and well founded. 8 The issues surrounding the use and nature of the term validity in qualitative research are many and controversial. It is a highly debated topic in both social and educational research. 43 The traditional criteria for validity find their roots in a positivist tradition, and to an extent, positivism has been defined by a systematic theory of validity. 22 Validity is rooted in empirical conceptions such as universal laws, evidence, objectivity, truth, actuality, deduction, reason, fact, and mathematical data, to name only a few. Validity in research is concerned with the accuracy and truthfulness of scientific findings. 44 A valid study should demonstrate what actually exists and is accurate, and a valid instrument or measure should actually measure what it is supposed to measure. 5,22,29,31,42,45

Novice researchers can easily become perplexed in attempting to understand the notion of validity in qualitative inquiry. 44 There is an array of terms similar to validity in the literature, which authors equate to it, such as authenticity, goodness, adequacy, trustworthiness, verisimilitude, credibility, and plausibility. 1,45-51 Validity is not a single, fixed, or universal concept but rather a contingent construct, inescapably grounded in the processes and intentions of particular research methodologies. 39 Some qualitative researchers have argued that the term validity is not applicable to qualitative research and have related it to terms such as quality, rigor, and trustworthiness. 1,13,22,38,42,52-54 I argue that the concepts of reliability and validity are overarching constructs that can be appropriately used in both quantitative and qualitative methodologies. To validate means to investigate, to question, and to theorize, which are all activities that ensure rigor in a qualitative inquiry. For Leininger 25 (1985), the term validity in a qualitative sense means gaining knowledge and understanding of the nature (ie, the meaning, attributes, and characteristics) of the phenomenon under study. A qualitative method seeks a certain quality that is typical of a phenomenon or that makes the phenomenon different from others.

Some naturalistic inquirers agree that assuring validity is a process whereby ideals are sought through attention to specified criteria, and appropriate techniques are used to address any threats to the validity of a naturalistic inquiry. However, other researchers argue that procedures and techniques are no assurance of validity and will not necessarily produce sound data or credible conclusions. 38,48,55 Thus, some have argued that researchers should abandon the concept of validity and seek alternative criteria with which to judge their work. Criteria are the standards or rules to be upheld as ideals in qualitative research, on which a judgment or decision may be based, 4,56 whereas techniques are the methods used to diminish identified validity threats. 56 Criteria, for some researchers, are used to test the quality of the research design, whereas for others, they are the goal of the study. There is also a trend to treat standards, goals, and criteria synonymously. I concur with Morse 29 (1999) that introducing parallel terminology and criteria marginalizes qualitative inquiry from mainstream science and diminishes its scientific legitimacy. The development of alternative criteria compromises the issue of rigor. We must work toward a consensus on the criteria and terminology used in mainstream science and on how rigor is attained within a qualitative inquiry during the research process rather than at the end of the study. Despite all this, researchers have developed validity criteria and techniques over the years. A synthesis of validity criteria development is summarized in Supplemental Digital Content 3 (see Supplemental Digital Content 3, https://links.lww.com/DCCN/A20 ). The techniques for demonstrating validity are presented in Supplemental Digital Content 4 (see Supplemental Digital Content 4, https://links.lww.com/DCCN/A21 ).

Reliability and Validity as Means in Ensuring the Quality of Findings of a Phenomenological Study in Intensive Care Unit

Reliability and validity are 2 factors that any qualitative researcher should be concerned about while designing a study, analyzing results, and judging its quality. Just as the quantitative investigator must attend to the question of how external and internal validity, reliability, and objectivity will be provided for in the design, so must the naturalistic inquirer arrange for credibility, transferability, dependability, and confirmability. 1 Lincoln and Guba 1 (1985) clearly established these 4 criteria as benchmarks for quality based on the identification of 4 aspects of trustworthiness that are relevant to both quantitative and qualitative studies, which are truth value, applicability, consistency, and neutrality. Guba 2 (1981) stated, “It is to these concerns that the criteria must speak.”

Rigor of a naturalistic inquiry such as phenomenology may be operationalized using the criteria of credibility, transferability, dependability, and confirmability. This phenomenological study aimed to understand and illuminate the meaning of the lived experiences of patients, their family members, and the nurses during critical illness in the intensive care unit (ICU). Following Lincoln and Guba 1 (1985), I first asked, “How can I persuade my audience that the research findings of my inquiry are worth paying attention to, and worth taking account of?” My answers to this question were based on the 4 criteria set forth by Lincoln and Guba 1 (1985).

Credibility, the accurate and truthful depiction of a participant's lived experience, was achieved in this study through prolonged engagement and persistent observation to learn the context in which the phenomenon is embedded and to minimize distortions that might creep into the data. To achieve this, I spent 6 months with nurses, patients, and their families in the ICU to become oriented to the situation and to build trust and rapport with the participants. Peer debriefing was conducted through meetings and discussions with an expert qualitative researcher to allow for questions and critique of field journals and research activities. Triangulation was achieved by having 2 qualitative researchers cross-check the data and interpretations within and across each category of participants. Member checks were accomplished by constantly checking data and interpretations with the participants from whom data were solicited.

Transferability was enhanced by using a purposive sampling method and providing a thick description and robust data with the widest possible range of information through detailed and accurate descriptions of the patients', their family members', and the nurses' lived ICU experiences and by continuously returning to the texts. In this study, recruitment of participants and data collection continued until the data were saturated, complete, and replicated. According to Morse et al 4 (2002), interviewing additional participants serves the purpose of increasing the scope, adequacy, and appropriateness of the data. I immersed myself in the phenomenon to know, describe, and understand it fully, comprehensively, and thoroughly. Special care was given to the collection, identification, and analysis of all data pertinent to the study. The audiotaped data were meticulously transcribed by a professional transcriber for future scrutiny. During the analysis phase, every attempt was made to document all aspects of the analysis. Analysis in qualitative research refers to the categorization and ordering of information in such a way as to make sense of the data and to the writing of a final report that is true and accurate. 36 Every effort was made to coordinate methodological and analytical materials. After I categorized and was able to make sense of the transcribed data, all efforts were exhausted to illuminate themes and descriptors as they emerged.

Lincoln and Guba 1 (1985) use “dependability” in qualitative research, which closely corresponds to the notion of “reliability” in quantitative research. Dependability was achieved by having 2 expert qualitative nursing researchers review the transcribed material to validate the themes and descriptors identified. To validate my findings related to the themes, a doctorally prepared nursing colleague was asked to review some of the transcribed materials. Any new themes and descriptors illuminated by my colleague were acknowledged and considered. These were then compared with my own thematic analysis of the entire set of participants' transcribed data. If a theme identified by the colleague did not appear in my own thematic analysis, it was agreed by both analysts not to use that theme. It was my goal that both analysts agree on the findings related to themes and meanings within the transcribed material.

Confirmability was met by maintaining a reflexive journal during the research process to keep notes and document daily introspections that would be beneficial and pertinent during the study. An audit trail also took place to examine the processes whereby data were collected and analyzed and interpretations were made. The audit trail took the form of documentation (the actual interview notes taken) and a running account of the process (my daily field journal). I maintained self-awareness of my role as the sole instrument of this study. After each interview, I retired to a private room to document additional perceptions and recollections from the interviews (Supplemental Digital Content 5, https://links.lww.com/DCCN/A22 ).

Through reflexivity and bracketing, I was always on guard against my own biases, assumptions, beliefs, and presuppositions that I might bring to the study but was also aware that complete reduction is not possible. Van Manen 44 (1990) stated that “if we simply try to forget or ignore what we already know, we may find that the presuppositions persistently creep back into our reflections.” During data collection and analysis, I made my orientation and preunderstanding of critical illness and critical care explicit but deliberately held them at bay and bracketed them. Aside from Lincoln and Guba's 1 (1985) 4 criteria for trustworthiness, a question arises as to the reliability of the researcher as the sole instrument of the study.

Reliability related to the researcher as the sole instrument who conducted the data collection and analysis is a limitation of any phenomenological study. The use of humans as instruments is not a new concept. Lincoln and Guba 1 (1985) articulated that humans uniquely qualify as the instrument of choice for naturalistic inquiry. Some of the giants of conventional inquiry have recognized that humans can provide data very nearly as reliable as those produced by “more” objective means. These are formidable characteristics, but they are meaningless if the human instrument is not also trustworthy. However, no human instrument is expected to be perfect. Humans have flaws, and errors can be committed. When Lincoln and Guba 1 (1985) asserted that qualitative methods come more easily to hand when the instrument is a human being, they meant that the human as instrument is inclined toward methods that are extensions of normal activities. They believe that the human will therefore tend toward interviewing, observing, mining available documents and records, taking account of nonverbal cues, and interpreting inadvertent unobtrusive measures, all of which are complex tasks. In addition, one would not expect an individual to function adequately as a human instrument without an extensive background, training, and experience. This study has reliability in that I have acquired knowledge and the required training for research at a doctoral level with the professional and expert guidance of a mentor. As Lincoln and Guba 1 (1985) said, “Performance can be improved…when that learning is guided by an experienced mentor, remarkable improvements in human instrumental performance can be achieved.” Whereas reliability in quantitative research depends on instrument construction, in qualitative research, the researcher is the instrument of the study. 31 Reliable research is credible research. The credibility of a qualitative study depends on the ability and effort of the researcher. 22 We have established that a study can be reliable without being valid, but a study cannot be valid without being reliable.

Establishing validity is a major challenge when a qualitative research project is based on a single, cross-sectional, unstructured interview as the basis for data collection. How do I make judgments about the validity of the data? In qualitative research, the validity of the findings is related to the careful recording and continual verification of the data that the researcher undertakes during the investigative process. If validity or trustworthiness can be maximized or tested, then more credible and defensible results may lead to generalizability as the structure for both doing and documenting high-quality qualitative research. Therefore, the quality of research is related to the generalizability of its results and thereby to the testing and increasing of the validity or trustworthiness of the research.

One potential threat to validity that researchers need to consider is researcher bias. Researcher bias is frequently an issue because qualitative research is more open and less structured than quantitative research, as qualitative research tends to be exploratory. Researcher bias tends to result from selective observation and selective recording of information and from allowing one's personal views and perspectives to affect how data are interpreted and how the research is conducted. Therefore, it is very important that researchers be aware of their own perceptions and opinions because these may taint their research findings and conclusions. I brought all my past experiences and knowledge into the study but learned to set aside my own strongly held perceptions, preconceptions, and opinions. I truly listened to the participants to learn their stories, experiences, and meanings.

The key strategy used to understand researcher bias is called reflexivity. Reflexivity means that researchers actively engage in critical self-reflection about the potential biases and predispositions they bring to the qualitative study. Through reflexivity, researchers become more self-aware, and they monitor and attempt to control their biases. Phenomenological researchers can recognize that their interpretation is correct because the reflective process awakens an inner moral impulse. 4,59 I did my best to be always on guard against my own biases, preconceptions, and assumptions that I might bring to this study. Bracketing was also applied.

Husserl 60 (1931) made some key conceptual elaborations that led him to assert that phenomenological research requires an attempt to hold previous beliefs about the phenomena under study in suspension in order to perceive them more clearly. This technique is called bracketing. Bracketing is another strategy used to control bias. Husserl 60 (1931) explained further that phenomenological reduction is the process of defining the pure essence of a psychological phenomenon. Phenomenological reduction is a process whereby empirical subjectivity is suspended so that pure consciousness may be defined in its essential and absolute “being.” This is accomplished by a method of bracketing empirical data away from consideration. Bracketing empirical data away from further investigation leaves pure consciousness, pure phenomena, and pure ego as the residue of phenomenological reduction. Husserl 60 (1931) uses the term epoche (Greek for “a cessation”) to refer to this suspension of judgment regarding the true nature of reality. Bracketed judgment is an epoche, or suspension of inquiry, which places in brackets whatever facts belong to essential “being.”

Bracketing was conducted to separate assumptions and biases from the essences and thereby achieve an understanding of the phenomenon as experienced by the participants of the study. The collected and analyzed data were presented to the participants, who were asked whether the narrative was accurate and a true reflection of their experience. My interpretations and descriptions of the narratives were presented to the participants to achieve credibility. They were given the opportunity to review the transcripts and modify them if they wished to do so. Because I served as the sole instrument in obtaining data for this phenomenological study, my goal was that my perceptions would reflect the participants' ICU experiences and that the participants would be able to see their lived experience through the researcher's eyes. Because qualitative research designs are flexible and emergent in nature, there will always be study limitations.

Awareness of the limitations of a research study is crucial for researchers. The purpose of this study was to understand the ICU experiences of patients, their family members, and the nurses during critical illness. One limitation of this phenomenological study as a naturalistic inquiry was the inability of the researcher to fully design the study and specify its elements in advance. According to Lincoln and Guba 1 (1985), naturalistic studies are virtually impossible to design in any definitive way before the study is actually undertaken. The authors stated:

Designing a naturalistic study means something very different from the traditional notion of “design”—which as often as not meant the specification of a statistical design with its attendant field conditions and controls. Most of the requirements normally laid down for a design statement cannot be met by naturalists because the naturalistic inquiry is largely emergent.

Within the naturalistic paradigm, designs must be emergent rather than preordinate because (1) meaning is determined by context to a great extent (for this particular study, the phenomenon and context were the experience of critical illness in the ICU); (2) the existence of multiple realities constrains the development of a design based on only 1 construction (the investigator's); (3) what will be learned at a site is always dependent on the interaction between the investigator and the context, and that interaction is not fully predictable; and (4) the nature of mutual shapings cannot be known until they are witnessed. These factors underscore the indeterminacy under which the naturalistic inquirer functions. The design must therefore be “played by ear”; it must unfold, cascade, and emerge. It does not follow, however, that, because not all of the elements of the design can be prespecified in a naturalistic inquiry, none of them can. Design in the naturalistic sense means planning for certain broad contingencies without, however, indicating exactly what will be done in relation to each. 1

Reliability and validity are such fundamental concepts that they should be continually operationalized to meet the conditions of a qualitative inquiry. Morse et al 4,29 (2002) articulated that “by refusing to acknowledge the centrality of reliability and validity in qualitative methods, qualitative methodologists have inadvertently fostered the default notion that qualitative research must therefore be unreliable and invalid, lacking in rigor, and unscientific.” Sparkes 59 (2001) asserted that Morse et al 4,26 (2002) are right in warning us that turning our backs on such fundamental concepts as validity could cost us dearly. This will in turn affect how we mentor novices, early-career researchers, and doctoral students in their qualitative research work.

Reliability is inherently integrated and internally needed to attain validity. 1,26 I concur with the use of the term rigor rather than trustworthiness in naturalistic studies. I also accede that strategies for ensuring rigor must be built into the qualitative research process itself rather than evaluated only after the inquiry is conducted. Threats to reliability and validity cannot be actively addressed through standards and criteria applied at the end of the study. Ensuring rigor must be upheld by the researcher during the investigation rather than left to external judges of the completed study. Whether a study is quantitative or qualitative, rigor is a desired goal that is met through the inclusion of the different philosophical perspectives inherent in a qualitative inquiry and the strategies that are specific to each methodological approach, including the verification techniques observed during the research process. It also involves the researcher's creativity, sensitivity, flexibility, and skill in using the verification strategies that determine the reliability and validity of the evolving study.

Some naturalistic inquirers agree that assuring validity is a process whereby ideals are sought through attention to specified criteria, and appropriate techniques are used to address any threats to the validity of a naturalistic inquiry. However, other researchers argue that procedures and techniques are no assurance of validity and will not necessarily produce sound data or credible conclusions. 38,48,55 Thus, some have argued that researchers should abandon the concept of validity and seek alternative criteria with which to judge their work.

Lincoln and Guba's 1 (1985) standards of validity demonstrate the necessity and convenience of overarching principles to all qualitative research, yet there is a need for a reconceptualization of criteria of validity in qualitative research. The development of validity criteria in qualitative research poses theoretical issues, not simply technical problems. 60 Whittemore et al 58 (2001) explored the historical development of validity criteria in qualitative research and synthesized the findings that reflect a contemporary reconceptualization of the debate and dialogue that have ensued in the literature over the years. The authors further presented primary (credibility, authenticity, criticality, and integrity) and secondary (explicitness, vividness, creativity, thoroughness, congruence, and sensitivity) validity criteria to be used in the evaluative process. 56 Before the work of Whittemore and colleagues, 58 Creswell and Miller 48 (2000) asserted that the constructivist lens and paradigm choice should guide validity evaluation and procedures from the perspective of the researcher (disconfirming evidence), the study participants (prolonged engagement in the field), and external reviewers/readers (thick, rich description). Morse et al 4 in 2002 presented 6 major evaluation criteria for validity and asserted that they are congruent and appropriate within the philosophy of the qualitative tradition. These 6 criteria are credibility, confirmability, meaning in context, recurrent patterning, saturation, and transferability. A synthesis of validity criteria is presented in Supplemental Digital Content 3 (see Supplemental Digital Content 3, https://links.lww.com/DCCN/A20 ).

Common validity techniques in qualitative research relate to design consideration, data generation, analytic procedures, and presentation. 56 The first is design consideration. Developing a self-conscious design, the paradigm assumption, the purposeful choice of a small sample of informants relevant to the study, and the use of an inductive approach are some techniques to be considered. Purposive sampling enhances the transferability of the results. Interpretivist and constructivist inquiry follows an inductive approach that is flexible and emergent in design, with some uncertainty and fluidity within the context of the phenomenon of interest 56,58 and not based on a set of determinate rules. 61 The researcher does not work with a priori theory; rather, theory is expected to emerge from the inquiry. Data are analyzed inductively from specific, raw units of information to subsuming categories to define questions that can be followed up. 1 Qualitative studies also follow a naturalistic and constructivist paradigm. Creswell and Miller 48 (2000) suggest that validity is affected by the researchers' perception of validity in the study and their choice of paradigm assumption. Determining the fit of paradigm to focus is an essential aspect of a naturalistic inquiry. 1 Paradigms rest on sets of beliefs called axioms. 1 On the basis of the naturalistic axioms, the researcher should ask questions related to multiplicity or complex constructions of the phenomenon, the degree of investigator-phenomenon interaction and the indeterminacy it will introduce into the study, the degree of context dependence, whether values are likely to be crucial to the outcome, and the constraints that may be placed on the researcher by a variety of significant others. 1

Validity during data generation is evaluated through the researcher's ability to articulate data collection decisions, demonstrate prolonged engagement and persistent observation, provide verbatim transcription, and achieve data saturation. 56 Methods are means to collect evidence to support validity, and this refers to the data obtained by considering a context for a purpose. The human instrument operating in an indeterminate situation falls back on techniques such as interview, observation, unobtrusive measures, document and record analysis, and nonverbal cues. 1 Others, rejecting methods or technical procedures as an assurance of truth, remarked that the validity of a qualitative study lies in the skills and sensitivities of the researchers and in how they use themselves as knower and inquirer. 57,62 The understanding of the phenomenon is valid if the participants are given the opportunity to speak freely according to their own knowledge structures and perceptions. Validity is therefore achieved when using the method of open-ended, unstructured interviews with strategically chosen participants. 42 We also know that a thorough description of the entire research process, enabling unconditional intersubjectivity, is what indicates good quality when using a qualitative method. This enables a clearer and better analysis of the data.

Analytical procedures are vital in qualitative research. 56 Not very much can be said about data analysis in advance of a qualitative study. 1 Data analysis is not an inclusive phase that can be marked out as occurring at some singular time during the inquiry. 1 It begins from the very first data collection to facilitate the emergent design and grounding of theory. Validity in a study is thus represented by the truthfulness of findings after careful analysis. 56 Consequently, qualitative researchers seek to illuminate and extrapolate findings to similar situations. 22,63 It is a fact that the interpretations of any given social phenomenon may reflect, in part, the biases and prejudices the interpreters bring to the task and the criteria and logic they follow in completing it. 64 In any case, individuals will draw different conclusions to the debate surrounding validity and will make different judgments as a result. 50 There is a wide array of analytic techniques that the qualitative researcher can choose from based on the contextual factors that will help contribute to the decision as to which technique will optimally reflect specific criteria of validity. 65 Presentation of findings is accomplished by providing an audit trail and evidence that support interpretations, acknowledging the researcher's perspective and providing thick descriptions. Morse et al 4 in 2002 set forth strategies for ensuring validity that include investigator responsiveness and verification through methodological coherence, theoretical sampling and sampling adequacy, an active analytic stance, and saturation. The authors further stated that “these strategies, when used appropriately, force the researcher to correct both the direction of the analysis and the development of the study as necessary, thus ensuring reliability and validity of the completed project” (p17).
More recently, in 2015, Morse 28 presented the strategies for ensuring validity in a qualitative study: prolonged engagement, persistent observation, thick and rich description, negative case analysis, peer review or debriefing, clarifying the researcher's bias, member checking, external audits, and triangulation. These strategies can be upheld with the help of an expert mentor who can in turn guide and affect the reliability and validity of early career researchers' and doctoral students' qualitative research work. Techniques for demonstrating validity are summarized in Supplemental Digital Content 4 (see Supplemental Digital Content 4, https://links.lww.com/DCCN/A21 ).

Qualitative researchers and students alike must be proactive and take responsibility for ensuring the rigor of a research study. Too often, rigor takes a backseat in researchers' and doctoral students' work because of novice abilities, lack of proper mentorship, and constraints on time and funding. Students should conduct projects that are smaller in scope, guided by an expert naturalistic inquirer, so as to produce work with depth and, at the same time, gain the grounding experience necessary to become an excellent researcher. Attending to rigor throughout the research process will have important ramifications for qualitative inquiry. 4,26

Qualitative research is not intended to be scary or beyond the grasp of novices and doctoral students. Conducting a naturalistic inquiry is an experience of exploration, discovery, description, and understanding of a phenomenon that transcends one's own research journey. Attending to the rigor of qualitative research is a vital part of the investigative process that offers critique and thus further development of the science.


Keywords: Phenomenology; Qualitative research; Reliability; Rigor; Validity

Supplemental Digital Content

  • DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC1.pdf; [PDF] (3 KB)
  • DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC2.pdf; [PDF] (4 KB)
  • DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC3.pdf; [PDF] (78 KB)
  • DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC4.pdf; [PDF] (70 KB)
  • DCCN_2017_04_11_CYPRESS_DCCN-D-16-00060_SDC5.pdf; [PDF] (4 KB)


Qualitative Researcher Dr Kriukow


What is Validity and Reliability in Qualitative research?

In Quantitative research, reliability refers to consistency of certain measurements, and validity – to whether these measurements “measure what they are supposed to measure”. Things are slightly different, however, in Qualitative research.

Reliability in qualitative studies is mostly a matter of “being thorough, careful and honest in carrying out the research” (Robson, 2002: 176). In qualitative interviews, this issue relates to a number of practical aspects of the process of interviewing, including the wording of interview questions, establishing rapport with the interviewees and considering ‘power relationship’ between the interviewer and the participant (e.g. Breakwell, 2000; Cohen et al., 2007; Silverman, 1993).

What seems more relevant when discussing qualitative studies is their validity, which is very often addressed with regard to three common threats to validity in qualitative studies, namely researcher bias, reactivity and respondent bias (Lincoln and Guba, 1985).

Researcher bias refers to any negative influence of the researcher’s knowledge or assumptions on the study, including the influence of his or her assumptions on the design, analysis or, even, sampling strategy. Reactivity, in turn, refers to a possible influence of the researcher himself/herself on the studied situation and people. Respondent bias refers to a situation where respondents do not provide honest responses for any reason, which may include them perceiving a given topic as a threat, or them being willing to ‘please’ the researcher with responses they believe are desirable.

Robson (2002) suggested a number of strategies aimed at addressing these threats to validity, namely prolonged involvement, triangulation, peer debriefing, member checking, negative case analysis and keeping an audit trail.


So, what exactly are these strategies and how can you apply them in your research?

Prolonged involvement refers to the length of time of the researcher’s involvement in the study, including involvement with the environment and the studied participants. It may be granted, for example, by the duration of the study, or by the researcher belonging to the studied community (e.g. a student investigating other students’ experiences). Being a member of this community, or even being a friend to your participants (see my blog post on the ethics of researching friends), may be a great advantage and a factor that both increases the level of trust between you, the researcher, and the participants and reduces the possible threats of reactivity and respondent bias. It may, however, pose a threat in the form of researcher bias that stems from your, and the participants’, possible assumptions of similarity and presuppositions about some shared experiences (thus, for example, they may not say something in the interview because they assume that both of you know it anyway; this way, you may miss some valuable data for your study).

Triangulation may refer to triangulation of data through utilising different instruments of data collection, methodological triangulation through employing mixed methods approach and theory triangulation through comparing different theories and perspectives with your own developing “theory” or through drawing from a number of different fields of study.

Peer debriefing and support is really an element of your student experience at the university throughout the process of the study. Various opportunities to present and discuss your research at its different stages, either at internally organised events at your university (e.g. student presentations, workshops, etc.) or at external conferences (which I strongly suggest that you start attending), will provide you with valuable feedback, criticism and suggestions for improvement. These events are invaluable in helping you to assess the study from a more objective, and critical, perspective and to recognise and address its limitations. This input from other people thus helps to reduce the researcher bias.

Member checking , or testing the emerging findings with the research participants, in order to increase the validity of the findings, may take various forms in your study. It may involve, for example, regular contact with the participants throughout the period of the data collection and analysis and verifying certain interpretations and themes resulting from the analysis of the data (Curtin and Fossey, 2007). As a way of controlling the influence of your knowledge and assumptions on the emerging interpretations, if you are not clear about something a participant had said, or written, you may send him/her a request to verify either what he/she meant or the interpretation you made based on that. Secondly, it is common to have a follow-up, “validation interview” that is, in itself, a tool for validating your findings and verifying whether they could be applied to individual participants (Buchbinder, 2011), in order to determine outlying, or negative, cases and to re-evaluate your understanding of a given concept (see further below). Finally, member checking, in its most commonly adopted form, may be carried out by sending the interview transcripts to the participants and asking them to read them and provide any necessary comments or corrections (Carlson, 2010).

Negative case analysis is a process of analysing ‘cases’, or sets of data collected from a single participant, that do not match the patterns emerging from the rest of the data. Whenever an emerging explanation of a given phenomenon you are investigating does not seem applicable to one, or a small number, of the participants, you should try to carry out a new line of analysis aimed at understanding the source of this discrepancy. Although you may be tempted to ignore these “cases” in fear of having to do extra work, it should become your habit to explore them in detail, as the strategy of negative case analysis, especially when combined with member checking, is a valuable way of reducing researcher bias.

Finally, the notion of keeping an audit trail refers to monitoring and keeping a record of all the research-related activities and data, including the raw interview and journal data, the audio-recordings, the researcher’s diary (see this post about recommended software for researcher’s diary ) and the coding book.

If you adopt the above strategies skilfully, you are likely to minimize threats to validity of your study. Don’t forget to look at the resources in the reference list, if you would like to read more on this topic!

Breakwell, G. M. (2000). Interviewing. In Breakwell, G.M., Hammond, S. & Fife-Shaw, C. (eds.) Research Methods in Psychology. 2nd Ed. London: Sage.

Buchbinder, E. (2011). Beyond Checking: Experiences of the Validation Interview. Qualitative Social Work, 10 (1), 106-122.

Carlson, J.A. (2010). Avoiding Traps in Member Checking. The Qualitative Report, 15 (5), 1102-1113.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. 6th Ed. London: Routledge.

Curtin, M., & Fossey, E. (2007). Appraising the trustworthiness of qualitative studies: Guidelines for occupational therapists. Australian Occupational Therapy Journal, 54, 88-94.

Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Newbury Park, CA: SAGE.

Robson, C. (2002). Real world research: a resource for social scientists and practitioner-researchers. Oxford, UK: Blackwell Publishers.

Silverman, D. (1993). Interpreting Qualitative Data. London: Sage.

Jarek Kriukow

There is an argument for using your identity and biases to enrich the research (see my recent blog… researcheridentity.wordpress.com), providing that the researcher seeks to fully comprehend their place in the research and is fully open, honest and clear about that in the write-up. I have come to see reliability and validity more as a defence of whether the research is rigorous, thorough and careful, and therefore whether it is morally, ethically and accurately defensible.


Hi Nathan, thank you for your comment. I agree that being explicit about your own status and everything that you bring into the study is important – it’s a very similar issue (although seemingly it’s a different topic) to what I discussed in the blog post about grounded theory where I talked about being explicit about the influence of our previous knowledge on the data. I have also experienced this dilemma of “what to do with” my status as simultaneously a “researcher” an “insider” a “friend” and a “fellow Polish migrant” when conducting my PhD study of Polish migrants’ English Language Identity, and came to similar conclusions as the ones you reach in your article – to acknowledge these “multiple identities” and make the best of them.

I have read your blog article and really liked it – would you mind if I shared it on my Facebook page, and linked to it from my blog section on this page?

Please do share my blog by all means; I’d be delighted. Are you on twitter? I’m @Nathan_AHT_EDD I strongly believe that we cannot escape our past, including our multiple/present habitus and identities when it comes to qualitative educational research. It is therefore, arguably, logical to ethically and sensibly embrace it/them to enrich the data. Identities cannot be taken on and off like a coat, they are, “lived as deeply committed personal projects” (Clegg, 2008: p.336) and so if we embrace them we bring a unique insight into the process and have a genuine investment to make the research meaningful and worthy of notice.

Hi Nathan, I don’t have twitter… I know – somehow I still haven’t had time to get to grips with it. I do have Facebook, feel free to find me there. I also started to follow your blog so that I am notified about your content. I agree with what you said here and in your posts, and I like the topic of your blog. This is definitely something that we should pay more attention to when doing research. It would be interesting to talk some time and exchange opinions, as our research interests seem very closely related. Have a good day !

  • Open access
  • Published: 16 September 2024

CVS-Q teen: an adapted, reliable and validated tool to assess computer vision syndrome in adolescents

  • Mar Seguí-Crespo 1 , 2   na1 ,
  • Natalia Cantó-Sancho 1   na1 ,
  • Mar Sánchez-Brau 1 &
  • Elena Ronda-Pérez 1 , 3  

Scientific Reports, volume 14, Article number: 21576 (2024)


  • Eye diseases
  • Eye manifestations
  • Paediatric research
  • Paediatrics
  • Public health
  • Quality of life

Adolescents’ extensive use of digital devices raises significant concerns about their visual health. This study aimed to adapt and validate the computer vision syndrome questionnaire (CVS-Q © ) for adolescents aged 12–17 years. A mixed-method sequential design was used. First, a qualitative study involved two nominal groups to assess the instrument’s acceptability. A subsequent cross-sectional quantitative study with 277 randomly selected adolescents assessed reliability and validity. Participants completed the adapted CVS-Q © , an ad hoc questionnaire, and the ocular surface disease index (OSDI) questionnaire. Repeatability was tested in 54 adolescents after 7–14 days. The Rasch-Andrich rating scale model was used. Instructions and symptoms were modified to obtain the 14-item CVS-Q teen © . It showed unidimensionality, no local dependence between items, and respected monotonicity. Adequate internal consistency (person reliability = 0.69, item reliability = 0.98) and intraobserver reliability (intraclass correlation coefficient = 0.77, Cohen's Kappa = 0.49) were observed. A significant correlation (0.782, p < 0.001) between CVS-Q teen © and OSDI supported construct validity. A score of ≥ 6 points indicated computer vision syndrome (CVS) (sensitivity = 85.2%, specificity = 76.5%, and area under the curve = 0.879). In conclusion, the CVS-Q teen © is a valid and reliable instrument for assessing CVS in adolescents using digital devices, applicable in research and clinical practice for early identification and recommendations for visual health.
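To make the diagnostic indices in the abstract concrete, here is a minimal sketch of how a symptom-score cutoff (the paper reports that a total CVS-Q teen © score ≥ 6 indicates CVS) is turned into sensitivity and specificity against a reference classification. The `classify` and `sensitivity_specificity` helpers and all the scores and labels below are hypothetical illustration data, not the study's code or results.

```python
CUTOFF = 6  # the paper reports that a total score >= 6 indicates CVS

def classify(total_score, cutoff=CUTOFF):
    """Binary CVS classification from a questionnaire total score."""
    return total_score >= cutoff

def sensitivity_specificity(scores, reference):
    """Compare the cutoff-based classification against a reference standard.

    `reference` holds True for participants considered CVS-positive by
    the reference criterion (hypothetical here).
    """
    tp = sum(classify(s) and r for s, r in zip(scores, reference))
    fn = sum((not classify(s)) and r for s, r in zip(scores, reference))
    tn = sum((not classify(s)) and (not r) for s, r in zip(scores, reference))
    fp = sum(classify(s) and (not r) for s, r in zip(scores, reference))
    return tp / (tp + fn), tn / (tn + fp)

# Illustration with fabricated data:
scores = [2, 7, 10, 4, 6, 1, 9, 5]
reference = [False, True, True, False, True, False, True, True]
sens, spec = sensitivity_specificity(scores, reference)
```

Sweeping the cutoff over all possible score values and plotting sensitivity against 1 − specificity is what produces the ROC curve whose area (0.879) the authors report.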


Introduction

New information and communication technologies (NICTs) have spread worldwide in recent years, particularly among children and adolescents: 93.1% of people aged 10–15 years use a computer, 94.9% use the Internet and 69.5% have a smartphone 1 .

One of the health problems that results from prolonged use of digital devices is computer vision syndrome (CVS), which is defined as a group of problems related to eyes and vision 2 . These symptoms arise when demands exceed visual capabilities. Digital device use involves intense visual strain with continuous accommodation and convergence adjustments, leading to more symptoms in individuals with oculomotor anomalies or uncorrected refractive errors 3 . It also reduces blink frequency and amplitude, which can cause ocular surface problems 4 . It has been observed that using electronic devices leads to an increase in incomplete blinks 5 . CVS also increases in those who have been using digital devices for more years 6 and for more hours of daily use 7 .

Most studies on CVS focus on working populations 8 , 9 , 10 and some on university students 11 , 12 , 13 . CVS prevalence ranges from 50.0 to 70.0% in workers 8 , 9 , and can reach up to 90.0% among university students 11 . A literature review identified 10 studies on CVS in adolescents, all published in the last 6 years, mainly in Asia 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 . Reported prevalence ranges from 12.0 14 to 93.0% 23 . These studies used ad hoc questionnaires 15 , 16 , 17 , 22 , non-specific CVS tools 19 , or instruments designed and validated for adults or in other languages without prior adaptation and validation for adolescents 14 , 18 , 20 , 21 , 23 . Other studies have linked increased myopia and dry eye in children with time spent on computer games, mobile phones, and reduced outdoor activities 24 , 25 . Longer digital device use correlates with more severe CVS in adolescents 21 . Excessive use also contributes to sleep problems, anxiety, lack of social interaction, and depression, impacting health, development, and academic performance 26 .

Children and adolescents are more susceptible to visual problems due to patterns of use such as not taking breaks, inappropriate distances, and inadequate lighting 2 , 27 . These age groups are more vulnerable to excessive use of digital devices due to lack of self-control, entertainment, influence from family and friends, and academic tasks 27 . Educational centres’ increasing involvement with educational technologies has heightened this exposure. Technologies can enhance instruction quality by redistributing resources, increasing chances to practise, supplementing instructional time, and personalising instruction. They also engage and support learners by varying content representation, stimulating interaction, and prompting collaboration 28 . Thus, studies are needed that reflect the realities of schools, considering different exposure characteristics and demand periods.

In 2015 the computer vision syndrome questionnaire (CVS-Q © ) was designed and validated in Spanish to assess CVS in adults 29 . It is a patient reported outcome measure (PROM), a questionnaire that collects ocular and visual symptoms directly from the people who experience them 30 . It has been used in different adult populations 8 , 13 , 31 and translated, culturally adapted and validated in multiple languages 32 , 33 . However, there is no validated questionnaire for adolescents. Instruments for this age group need comprehensible terminology, symptomatology relevant to this population, ease of completion, and demonstrated validity and reliability. The aim of this study is to adapt and validate the CVS-Q © for adolescents aged 12 to 17 years.

The following research is based on the protocol published by Seguí-Crespo et al. 34 , which in turn was carried out following the guide for the adaptation and validation of health questionnaires by Ramada-Rodilla et al. 35 , except for the translation section, as the language is the same as that of the original questionnaire.

A mixed method sequential design was used. The process consisted of two phases, which in turn were subdivided into 6 steps in total (Fig.  1 ). In the first phase, qualitative data collection activities were conducted to adapt and assess the content and face validity of the instrument. In the second phase, quantitative data collection activities were conducted to assess the reliability, criterion and construct validity of the instrument through a cross-sectional study.

figure 1

Methodological development followed in obtaining the CVS-Q teen © for use with adolescents between 12 and 17 years of age.

Qualitative phase

The nominal group technique was applied in two groups: one formed by 8 adolescents between 12 and 17 years of age (4 females and 4 males) selected by convenience from different schools, and the other formed by 10 key informants.

Step 1: adaptation

In the first meeting, the 8 adolescents filled in the CVS-Q © and an analysis of its comprehensibility was carried out by means of an ad hoc questionnaire (terminology used, whether the instructions were understandable), and they also proposed changes and suggestions. Once these changes were incorporated into the original questionnaire, the V1 version of the adapted CVS-Q © was created.

Step 2: content validity

In the second meeting, V1 was presented to 4 eye care professionals (ophthalmologists and opticians-optometrists), 3 teachers and 3 parents of adolescents. As in step 1, they contributed the changes they considered relevant from their perspective. From this process, the V2 version of the adapted CVS-Q © was created.

Step 3: face validity

A pilot study was conducted with 31 adolescents between 12 and 17 years of age to confirm the quality of the adaptation and to verify practical aspects of its application. The sample was selected by non-probability snowball sampling. Participants completed V2 of the CVS-Q © and an ad hoc questionnaire that included socio-demographic data, questions about the comprehensibility of the instructions and symptoms, whether to add or remove any symptom, about the difficulty of the structure of the questionnaire and the way of answering, and about the possibility of improving any other aspect. Once the instructions, symptoms and response options had been reviewed, the acceptability of the instrument was measured through a qualitative analysis by grouping common discourses. According to the literature, it was considered inadequate if more than 15% of participants expressed difficulties or suggested changes 35 . This resulted in the V3 version of the adapted CVS-Q © .
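The 15% acceptability rule described above can be sketched as a one-line check. The `acceptable` helper and the participant counts are hypothetical; only the threshold itself comes from the text.

```python
def acceptable(n_with_difficulties, n_participants, threshold=0.15):
    """Acceptability rule from the text: the adapted questionnaire is
    considered inadequate if more than 15% of pilot participants
    express difficulties or suggest changes."""
    return (n_with_difficulties / n_participants) <= threshold

# e.g. 3 of the 31 pilot participants (~9.7%) suggesting changes
# would be acceptable, while 6 of 31 (~19.4%) would not:
print(acceptable(3, 31))   # True
print(acceptable(6, 31))   # False
```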

Quantitative phase

To validate V3, a cross-sectional study was conducted in 277 adolescents 36 aged 12–17 years, randomly selected from 2 public schools and 1 subsidised school (A, B and C respectively), two of which used printed textbooks and one digital textbooks. All completed V3 of the CVS-Q © and an ad hoc questionnaire with socio-demographic information (sex, age), academic information (educational institution, year and textbooks), and their use of digital devices for studying. In addition, they underwent a visual examination, which included visual acuity in mono and binocular distance vision, cover/uncover test, Hirschberg reflex and eye movements. Students who did not achieve a monocular visual acuity of 0.0 logMAR or who had any manifest ocular alteration (such as ocular pathology or the presence of tropia) were excluded from the study. In total, 15 adolescents were excluded: 8 did not achieve 0.0 logMAR visual acuity with their usual optical compensation and 7 had strabismus. The adolescents included in the study had a monocular visual acuity of 0.06 ± 0.05 logMAR in the right eye, 0.06 ± 0.04 logMAR in the left eye, and 0.07 ± 0.04 logMAR binocularly. All participants demonstrated a normal Hirschberg reflex and eye movements.
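As a rough illustration of the screening step (not the authors' procedure), the exclusion rules can be expressed as a filter. The `eligible` helper and the participant records are hypothetical, and we assume "achieving 0.0 logMAR" means each eye scores at or better than 0.0 (in logMAR notation, lower values mean better acuity).

```python
def eligible(va_right_logmar, va_left_logmar, has_manifest_alteration):
    """Apply the study's screening criteria to one participant:
    exclude anyone with a manifest ocular alteration (e.g. tropia)
    or an eye that does not reach 0.0 logMAR monocular acuity."""
    if has_manifest_alteration:
        return False
    return va_right_logmar <= 0.0 and va_left_logmar <= 0.0

# Hypothetical participant records (od = right eye, os = left eye):
participants = [
    {"id": 1, "od": 0.0, "os": -0.1, "alteration": False},  # eligible
    {"id": 2, "od": 0.2, "os": 0.0,  "alteration": False},  # acuity too low
    {"id": 3, "od": 0.0, "os": 0.0,  "alteration": True},   # strabismus
]
included = [p["id"] for p in participants
            if eligible(p["od"], p["os"], p["alteration"])]
```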

Step 4: construct validity and internal consistency

The basic Rasch-Andrich rating scale model, the rating scale model (RSM) 37 , was used and the following properties were assessed:

Item and person fit to the predictions of the Rasch model. This was assessed using the mean squares (MNSQ) infit and outfit statistics; a range between 0.60 and 1.40 suggests a good fit 37 . Items with outfit MNSQ values > 2.00 should be dropped, as they indicate inaccurate measurement 37 .

Item polarity. Assessed by inter-item correlations. These should be positive and away from 0 (or, alternatively, the observed correlation should be similar to the expected one), which will confirm that it is not necessary to eliminate any item.

Empirical measure of item category. Monotonicity is confirmed if all response categories are represented for each item and are ordered according to their level of severity.

Performance of the rating scale. It is assessed whether the thresholds of the response probability curves are separated by at least 1.40 logits 37 . In the case of the CVS-Q © there are two severity thresholds: one between categories 0 and 1 and one between categories 1 and 2.

Dimensionality of the questionnaire and local dependence of the items. Dimensionality is assessed using principal component analysis of the Rasch residuals. For unidimensionality to hold, the variance unexplained by the first contrast must be < 10.0% and the eigenvalue of the first contrast must be < 1.90. The Yen-Q3 statistic was calculated to assess the local independence of the items; any residual correlation more than 0.20 above the mean correlation could indicate local dependence 38 .
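The Yen-Q3 check can be sketched as follows, assuming a persons × items matrix of standardised Rasch residuals as input (a hypothetical interface; the authors' analysis used dedicated Rasch software):

```python
import numpy as np

def yen_q3_flags(residuals, offset=0.20):
    """Flag item pairs whose residual correlation exceeds the mean
    off-diagonal residual correlation by more than `offset`, the
    local-dependence criterion described above. `residuals` is a
    persons x items array of standardised Rasch residuals."""
    corr = np.corrcoef(residuals, rowvar=False)  # item-by-item correlations
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    cutoff = off_diag.mean() + offset
    flagged = [(i, j) for i in range(corr.shape[0])
               for j in range(i + 1, corr.shape[0])
               if corr[i, j] > cutoff]
    return cutoff, flagged
```

In the study's data this cut-off worked out to 0.299 (mean correlation 0.099), and no item pair exceeded it.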

Measurement error. The information function of the questionnaire (and its reciprocal, the standard error of measurement, SEM) is generated. This function describes how the accuracy of the questionnaire varies along the latent trait and identifies the regions where the instrument is most accurate.

Internal consistency and person/item separation indices. Internal consistency ≥ 0.70 39 is considered good for persons, and for items it should be > 0.90 40 . The separation index should be > 2.00 for persons and > 3.00 for items.
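The separation index and the separation reliability are linked by a fixed formula, G = sqrt(R / (1 − R)); a minimal sketch:

```python
import math

def separation_index(reliability):
    """Convert a Rasch separation reliability R into a separation
    index G = sqrt(R / (1 - R)). The thresholds quoted above are
    G > 2.00 for persons and G > 3.00 for items."""
    if not 0.0 <= reliability < 1.0:
        raise ValueError("reliability must lie in [0, 1)")
    return math.sqrt(reliability / (1.0 - reliability))
```

For example, a person reliability of 0.69 corresponds to a separation of about 1.49, which is consistent with the person values reported later in the Results.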

Targeting. The appropriateness of the severity level of the items to the sample is assessed. A good alignment between items and persons occurs when the mean scores of the persons are close to 0 logits. A difference of more than 1 logit may indicate poor targeting 41 .

Analysis of differential item functioning (DIF) and its impact on questionnaire scores. This assesses whether the way in which items define a measurement scale is the same for different groups 37 . It was analysed according to sex, academic year, school and textbook. An item was considered to have DIF if the between-group contrast (DIF size) was > 0.64 and the Rasch-Welch t-test was significant at the 0.05 level after Bonferroni correction 37 . The proportion of estimates that differed by > 0.50 logits was calculated as an indicator of the impact of DIF on the scores.

In addition, to further investigate construct validity, a convergent validity study was conducted using the ocular surface disease index (OSDI) questionnaire, as it includes items similar to those CVS symptoms related to dry eye 42 . After testing for normality, we analysed the difference in the scores obtained between the two questionnaires (Student's t-test) and the difference in the prevalence of CVS (chi-square test) in adolescents with and without dry eye symptoms.

Step 5: test–retest reliability

Between 7 and 14 days after the first measurement, a random subsample of 54 adolescents completed the adapted CVS-Q © V3 again. The intraclass correlation coefficient (ICC), based on a mixed-effects model with a measure of absolute agreement, was calculated for questionnaire scores, and Cohen's kappa index (k), with its 95% confidence interval (95% CI), was calculated for differences in CVS diagnosis. The acceptable level for the ICC is ≥ 0.70 39 ; for k, values ≤ 0 were considered to indicate no agreement, 0.01–0.20 none to slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement 43 . In addition, mean scores were compared between the two administrations (Student's t-test for paired data).
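Cohen's kappa for the test–retest diagnosis agreement can be sketched as follows (a generic implementation, not the authors' code; the mixed-effects ICC would additionally require a statistical package routine):

```python
import numpy as np

def cohen_kappa(rating1, rating2):
    """Cohen's kappa between two categorical ratings of the same
    subjects, e.g. CVS diagnosis (0/1) at test and at retest."""
    a, b = np.asarray(rating1), np.asarray(rating2)
    cats = np.union1d(a, b)
    n = a.size
    # contingency table of test vs. retest classifications
    table = np.array([[np.sum((a == r) & (b == c)) for c in cats]
                      for r in cats], dtype=float)
    p_obs = np.trace(table) / n                                   # observed agreement
    p_exp = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)
```

On the agreement scale cited above, the k = 0.49 reported in the Results falls in the 0.41–0.60 "moderate" band.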

Step 6: criterion validity

The same criterion used by the authors of the original questionnaire, "occurrence of at least one symptom two or three times a week", was used to define the presence of CVS 29 . Sensitivity and specificity were calculated, and the receiver operating characteristic (ROC) curve was used to determine the diagnostic performance of the questionnaire and the cut-off score above which an adolescent is considered symptomatic.
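One common way to formalise "the cut-off that optimises both sensitivity and specificity" is Youden's J statistic; a sketch under that assumption (variable names hypothetical):

```python
import numpy as np

def best_cutoff(scores, has_cvs):
    """Scan candidate cut-offs and return the one maximising
    Youden's J = sensitivity + specificity - 1, classing a subject
    as symptomatic when score >= cut-off. `has_cvs` holds the
    reference-standard classification for each subject."""
    scores = np.asarray(scores, dtype=float)
    has_cvs = np.asarray(has_cvs, dtype=bool)
    best = (None, -1.0, 0.0, 0.0)
    for c in np.unique(scores):
        pred = scores >= c
        sens = np.sum(pred & has_cvs) / np.sum(has_cvs)
        spec = np.sum(~pred & ~has_cvs) / np.sum(~has_cvs)
        if sens + spec - 1 > best[1]:
            best = (c, sens + spec - 1, sens, spec)
    return best  # (cutoff, J, sensitivity, specificity)
```

Each candidate cut-off corresponds to one point on the ROC curve; the AUC summarises performance across all of them.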

The statistical programmes SPSS version 28, Winsteps version 5.2.5.1 and Jamovi 2.2.5 were used to perform the analyses.

Ethics declaration

This study was approved by the Research Ethics Committee of the University of Alicante (UA-2020-01-13). It has been conducted in accordance with Good Clinical Practice standards and the applicable international ethical principles for human research, as per the latest revision of the Helsinki Declaration. The data collected in the study have been processed in accordance with current data protection legislation. All students who participated in each phase were provided with a written participant information sheet, and signed informed consent was obtained from the students and, depending on their age, from their parents/guardians.

All the partial modifications of the symptoms introduced by the two nominal groups consulted and as a result of the pilot study can be consulted in Supplementary Table S1 online. The instructions were also modified and adapted to the proposals made. In the pilot study, 96.8% of the adolescents found the instructions easy to understand, 74.2% indicated good comprehension of the symptoms, and 8 students reported difficulties in understanding a symptom, with "coloured halos around objects" being the most difficult (16.1%). The result was the adapted CVS-Q teen © questionnaire, with 14 symptoms.

In the first RSM model it was observed that out of the total sample of 262 adolescents, 11 exceeded the cut-off value for outfit MNSQ (range: 2.15–3.67), so they were removed from the model and the analyses were repeated. In this second model the infit and outfit MNSQ values were within the established range (mean infit/outfit MNSQ = 0.99 ± 0.02). Therefore, the final sample included was n = 251 adolescents; its characteristics are reflected in Table 1 .

The fit of the items to the predictions of the model was found to be within the established range (mean infit MNSQ = 1.01 ± 0.04 and mean outfit MNSQ = 0.99 ± 0.04). No negative correlations were observed between items (range: 0.28–0.60) and the expected correlation was very similar to the observed correlation for most items (Table 2 ). Item 14 (headache) is the symptom that adolescents perceive as least severe, and item 2 (gritty feeling in the eye/eyes) as the most severe, followed by double vision (item 10). The questionnaire respected monotonicity for all items and, in addition, the three severity categories are represented and ordered (Supplementary Fig. S1 online).

The thresholds of the rating scale progressed monotonically, with a separation between thresholds of 3.14 logits (Fig.  2 ), as well as the mean scores per category (− 2.51, − 1.26 and − 0.21 for categories 0, 1 and 2, respectively). The infit and outfit values of the response categories were also good (infit: 0.99, 1.02 and 0.96; outfit: 0.99, 1.00 and 0.94; for response categories 0, 1 and 2, respectively).

Figure 2. Response probability curves by category for the CVS-Q teen © .

The first contrast had an eigenvalue of 1.70, and its proportion of unexplained variance was 8.7%, thus corroborating the unidimensionality of the questionnaire. The cut-off point for determining the local independence of the items was 0.299, the mean of the correlations being 0.099. In this case, no residual correlation exceeded this value, so local independence of all items is assumed.

The zone of highest accuracy of the CVS-Q teen © lies in the interval between − 0.73 (raw score = 11 points) and 1.20 logits (raw score = 19 points), with a SEM of 0.50 (Supplementary Fig. S2 online).

The internal consistency analysis showed a person separation reliability of 0.69 and an item separation reliability of 0.98, with a person separation index of 1.49 and an item separation index of 6.35.

With respect to targeting, the mean of the individuals' scores was − 1.90 (SD = 0.57). This indicates that the items express more severity than the individuals report; the questionnaire lacks items at the lower levels of the latent trait (Fig.  3 ).

Figure 3. Wright map of the CVS-Q teen © .

Table 3 presents the results of the DIF analysis. There was no DIF according to academic year, school, or whether the school used printed or digital textbooks. According to sex, item 5 (eye redness) showed DIF (DIF size = 1.30, Rasch-Welch t-test p < 0.004), although its impact on the questionnaire scores was low.

Convergent validity analysis showed a Spearman correlation coefficient of 0.782 (p < 0.001). Overall, 37.3% of the adolescents had neither CVS nor dry eye symptoms, while 33.1% exceeded the cut-off point on both questionnaires. A significant association was observed between the variables "presence/absence of CVS" and "presence/absence of dry eye symptoms" (p < 0.001).

Test–retest reliability showed an ICC = 0.77 for the questionnaire scores, and a k = 0.49 for the diagnosis of CVS between both administrations; there was also no difference in the means of the questionnaires (p = 0.491).

The value that optimised both sensitivity and specificity was the cut-off point of − 2.06 logits, which is equivalent to a raw score of 6 points. With this cut-off point, the questionnaire has a sensitivity of 85.2%, a specificity of 76.5% and an area under the curve (AUC) of 0.879 with 95% CI 0.836–0.922 (Supplementary Fig. S3 online).

The CVS-Q teen © , presented in Supplementary Fig. S4 online (translated into English in Fig. S5 ), is a 14-symptom self-administered questionnaire asking about the frequency and intensity of ocular and visual symptoms related to the use of digital devices in this population. The frequency and intensity data are used to calculate the severity of each symptom, and summing the severities gives an overall score. An adolescent who obtains an overall score of ≥ 6 points on the CVS-Q teen © is considered to have CVS.
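The scoring rule described above reduces to a simple sum compared against the raw-score cut-off of 6; a minimal sketch (the function name is hypothetical):

```python
def cvs_q_teen_score(severities, cutoff=6):
    """Total CVS-Q teen score from the 14 per-symptom severity
    ratings (each 0, 1 or 2), and the resulting CVS classification
    at the raw-score cut-off reported in the text."""
    if len(severities) != 14 or any(s not in (0, 1, 2) for s in severities):
        raise ValueError("expected 14 severities, each 0, 1 or 2")
    total = sum(severities)
    return total, total >= cutoff
```

For instance, an adolescent with three symptoms rated at the maximum severity and none elsewhere reaches exactly the cut-off and is classed as having CVS.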

Conversion of CVS-Q teen © raw scores to more accurate Rasch scores in logits (Supplementary Table S2 online) may be useful for researchers conducting clinical studies in which, for example, small variations in CVS are detected, and which require a higher precision of the instrument 37 .

The CVS-Q teen © is the first questionnaire adapted and validated to assess CVS in adolescents. It is a PROM that collects ocular and visual symptoms directly from the adolescents who experience them. It presents a good fit of items and individuals to the predictions of the model. It is unidimensional, has good reliability, diagnostic capacity and test–retest repeatability.

When comparing the two questionnaires (CVS-Q © vs. CVS-Q teen © ), the total number of items in the adolescent questionnaire is lower (16 vs. 14), while the cut-off point is the same, although this differs for other linguistic versions also derived from the original 29 . In both cases the item and person fit values are adequate. The adolescent version obtains higher values for sensitivity, specificity and AUC (sensitivity = 75.0% vs. 85.2%; specificity = 70.2% vs. 76.5%; AUC = 0.826 vs. 0.879), indicating that this questionnaire performs slightly better. Both questionnaires show poor targeting, as both appear to lack items covering mild symptoms. The CVS-Q teen © is therefore better at differentiating adolescents with moderate and severe symptoms, its zone of highest accuracy lying in the range between 11 and 19 points. This is a logical feature of an instrument designed to assess CVS, rather than a deficiency of the questionnaire, as lower severities are considered to be of little clinical relevance.

As a limitation in the interpretation of our findings, it should be taken into account that the validation fieldwork was carried out after COVID-19. In this period there may have been an increase in symptomatology, and even an overestimation of exposure to digital devices by adolescents. To address this issue we initially considered using an application to objectively measure exposure, but faced several drawbacks, including ethical concerns about installing software on personal devices and the possibility that adolescents might disable the application to avoid being monitored. It would also have been preferable to have a homogeneous distribution of adolescents across academic years; however, the DIF analysis indicated that academic year does not influence the perception of symptomatology in this case. In terms of strengths, a systematic, rigorous process was followed, based on guidelines published in the scientific literature, and two of the authors of the original questionnaire participated.

The CVS-Q teen © , with good psychometric properties, identifies cases of CVS in a reliable, valid and straightforward manner, facilitating appropriate approaches and making it possible to assess CVS in the adolescent population. Considering the high exposure to digital devices among this population, the CVS-Q teen © will (1) help to determine the prevalence of this syndrome, (2) increase knowledge of how the use of digital devices can affect young people, and (3) allow comparisons between adolescents with different levels of exposure to digital devices, among other applications.

Healthcare professionals should make suitable recommendations for prevalent cases and assist in decision-making. Additionally, the CVS-Q teen © can be used in research contexts, such as studying contact lens wearers to compare different lens types, as in previous studies conducted with adults 8 , 44 .

On the other hand, as observed in the CVS-Q © , a specific feature of the CVS-Q teen © is its increased accuracy in detecting individuals with moderate to severe symptoms, while providing limited information at the lower end of the CVS construct 29 . Since this questionnaire assesses visual and ocular symptomatology, identifying moderate or severe cases is considered more relevant because these cases require intervention due to their greater symptom severity. In contrast, individuals with mild symptoms may be less responsive to recommendations or treatment, making the identification of severe symptoms more pertinent to clinical practice. Therefore, this characteristic should be viewed as an intentional aspect of the clinical measure rather than a limitation of the scale.

In the future, it would be highly valuable to conduct research with the CVS-Q teen © that also examines the correlation between self-reported symptoms in adolescents and the presence of objectively assessed signs from clinical tests. It is important to note that this questionnaire is specific for the Spanish adolescent population. Further research is required to validate the CVS-Q teen © in other cultural and linguistic contexts, as was done with the original CVS-Q © 32 , 33 , 45 .

Data availability

The datasets that support the findings of the current study are available from the corresponding author MSB on reasonable request.

Instituto Nacional de Estadística. Computer, Internet, and mobile phone use by sex, age, habitat, household size, household type, and net monthly household income. https://www.ine.es/jaxi/Datos.htm?tpx=55135 .

American Optometry Association. Computer Vision Syndrome. https://www.aoa.org/healthy-eyes/eye-and-vision-conditions/computer-vision-syndrome?sso=y .

Sheppard, A. L. & Wolffsohn, J. S. Digital eye strain: Prevalence, measurement and amelioration. BMJ Open Ophthalmol. 3 , e000146 (2018).

Choi, J. H. et al. The influences of smartphone use on the status of the tear film and ocular surface. PLoS One 13 , e0206541 (2018).

Argilés, M. et al. Physiology and pharmacology blink rate and incomplete blinks in six different controlled hard-copy and electronic reading conditions. Investig. Ophthalmol. Vis. Sci. 56 , 6679–6685 (2015).

Tesfaye, A. H. et al. Prevalence and associated factors of computer vision syndrome among academic staff in the University of Gondar, Northwest Ethiopia: An institution-based cross-sectional study. Environ. Health Insights 16 , 11786302221111864 (2022).

Filon, F. L. et al. Video display operator complaints: A 10-year follow-up of visual fatigue and refractive disorders. Int. J. Environ. Res. Public Health 16 , 2501 (2019).

Tauste, A. et al. Effect of contact lens use on computer vision syndrome. Ophthalmic Physiol. Opt. 36 , 112–119 (2016).

Dessie, A. et al. Computer vision syndrome and associated factors among computer users in Debre Tabor Town, Northwest Ethiopia. J. Environ. Public Health 2018 , 4107590 (2018).

Uba-Obiano, C. U. et al. Self-reported computer vision syndrome among bank workers in Onitsha, Nigeria. J. West Afr. Coll. Surg. 12 , 71 (2022).

Reddy, S. et al. Computer vision syndrome: A study of knowledge and practices in university students. Nepal. J. Ophthalmol. 5 , 161–168 (2013).

Iqbal, M. et al. Computer vision syndrome survey among the medical students in Sohag University Hospital, Egypt. Ophthalmol. Res. Int. J. 8 , 1–8 (2018).

Cantó-Sancho, N. et al. Computer vision syndrome prevalence according to individual and video display terminal exposure characteristics in Spanish university students. Int. J. Clin. Pract. 75 , e13681 (2021).

Li, L. et al. Contribution of total screen/online-course time to asthenopia in children during COVID-19 pandemic via influencing psychological stress. Front. Public Health 9 , 736617 (2021).

Buabbas, A. J., Al-Mass, M. A., Al-Tawari, B. A. & Buabbas, M. A. The detrimental impacts of smart technology device overuse among school students in Kuwait: A cross-sectional survey. BMC Pediatr. 20 , 524 (2020).

Ichhpujani, P. et al. Visual implications of digital device usage in school children: A cross-sectional study. BMC Ophthalmol. 19 , 76 (2019).

Bogdănici, C. M., Săndulache, D. E. & Nechita, C. A. Eyesight quality and computer vision syndrome. Rom. J. Ophthalmol. 61 , 112–116 (2017).

Mohan, A. et al. Prevalence and risk factor assessment of digital eye strain among children using online e-learning during the COVID-19 pandemic: Digital eye strain among kids (DESK study-1). Indian J. Ophthalmol. 69 , 140–144 (2021).

Junghans, B. M., Azizoglu, S. & Crewther, S. G. Unexpectedly high prevalence of asthenopia in Australian school children identified by the CISS survey tool. BMC Ophthalmol. 20 , 408 (2020).

Abuallut, I. et al. Prevalence of computer vision syndrome among school-age children during the COVID-19 pandemic, Saudi Arabia: A cross-sectional survey. Children (Basel) 9 , 1718 (2022).

Seresirikachorn, K. et al. Effects of digital devices and online learning on computer vision syndrome in students during the COVID-19 era: An online questionnaire study. BMJ Paediatr. Open 6 , e001429 (2022).

Ekemiri, K. et al. Online e-learning during the COVID-19 lockdown in Trinidad and Tobago: Prevalence and associated factors with ocular complaints among schoolchildren aged 11–19 years. PeerJ 10 , e13334 (2022).

Gupta, R., Chauhan, L. & Varshney, A. Impact of E-schooling on digital eye strain in coronavirus disease era: A survey of 654 students. J. Curr. Ophthalmol. 33 , 158–164 (2021).

Williams, K. M. et al. Early life factors for myopia in the British Twins Early Development Study. Br. J. Ophthalmol. 103 , 1078–1084 (2019).

Moon, J. H., Kim, K. W. & Moon, N. J. Smartphone use is a risk factor for pediatric dry eye disease according to region and age: A case control study. BMC Ophthalmol. 16 , 188 (2016).

Dewi, R. K., Efendi, F., Has, E. M. M. & Gunavan, J. Adolescents’ smartphone use at night, sleep disturbance and depressive symptoms. Int. J. Adolesc. Med. Health 33 , 20180095 (2018).

Toh, S. H. et al. “From the moment I wake up I will use it…every day, very hour”: A qualitative study on the patterns of adolescents’ mobile touch screen device use from adolescent and parent perspectives. BMC Pediatr. 19 , 30 (2019).

United Nations Educational & Scientific and Cultural Organization. Global Education Monitoring Report 2023: Technology in education - A tool on whose terms? Paris, UNESCO. https://www.unesco.org/gem-report/en .

Seguí, M. M. et al. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace. J. Clin. Epidemiol. 68 , 662–673 (2015).

Churruca, K. et al. Patient-reported outcome measures (PROMs): A review of generic and condition-specific measures and a discussion of trends and issues. Health Expect. 24 , 1015–1024 (2021).

Sánchez-Brau, M. et al. Computer vision syndrome in presbyopic digital device workers and progressive lens design. Ophthalmic Physiol. Opt. 41 , 922–931 (2021).

Cantó-Sancho, N., Seguí-Crespo, M., Zhao, G. & Ronda-Pérez, E. The Chinese version of the computer vision syndrome questionnaire: Translation and cross-cultural adaptation. BMC Ophthalmol. 23 , 298 (2023).

Cantó-Sancho, N. et al. Rasch-validated Italian scale for diagnosing digital eye strain: The computer vision syndrome questionnaire IT © . Int. J. Environ. Res. Public Health 19 , 4506 (2022).

Seguí-Crespo, M. et al. CVS-Q teen © : Computer vision syndrome in adolescents and its relationship with digital textbooks. Gac. Sanit. 37 , 102264 (2022).

Ramada-Rodilla, J. M., Serra-Pujadas, C. & Delclós-Clanchet, G. L. Cross-cultural adaptation and health questionnaires validation: Revision and methodological recommendations. Salud Publ. Mex. 55 , 57–66 (2013).

Martin, C. R. & Hollins Martin, C. J. Minimum sample size requirements for a validation study of the birth satisfaction scale-revised (BSS-R). J. Nurs. Pract. 1 , 25–30 (2017).

Boone, W. J., Staver, J. R. & Yale, M. S. Rasch Analysis in the Human Sciences 1st edn. (Springer, 2014).

Christensen, K. B., Makransky, G. & Horton, M. Critical values for Yen’s Q3: Identification of local dependence in the Rasch model using residual correlations. Appl. Psychol. Meas. 41 , 178–194 (2017).

Prinsen, C. A. C. et al. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual. Life Res. 27 , 1147–1157 (2018).

Bond, T. Applying the Rasch Model: Fundamental Measurement in the Human Sciences 3rd edn. (Routledge, 2015).

Stelmack, J. et al. Use of Rasch person-item map in exploratory data analysis: A clinical perspective. J. Rehabil. Res. Dev. 41 , 233–241 (2004).

Wolffsohn, J. S. et al. TFOS DEWS II diagnostic methodology report. Ocul. Surf. 15 , 539–574 (2017).

McHugh, M. L. Interrater reliability: The kappa statistic. Biochem. Med. (Zagreb) 22 , 276–282 (2012).

Seguí-Crespo, M. M., Ronda-Pérez, E., Yammouni, R., Arroyo Sanz, R. & Evans, B. J. W. Randomised controlled trial of an accommodative support lens designed for computer users. Ophthalmic Physiol. Opt. 42 , 82–93 (2022).

Qolami, M., Mirzajani, A., Ronda-Pérez, E., Cantó-Sancho, N. & Seguí-Crespo, M. Translation, cross-cultural adaptation and validation of the computer vision syndrome questionnaire into Persian (CVS-Q FA © ). Int. Ophthalmol. 42 , 3407–3420 (2022).

This work was supported by the call "Health Research Projects", Health Research Fund of the Institute of Health Carlos III, Ministry of Science and Innovation and European Union, through the European Regional Development Fund (ERDF) "A way to make Europe" [PI20/01629].

Author information

These authors are joint lead authors: Natalia Cantó-Sancho and Mar Seguí-Crespo.

Authors and Affiliations

Public Health Research Group, University of Alicante, San Vicente del Raspeig, Spain

Mar Seguí-Crespo, Natalia Cantó-Sancho, Mar Sánchez-Brau & Elena Ronda-Pérez

Department of Optics, Pharmacology and Anatomy, University of Alicante, San Vicente del Raspeig, Spain

Mar Seguí-Crespo

Biomedical Research Networking Center for Epidemiology and Public Health (CIBERESP), Madrid, Spain

Elena Ronda-Pérez

Contributions

MSC and ERP were responsible for the conception and design of the study, and for acquiring funding. MSB was responsible for the data collection. NCS and MSB performed the formal analysis. All authors contributed to the interpretation of the data and the drafting and revising of the manuscript, as well as reading and approving the submitted version.

Corresponding author

Correspondence to Mar Sánchez-Brau .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Seguí-Crespo, M., Cantó-Sancho, N., Sánchez-Brau, M. et al. CVS-Q teen: an adapted, reliable and validated tool to assess computer vision syndrome in adolescents. Sci Rep 14 , 21576 (2024). https://doi.org/10.1038/s41598-024-70821-9

Received : 27 March 2024

Accepted : 21 August 2024

Published : 16 September 2024

DOI : https://doi.org/10.1038/s41598-024-70821-9

  • Computer vision syndrome
  • Questionnaire
  • Digital devices
  • Psychometrics
  • Validation study

By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

internal reliability in qualitative research design refers to

  • --> Try Free Downloads
  • Marketplace
  • Choose Exam Goal
  • About Eduncle
  • Announcements

search

Speak With a Friendly Mentor.

  • My Wishlist
  • Subscribe Exams
  • Try Free Downloads
  • Institution
  • Payment Terms
  • Refund Policy
  • Ask Support

Contact info

Head Office: MPA 44, 2nd floor, Rangbari Main Road, Mahaveer Nagar II, Kota (Raj.) - 324005

Corporate Office: Office No: 702 (7th Floor), Shree Govind Business Tower, Gautam Marg, Vaishali Nagar, Jaipur (Raj.) – 302021

Mail: [email protected]

  • Eligibility
  • Paper Pattern
  • Application Form
  • IIT JAM Exam
  • Question Papers
  • Preparation Tips
  • UGC NET Exam
  • Answer Keys
  • CSIR NET Exam
  • Syllabus & Paper Pattern
  • Question Paper

whatsapp-btn

Do You Want Better RANK in Your Exam?

Start Your Preparations with Eduncle’s FREE Study Material

  • Updated Syllabus, Paper Pattern & Full Exam Details
  • Sample Theory of Most Important Topic
  • Model Test Paper with Detailed Solutions
  • Last 5 Years Question Papers & Answers

Sign Up to Download FREE Study Material Worth Rs. 500/-

I agree to the Terms and Conditions

I agree to receive exam notifications via WhatsApp.

Wait Wait Wait... !

We Have Something Special for YOU

Download FREE Study Material Designed by Subject Experts & Qualifiers

Want Enhanced Learning Experience For Exam Preparation?

  • Ask Your Doubts and Get Them Answered by Exam Experts & Students' Community Members Across India
  • Regular Guidance, Mentorship & Study Tips by Eduncle Experts
  • Quality Content with More Than 300 Courses in Multiple Exams Curated by Experts

Enter your mobile number to get the download link.

internal reliability in qualitative research design refers to

Learning & Teaching App

internal reliability in qualitative research design refers to

Skyrocket Your Chances to RANK HIGHER in the Exam

 Time management is very much important in IIT JAM. The eduncle test series for IIT JAM Mathematical Statistics helped me a lot in this portion. I am very thankful to the test series I bought from eduncle.

 Eduncle served as my guiding light. It has a responsive doubt solving team which solves & provides good solutions for your queries within 24 hours. Eduncle Mentorship Services guides you step by step regarding your syllabus, books to be used to study a subject, weightage, important stuff, etc.

 The General Aptitude part of Eduncle study materials were very good and helpful. Chapters of the Earth Science were also very satisfactory.

 The study material of Eduncle helps me a lot. The unit wise questions and test series were helpful. It helped me to clear my doubts. When I could not understand a topic, the faculty support too was good. Thanks Eduncle.

 I recommend Eduncle study material & services are best to crack UGC-NET exam because the material is developed by subject experts. Eduncle material consists a good no. of ques with online test series & mock test papers.

 I am truly Statisfied with study material of Eduncle.com for English their practise test paper was really awsome because it helped me to crack GSET before NET. Thanks Team of eduncle.

Request a Call back

Let Our Mentors Help You With the Best Guidance

internal reliability in qualitative research design refers to

We have Received Your Query

Are you sure you want to Unfollow ?

internal reliability in qualitative research design refers to

How can we assist you?

internal reliability in qualitative research design refers to

Oops! You Can’t Unfollow Your Default Category.

internal reliability in qualitative research design refers to

Your profile has been successfully submitted

Kindly give us 1 - 3 week to review your profile. In case of any query, write to us at [email protected]


“It Is as if I Gave a Gift to Myself”: A Qualitative Phenomenological Study on Working Adults’ Leisure Meaning, Experiences, and Participation


1. Introduction

1.1. Leisure Definition
1.2. Leisure Participation and Meaning
1.3. Leisure in Working Adults
1.4. Leisure and Well-Being
1.5. Flow and Leisure
1.6. Leisure as a Right and Occupational Justice
1.7. Objective of the Study

2. Materials and Methods

2.1. Study Design
2.2. Sampling and Participant Recruitment
2.3. Instrument
2.4. Data Collection
2.5. Data Analysis

3. Results

3.1. Sociodemographic Data of Participants
3.2. Qualitative Data Analysis Results

  • Meaning of leisure
“ When I say free time, it makes people feel like I have an obligation and you’re getting rid of it. For example, use your free time as if you were under arrest and go out to the courtyard. We live with responsibility and anxiety, and when we feel happy, it feels like being free. ” (Dilara, 26 yo, F, Psychologist, lives in Aksaray province)
“ I do my leisure on Sundays… I do it by putting them in order. I can’t do it due to busy weekdays. Now I have to plan, as I have very little time left. I do what I want; everything I do, I do willingly. ”
“ Free time is like a time period when I watch something meaningless on Netflix that will completely empty my mind. ”
“ A human being has feelings and emotions; they are not merely like machines. He desires to use time in a different way. Some people do this through walking, while others do it through other activities, such as making art or having fun with their kids. ”
“ In my opinion, what distinguishes me from a departed person—what sets me apart from someone who has passed away—are my leisure activities… If I only come and go between work and home for the next 30 years, I would consider this to be a life I have never experienced. The process is analogous to story writing. ”
“ When I think of leisure, I think of things where I can be alone with myself and do stuff with quality. You can do what you want to do, and it is your moments of pleasure that you set aside for yourself. ”
“ I give up my sleep in the morning. I’ll go to work early and make myself coffee. I motivate myself there. No matter how busy you are, you can always find a few minutes to yourself. ” Neriman (44 yo, Government Official, F, lives in Istanbul province)
“ I am very relaxed (took a deep breath and exhaled). I mean, when I do something outside of work and outside of the normal routine, if we go out, I feel such a relief. ”
“ I define leisure as things you can do to improve yourself. ” (Ozan, 31 yo, M, Engineer, lives in Istanbul province)
“ Leisure is also considered the activity that people do to renew themselves and complete their personal development. ” (Ahmet, 31 yo, M, Engineer, lives in Düzce province)
“ I feel mentally relaxed with that. If I do something on the weekend and forget what I did on Friday, I am happy to try to remember it on Monday morning. I try to provide mental relaxation. ” (Ali, 37 yo, Engineer, lives in Istanbul province)
“ I think of leisure as the time when people can relax. But this rest should include not only physical but also mental rest. Resting is actually being able to calm down for me. ” (Sude, Speech and Language Therapist, 26 yo, F, lives in Aydın province)
“ We can say that it is a work-related problem, because if I had a few more days of annual leave, I could go to Eskişehir province (her hometown) to visit my family and friends. When I can’t participate in my leisure, that bothers me the most. There is really limited time after work. Sometimes I feel so bad when I can’t do the things I want to do. There are times when I even get sleepy and postpone going to sleep for the sake of leisure time ” (Sude, Speech and Language Therapist, 26 yo, F, lives in Aydın province)
“ My husband is my biggest facilitator for my leisure time and my life too. ” (Fatma, 41 yo, Government Official, F, lives in Mersin province)
“ It makes easier to have an understanding partner. The circle of friends makes it easier. ” (Orhan, 38 yo, Teacher, M, lives in Muş province)
“ I also have colleagues who are much older than me. It already creates a generational problem with them. Your expectations and wishes are different. Other than that, I am the only one who is single; everyone is married. That’s why nothing happens. People are constantly involved in their own plans. ”
“ It is close, which makes my leisure time easier… I also have a bicycle. I reach there in ten minutes. As a facilitator, it leaves time for what I will do. ”
“ There is a significant difference between before and after I have my own car. If you do not have a car, you are going places by taxi. For a woman, having her own car is a wonderful thing. It’s great to have that key in your pocket. ”
“ It is very different because of the city I live in. For example, when I was in North Cyprus, everything was different at night. Now that I leave work at 6 p.m., it’s eight until I say, come home and eat. What can I do after 8 p.m.? It’s a small city, after all. ” Dilara (26 yo, Psychologist, F)
“ Of course, there are obstacles, especially because I have a problem allocating time…I remember feeling very good when I was able to have me time, and sometimes I miss it. You know, it’s good to get married, but there is also a reverse side to getting married: you have to transfer your leisure time to the family. I spend time with my family. Since our child is younger, we cannot participate in many social activities. ” (Ahmet, 31 yo, M, Engineer, lives in Düzce province)
“ Because my wife is taking care of our young children, she cannot participate much in her leisure time… In the evenings, we-as parents- prepare meals for the children, play games, and help with their homework. Responsibilities continues after work. ” (Orhan, 38 yo, Teacher, M, lives in Muş province)
“ I feel very good about doing something for myself. I listen to myself… I am very happy with the value that I give to myself… I think it gives you a lot of pleasure. It’s like happiness and pride combined. ” (Dilara, 26 yo, F, Psychologist, lives in Aksaray province)
“I feel the pleasure of this happiness.” (Seda, 27 yo, F, Teacher, lives in Hakkari province)
“It is as if I gave a gift to myself.” (Güneş, 31 yo, Lecturer, lives in Istanbul province)
“ Leisure is legendary for me. I’ve never been ahead of it; I’ve always tried to do it, but I wasn’t upset when I couldn’t. If I couldn’t today, let today pass; I’ll do it tomorrow. I say that this is how it should be, and I say that there is good in it. I am happy. Isn’t that the purpose of life? ” (Şahin, Basketball Coach, 34 yo, lives in Kilis province)
“ Leisure keeps me motivated. Life is what drives me. You live in a world of ups and downs. Let me tell you, I love to be happy. ”
“ When I can’t do something or when I can’t do something with my friends, I get restless. I’m concentrating on doing that job. It gives me uneasiness. Because doing it gives me peace of mind. It gives me restlessness and unhappiness when I can’t do it…I am losing my mood; my energy is low. ” (Gökçe, 35 yo, Lecturer, F, lives in Istanbul province)
“ For example, that day, I get very uneasy if I can’t read the Holy Quran first… But when I don’t read books, I am very angry with myself. I say you left yourself behind, Zehra, again…I think I left myself behind. I feel very sad. For example, I think that I don’t take time for myself when I go for a walk. Again, I say you ignored yourself, Zehra. ”
“ Of course, when you can’t participate to your leisure time there is boredom, both because you can’t do it and because your time is wasted ”
“ When I can’t meet my friends, I sometimes feel good, but generally I feel incomplete. I feel restless when I don’t have time for myself. Even if I am not so tired, I still feel like I am not fully mentally rested.”
“ When I think the times, I cannot able to do my leisure…For instance, I think about myself in quarantine for COVID disease. I stand like this and wait to be picked. I am sour. (The front of my house is open and has a view of a field.) I looked straight ahead. But it’s meaningless. I probably wouldn’t want my whole life to be like this. It’s like a Nuri Bilge Ceylan movie. ” (Özge, 30 yo, F, Research Assistant, lives in Ankara province)
“ Progression, for instance, modifies a number of your behaviours. Habits that you once enjoyed may now seem absurd, or you may now be able to appreciate activities you once considered impossible. In the end, man is a constantly evolving organism. Our beliefs, health, and mental state are all evolving. There are many factors that contribute to change. There are internal factors. All external factors have an effect. Even a person’s negative experiences influence every aspect of his life. At that time, I was hanging out with friends more, doing things like hiking and going out. Much rarer now. Maybe it’s because of age; it could be because everyone got married. We started to work. I don’t want it too much anymore; it’s more attractive to stay at home. My habits have changed. The pandemic has changed our habits a lot. ”

4. Discussion

4.1. Leisure Definition
4.2. Meaning of Leisure
4.3. Flow of the Life

“ Habits that you once enjoyed may now seem absurd, or you may now be able to appreciate activities you once considered impossible. In the end, man is a constantly evolving organism ” (Erhan)

4.4. Facilitators–Barriers

“ I am a nurse. I want to learn the language by myself. I study German in my spare time, usually. It takes most of my time. I want to work abroad and practise my profession there. I have a purpose.” (Erhan)

4.5. Recovery and Well-Being

4.6. Occupational Injustice

5. Conclusions

6. Implications

Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest



Semi-Structured Interview Questions | Dimensions of Leisure
Occupational profile [ ] |
Is the concept of free time more suitable for you than the leisure concept? | Sense-making, personal meaning
1. What does leisure mean to you? | Subjective experience and personal meaning
2. What comes to mind when you think of leisure? | Subjective experience and personal meaning
3. What do you do in your leisure time? 4. What did you do in the past? Are you still continuing? 5. What would you like to do in the future? | Activity preferences, temporal dimension
6. What times do you do it? (Summer, winter, seasonal features?) | Temporal dimension
7. For how long and how often do you do it? | Activity preferences
8. With whom would you prefer to do it? | Activity preferences
9. What does participating in leisure mean to you? | Subjective experience and satisfaction from experience
10. How does it make you feel to participate in leisure time? How do you feel when you can’t attend? | Subjective experience and satisfaction from experience
11. What motivates you to do so? How do you feel when you can’t attend? | Subjective experience and satisfaction from experience
12. Can you do your leisure activities the way you want? | Environmental context and activity contexts; subjective experiences, satisfaction from the experience
13. Are there any cases where your leisure is affected? If yes, what are the influencing factors? What are the barriers? What are facilitators? | Subjective experiences, environmental factors, and activity contexts
Sociodemographic data of participants (n = 28; x = minimum wage in Turkey):

Gender: Female 14; Male 14
Age: minimum 25; maximum 50; average 34
Education: 2-year college 6; 4-year university (Bachelor’s degree) 15; 6-year university (Medicine) 1; Master’s degree 5; PhD 1
Living (with): Single 5; Homemate/s 2; Spouse/partner 6; Parents 2; Family with kids 13
Monthly income: 1x or less 3; 1x–1.5x 5; 2x–3x 12; 3x–4x 4; 4x and above 4
Working style: Full-time 24; Part-time 4; Night shifts 5; Hybrid (home office and in office) 3; Home office 2
Working hours (weekly): 0–20 h 3; 20–30 h 5; 40 h 7; 40–50 h 9; 50–60 h 1; 60–70 h 2; 70 h (with shifts) 1
Theme and subtheme labels (recovered from the results figure/table):

  • Freedom
  • Leisure instead of free time
  • Me time
  • Relaxation
  • Mastery
  • Detachment
  • Working conditions
  • Financial resources
  • Accessibility
  • Roles and responsibilities
  • Social support systems
  • Opportunities
  • Positive emotions
  • Satisfaction
  • Resilience
  • Negative emotions related to a lack of participation
  • Occupational disruption
  • Occupational deprivation
  • Occupational alienation
  • Occupational imbalance
  • Activity preferences
  • Experiences


Sezer, K.S.; Aki, E. "It Is as if I Gave a Gift to Myself": A Qualitative Phenomenological Study on Working Adults' Leisure Meaning, Experiences, and Participation. Behav. Sci. 2024, 14, 833. https://doi.org/10.3390/bs14090833



Reliability-Based Design for Strip-Footing Subjected to Inclined Loading Using Hybrid LSSVM ML Models

  • Original Paper
  • Published: 17 September 2024


  • Manish Kumar (affiliation 1)
  • Divesh Ranjan Kumar (affiliation 2)
  • Warit Wipulanusat (affiliation 2), ORCID: orcid.org/0000-0003-1006-6540

The bearing capacity of strip footings is significantly influenced by uncertainties related to the footing, soil conditions, and load inclination. Given the inherent unpredictability in footing design, the reliability-based design of geotechnical structures has garnered considerable interest in the research community. This paper presents a state-of-the-art probabilistic design for footings under inclined loading using the first-order reliability method (FORM) combined with a hybrid least squares support vector machine (LSSVM) learning approach. A comprehensive dataset comprising 920 samples from the literature, with the reduction factor (RF) as the output parameter, was utilized to simulate hybrid LSSVM models based on particle swarm optimization (PSO) and Harris hawks optimization (HHO). The input variables for predicting the bearing capacity include the load eccentricity-to-width ratio, embedment ratio, load inclination-to-friction angle, and load arrangement. The performance metrics indicate that among the three proposed machine learning models, the LSSVM-PSO model achieves the best predictive performance, with an R 2 of 0.991 and an RMSE of 0.051 during training and an R 2 of 0.962 and an RMSE of 0.109 during testing. The model’s performance was further evaluated via rank analysis, reliability analysis, regression plots, and uncertainty analysis. The reliability index (β) and corresponding probability of failure (POF) computed using FORM were compared with the actual values for both phases. The study concluded that the LSSVM-PSO is the most reliable method for reliability-based design, demonstrating superior performance and reliability. This hybrid approach offers a robust framework for addressing uncertainties in geotechnical engineering, enhancing the reliability and accuracy of footing design under inclined loading conditions.
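The abstract's two reliability quantities are linked by a standard relation: under FORM, the probability of failure is the standard normal tail of the reliability index, POF = Φ(−β). The sketch below is illustrative only (it is not the authors' code, and the β value is made up); it implements the conversion via the complementary error function and its inverse by bisection:

```python
import math

def pof_from_beta(beta: float) -> float:
    """Probability of failure from reliability index: POF = Phi(-beta),
    with the standard normal CDF written as Phi(x) = 0.5 * erfc(-x / sqrt(2))."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

def beta_from_pof(pof: float, lo: float = -10.0, hi: float = 10.0) -> float:
    """Invert pof_from_beta by bisection; POF is monotone decreasing in beta."""
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        if pof_from_beta(mid) > pof:  # POF still too high -> beta must grow
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A hypothetical target reliability index of 3 corresponds to POF ~ 0.00135.
print(round(pof_from_beta(3.0), 5))  # -> 0.00135
```

The same relation lets a target POF (a design requirement) be translated back into the β that a FORM analysis must achieve.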


Data and Materials Availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.


Acknowledgements

This work was supported by the Thammasat University Research Unit in Data Science and Digital Transformation.

The authors have not disclosed any funding.

Author information

Authors and Affiliations

Department of Civil Engineering, SRM Institute of Science and Technology (SRMIST), Tiruchirappalli, 621105, India

Manish Kumar

Research Unit in Data Science and Digital Transformation, Department of Civil Engineering, Faculty of Engineering, Thammasat School of Engineering, Thammasat University, Pathumthani, Thailand

Divesh Ranjan Kumar & Warit Wipulanusat


Corresponding author

Correspondence to Warit Wipulanusat .

Ethics declarations

Conflict of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Kumar, M., Kumar, D.R. & Wipulanusat, W. Reliability-Based Design for Strip-Footing Subjected to Inclined Loading Using Hybrid LSSVM ML Models. Geotech Geol Eng (2024). https://doi.org/10.1007/s10706-024-02945-8


Received : 01 July 2024

Accepted : 04 September 2024

Published : 17 September 2024

DOI : https://doi.org/10.1007/s10706-024-02945-8


  • Strip footing
  • Reduction factor
  • Reliability index


COMMENTS

  1. Validity, reliability, and generalizability in qualitative research

    Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed with an update on the current views and controversies. Keywords: Controversies, generalizability, primary care research, qualitative research, reliability, validity. Source of Support: Nil.

  2. How is reliability and validity realized in qualitative research?

    Reliability in qualitative research refers to the stability of responses to multiple coders of data sets. It can be enhanced by detailed field notes by using recording devices and by transcribing the digital files. However, validity in qualitative research might have different terms than in quantitative research. Lincoln and Guba (1985) used "trustworthiness" of ...

  3. The 4 Types of Reliability in Research

    There are four main types of reliability. Each can be estimated by comparing different sets of results produced by the same method: test-retest reliability measures the consistency of the same test over time; interrater reliability, the same test conducted by different people; parallel forms reliability, different versions of a test designed to be equivalent; and internal consistency, the individual items of a test.

  4. A Review of the Quality Indicators of Rigor in Qualitative Research

    Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...

  5. (PDF) Validity and Reliability in Qualitative Research

    Validity and reliability or trustworthiness are fundamental issues in scientific research, whether it is qualitative, quantitative, or mixed research. It is a necessity for researchers to describe ...

  6. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  7. Internal, External, and Ecological Validity in Research Design, Conduct

    The concept of validity is also applied to research studies and their findings. Internal validity examines whether the study design, conduct, and analysis answer the research questions without bias. External validity examines whether the study findings can be generalized to other contexts. Ecological validity examines, specifically, whether the ...

  8. Verification Strategies for Establishing Reliability and Validity in

    The rejection of reliability and validity in qualitative inquiry in the 1980s has resulted in an interesting shift for "ensuring rigor" from the investigator's actions during the course of the research, to the reader or consumer of qualitative inquiry. ... making sure, and being certain. In qualitative research, verification refers to the ...

  9. Criteria for Good Qualitative Research: A Comprehensive Review

    This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then ...

  10. Issues of validity and reliability in qualitative research

    Although the tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research, there are ongoing debates about whether terms such as validity, reliability and generalisability are appropriate to evaluate qualitative research.2-4 In the broadest context these terms are applicable, with validity referring to the integrity and ...

  11. Intercoder Reliability in Qualitative Research: Debates and Practical

    In appropriate research contexts, ICR assessment can improve both the internal quality and external reception of qualitative studies. Key benefits include improving the systematicity, communicability, and transparency of the coding process; promoting reflexivity and dialogue within research teams; and helping to satisfy diverse audiences of the ...

  12. Understanding Reliability and Validity in Qualitative Research

    To widen the spectrum of conceptualization of reliability and reveal the congruence of reliability and validity in qualitative research, Lincoln and Guba (1985) state that: "Since there can ...

  13. Redefining Qualitative Methods: Believability in the Fifth Moment

    Qualitative researchers can enhance reliability by ensuring research worker reliability, variations in observations, and the use of various data collection techniques such as the test-retest method and split-half method. These four methods of enhancing the reliability of qualitative research will be discussed.

  14. Contextualizing reliability and validity in qualitative research

    Trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, is frequently nonexistent, unsubstantial, or unexplained.

  15. The pillars of trustworthiness in qualitative research

    Dear Editor, The global community of medical and nursing researchers has increasingly embraced qualitative research approaches. This surge is seen in their autonomous utilization or incorporation as essential elements within mixed-method research attempts [1]. The growing trend is driven by the recognized additional benefits that qualitative approaches provide to the investigation process [2], [3].

  16. Quality in qualitative research: Through the lens of validity

    Trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, is frequently nonexistent, unsubstantial, or unexplained.

  17. Series: Practical guidance to qualitative research. Part 4

    Introduction. This article is the fourth and last in a series of four articles aiming to provide practical guidance for qualitative research. In an introductory paper, we have described the objective, nature and outline of the series [].Part 2 of the series focused on context, research questions and design of qualitative research [], whereas Part 3 concerned sampling, data collection and ...

  18. Internal Validity vs. External Validity in Research

    Differences. The essential difference between internal validity and external validity is that internal validity refers to the structure of a study (and its variables) while external validity refers to the universality of the results. But there are further differences between the two as well. For instance, internal validity focuses on showing a ...

  19. Dimensions of Critical Care Nursing

    Common validity techniques in qualitative research refer to design consideration, data generation, analytic procedures, and presentation. 56 First is the design consideration. Developing a self-conscious design, the paradigm assumption, the purposeful choice of small sample of informants relevant to the study, and the use of inductive approach ...

  20. Validity and Reliability in Qualitative research

    In Quantitative research, reliability refers to consistency of certain measurements, and validity - to whether these measurements "measure what they are supposed to measure". Things are slightly different, however, in Qualitative research. Reliability in qualitative studies is mostly a matter of "being thorough, careful and honest in ...

  21. Validity and Reliability in Qualitative Research

    Reliability and validity are equally important to consider in qualitative research. Ways to enhance validity in qualitative research include: Building reliability can include one or more of the following: The most well-known measure of qualitative reliability in education research is inter-rater reliability and consensus coding.

  22. Strategies for Establishing Dependability between Two Qualitative

    Qualitative research design: An interactive approach. 3rd ed. Los Angeles: Sage. Google Scholar. Merriam S. B. 1998. Case studies as qualitative research. ... Rose J., Johnson C. W. 2020. Contextualizing reliability and validity in qualitative research: Toward more rigorous and trustworthy qualitative social science in leisure research. Journal ...

  23. CVS-Q teen: an adapted, reliable and validated tool to assess ...

    A mixed-method sequential design was used. First, a qualitative study involved two nominal groups to assess the instrument's acceptability. ... Adequate internal consistency (person ...

  24. Internal reliability in qualitative research design refers to

    Inter-rater reliability can be used for interviews. It can also be called inter-observer reliability when referring to observational research. Here, researchers observe the same behavior independently (to avoid bias) and compare their data. If the data are similar, then the measure is reliable.

  25. "It Is as if I Gave a Gift to Myself": A Qualitative Phenomenological

    Leisure participation is a fundamental human and occupational right throughout life for working people, particularly in adulthood. A total of 28 working adults representing diverse regions of Turkey, from middle-class backgrounds, aged between 25 and 50, and without any known health conditions, were interviewed to gain insights into their leisure participation during the period September 2021 ...

  26. Reliability-Based Design for Strip-Footing Subjected to Inclined

    The bearing capacity of strip footings is significantly influenced by uncertainties related to the footing, soil conditions, and load inclination. Given the inherent unpredictability in footing design, the reliability-based design of geotechnical structures has garnered considerable interest in the research community. This paper presents a state-of-the-art probabilistic design for footings ...
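Several of the results above (e.g., items 11, 21, and 24) discuss inter-rater reliability in qualitative coding. A common way to quantify it is Cohen's kappa, which corrects the raw percent agreement between two coders for the agreement expected by chance. A minimal sketch, with hypothetical coder labels invented for illustration:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(coder1) == len(coder2) and coder1
    n = len(coder1)
    # Observed proportion of items on which the two coders agree.
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement from each coder's marginal label frequencies.
    f1, f2 = Counter(coder1), Counter(coder2)
    p_e = sum(f1[label] * f2.get(label, 0) for label in f1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned independently by two coders to six transcript segments.
coder_a = ["theme", "theme", "other", "other", "theme", "other"]
coder_b = ["theme", "theme", "other", "theme", "theme", "other"]
print(round(cohens_kappa(coder_a, coder_b), 3))  # -> 0.667
```

Here the coders agree on 5 of 6 segments (83%), but because chance agreement is 50%, kappa reduces the score to about 0.67, which is why kappa is preferred over raw percent agreement when reporting coding reliability.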