
Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures used to collect data and the procedures used to administer them.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (a minimal pooling sketch follows this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
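
To make the meta-analysis idea concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling in Python. The study estimates and standard errors are invented for illustration; real meta-analyses involve further steps, such as heterogeneity checks.

    import math

    # (effect estimate, standard error) per study -- hypothetical values
    studies = [(0.30, 0.12), (0.45, 0.20), (0.18, 0.09), (0.38, 0.15)]

    weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    # 95% confidence interval for the pooled effect
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")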

Example of Evaluating Research

Below is a sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique were used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes (a brief analysis sketch follows this list).
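
As a minimal sketch of the kind of reporting this evaluation calls for, the Python snippet below fits a simple regression on simulated data and prints the coefficient, its p-value, and R-squared as an effect-size measure. The variable names and numbers are invented; this is not the study's actual analysis.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    hours = rng.uniform(0, 8, size=500)                  # daily social media use (hypothetical)
    anxiety = 40 + 1.5 * hours + rng.normal(0, 8, 500)   # simulated mental health score

    X = sm.add_constant(hours)   # add intercept term
    fit = sm.OLS(anxiety, X).fit()

    print(f"slope = {fit.params[1]:.2f}")      # estimated effect per hour of use
    print(f"p-value = {fit.pvalues[1]:.4g}")   # statistical significance
    print(f"R^2 = {fit.rsquared:.3f}")         # variance explained (effect size)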

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.

Note: The example above is only a sample for students. Do not copy it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics of Evaluating Research

The characteristics to assess when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

The limitations of evaluating research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Learn how to develop a ToC-based evaluation


  • Evaluation Goals and Planning
  • Identify ToC-Based Questions
  • Choose an Evaluation Design
  • Select Measures

Choose an Appropriate Evaluation Design

Once you’ve identified your questions, you can select an appropriate evaluation design. Evaluation design refers to the overall approach to gathering information or data to answer specific research questions.

There is a spectrum of research design options—ranging from small-scale feasibility studies (sometimes called road tests) to larger-scale studies that use advanced scientific methodology. Each design option is suited to answer particular research questions.

The appropriate design for a specific project depends on what the project team hopes to learn from a particular implementation and evaluation cycle. Generally, as projects and programs move from small feasibility tests to later stage studies, methodological rigor increases.

[Figure: Evaluation Design Studies, illustrating the spectrum from feasibility studies to later-stage studies.]

In other words, you’ll use more advanced tools and processes that allow you to be more confident in your results. Sample sizes get larger, the number of measurement tools increases, and assessments are often standardized and norm-referenced (designed to compare an individual’s score to a particular population).
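
For example, a norm-referenced assessment places an individual's score relative to population norms. Here is a minimal sketch with invented norm values:

    from statistics import NormalDist

    raw, norm_mean, norm_sd = 112, 100, 15   # hypothetical assessment norms
    z = (raw - norm_mean) / norm_sd          # standardized (z) score
    percentile = NormalDist().cdf(z) * 100   # share of the population scoring lower

    print(f"z = {z:.2f}, percentile = {percentile:.0f}")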

In the IDEAS Framework, evaluation is an ongoing, iterative process. The idea is to investigate your ToC one domain at a time, beginning with program strategies and gradually expanding your focus until you’re ready to test the whole theory. Returning to the domino metaphor, we want to see if each domino in the chain is falling the way we expect it to.

Feasibility Study

Begin by asking:

“Are the program strategies feasible and acceptable?”

If you’re designing a program from scratch and implementing it for the first time, you’ll almost always need to begin by establishing feasibility and acceptability. However, suppose you’ve been implementing a program for some time, even without a formal evaluation. In that case, you may have already established feasibility and acceptability simply by demonstrating that the program is possible to implement and that participants feel it’s a good fit. If that’s the case, you might be able to skip over this step, so to speak, and turn your attention to the impact on targets, which we’ll go over in more detail below. On the other hand, for a long-standing program being adapted for a new context or population, you may need to revisit its feasibility and acceptability.

The appropriate evaluation design for answering questions about feasibility and acceptability is typically a feasibility study with a relatively small sample and a simple data collection process.

In this phase, you would collect data on program strategies, including:

  • Fidelity data (is the program being implemented as intended? See the sketch after this list)
  • Feedback from participants and program staff (through surveys, focus groups, and interviews)
  • Information about recruitment and retention
  • Participant demographics (to learn about who you’re serving and whether you’re serving who you intended to serve)
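
As a rough illustration of how simple the analysis can stay at this stage, here is a minimal sketch that summarizes two such feasibility metrics, implementation fidelity and participant retention, from invented data:

    # Hypothetical session log: was each session delivered, and as planned?
    sessions = [
        {"delivered": True,  "as_planned": True},
        {"delivered": True,  "as_planned": False},
        {"delivered": True,  "as_planned": True},
        {"delivered": False, "as_planned": False},
    ]
    enrolled, completed = 40, 31   # hypothetical recruitment/retention counts

    delivered = [s for s in sessions if s["delivered"]]
    fidelity = sum(s["as_planned"] for s in delivered) / len(delivered)
    retention = completed / enrolled

    print(f"fidelity = {fidelity:.0%}, retention = {retention:.0%}")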

Through fast-cycle iteration, you can use what you learn from a feasibility study to improve the program strategies.

Pilot Study

Once you have evidence to suggest that your strategies are feasible and acceptable, you can take the next step and turn your attention to the impact on targets by asking:

“Is there evidence to suggest that the targets are changing in the anticipated direction?”

The appropriate evaluation design to begin to investigate the impact on targets is usually a pilot study. With a somewhat larger sample and more complex design, pilot studies often gather information from participants before and after they participate in the program. In this phase, you would collect data on program strategies and targets. Note that in each phase, the focus of your evaluation expands to include more domains of your ToC. In a pilot study, in addition to data on targets (your primary focus), you’ll want to gather information on strategies to continue looking at feasibility and acceptability.

In this phase, you would collect data on:

  • Program strategies
  • Targets
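
Here is a minimal sketch of the pre/post comparison a pilot study might run on a target measure, using a paired t-test; the scores are invented for illustration.

    from scipy.stats import ttest_rel

    pre  = [14, 18, 11, 20, 16, 13, 17, 15]   # target measure before the program
    post = [17, 21, 12, 24, 18, 15, 20, 18]   # same participants afterwards

    t, p = ttest_rel(post, pre)
    mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
    print(f"mean change = {mean_change:.2f}, t = {t:.2f}, p = {p:.4f}")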

Later Stage Study

Once you’ve established feasibility and acceptability and have evidence to suggest your targets are changing in the expected direction, you’re ready to ask:

“Is there evidence to support our full theory of change?”

In other words, you’ll simultaneously ask:

  • Do our strategies continue to be feasible and acceptable?
  • Are the targets changing in the anticipated direction?
  • Are the outcomes changing in the anticipated direction?
  • Do the moderators help explain variability in impact?

The appropriate evaluation design for investigating your entire theory of change is a later-stage study, with a larger sample and more sophisticated study design, often including some kind of control or comparison group. In this phase, you would collect data on all domains of your ToC: strategies, targets, outcomes, and moderators.
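
As a minimal sketch of the comparison-group logic, with invented numbers, a later-stage analysis might contrast pre-to-post outcome change in the program group against a comparison group:

    from scipy.stats import ttest_ind

    program_change    = [5, 7, 4, 6, 8, 5, 7, 6]   # pre-to-post outcome change
    comparison_change = [2, 1, 3, 2, 0, 2, 3, 1]

    t, p = ttest_ind(program_change, comparison_change)
    print(f"t = {t:.2f}, p = {p:.4f}")

A real later-stage study would typically use more sophisticated models (for example, adjusting for covariates and moderators), but the underlying comparison is the same.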

Common Questions

There may be cases where it does make sense to skip the earlier steps and move right to a later-stage study. But in most cases, investigating your ToC one domain at a time has several benefits. First, later-stage studies are typically costly in terms of time and money. By starting with a relatively small and low-cost feasibility study and working toward more rigorous evaluation, you can ensure that time and money will be well spent on a program that’s more likely to be effective. If you were to skip ahead to a later-stage study, you might be disappointed to find that your outcomes aren’t changing because of problems with feasibility and acceptability, or because your targets aren’t changing (or aren’t changing enough).

Many programs do gather data on outcomes without looking at strategies and targets. One challenge with that approach is that if you don’t see evidence of impact on program outcomes, you won’t be able to know why that’s the case. Was there a problem with feasibility, and the people implementing the program weren’t able to deliver the program as it was intended? Was there an issue with acceptability, and participants tended to skip sessions or drop out of the program early? Maybe the implementation went smoothly, but the strategies just weren’t effective at changing your targets, and that’s where the causal chain broke down. Unless you gather data on strategies and targets, it’s hard to know what went wrong and what you can do to improve the program’s effectiveness.  

Evaluating Research Projects


These Guidelines are neither a handbook on evaluation nor a manual on how to evaluate, but a guide for the development, adaptation, or assessment of evaluation methods. They are a reference and a good-practice guide for building a specific guide for evaluating a given situation.

This page's content is aimed at the authors of a specific guide: in the present case, a guide for evaluating research projects. The specific guide's authors will pick from this material what is relevant to their needs and situation.

Objectives of evaluating research projects

The two most common situations faced by evaluators of development research projects are ex ante evaluations and ex post evaluations. In a few cases an intermediate evaluation, sometimes called a "mid-term" evaluation, may be performed. The formulation of the objectives in the specific guide will obviously depend on the situation and the needs of the stakeholders, but also on the researcher's environment and on ethical considerations.

Ex ante evaluation refers to the evaluation of a project proposal, for example for deciding whether or not to finance it, or to provide scientific support.

Ex post evaluation is conducted after a research project is completed, again for a variety of reasons, such as deciding to publish or apply the results, granting an award or a fellowship to the author(s), or building new research along a similar line.

An intermediate evaluation is aimed basically at helping to decide whether to go on, or to reorient the course of the research.

Such objectives are examined in detail below, in the pages on evaluation of research projects ex ante and on evaluation of projects ex post. A final section deals briefly with intermediate evaluation.

Importance of project evaluation

Evaluating research projects is a fundamental dimension in the evaluation of development research, for basically two reasons:

  • many of our evaluation concepts and practices are derived from our experience with research projects,
  • evaluation of projects is essential for achieving our long-term goal of maintaining and improving the quality of development research, and particularly of strengthening research capacity.

Dimensions of the evaluation of development research projects

Scientific quality is a basic requirement for all scientific research projects, and the role of publications here is decisive. This is obviously the case for ex post evaluation, but publications are also necessary in ex ante situations, where the evaluator needs to trust the proposal's authors to a certain extent and will largely take their past publications into account.

For more details see the page on evaluation of scientific publications and the annexes on scientific quality and on valorisation.

While scientific quality is a necessary dimension in each evaluation of a development research project, it is not sufficient. An equally indispensable dimension is relevance to development.

Other dimensions will be justified by the context, the evaluation's objectives, the evaluation sponsor's requirements, etc.


10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes. Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis or dissertation. It’s important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you’re conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you’ve read our guide on how to write a research question, you can use these examples to craft your own.

Each example pairs a weaker first question with a stronger second question; the explanations below describe what makes the stronger version work:

  • The first question is not focused enough. The second question is more focused, using clearly defined concepts.
  • Starting with “why” often means that your question is not focused enough: there are too many possible answers. By targeting just one aspect of the problem, the second question offers a clear path for research.
  • The first question is too broad and subjective: there are no clear criteria for what counts as “better.” The second question is much more specific. It uses clearly defined terms and narrows its focus to a specific population.
  • It is generally not feasible for academic research to answer broad normative questions. The second question is more specific, aiming to gain an understanding of possible solutions in order to make informed recommendations.
  • The first question is too simple: it can be answered with a simple yes or no. The second question is more complex, requiring in-depth investigation and the development of an original argument.
  • The first question is too broad and not very original. The second question identifies an underexplored aspect of the topic that requires investigation of various factors to answer.
  • The first question is not focused enough: it tries to address two different problems (the quality of sexual health services and LGBT support services). Even though the two issues are related, it’s not clear how the research will bring them together. The second integrates the two problems into one focused, specific question.
  • The first question is too simple, asking for a straightforward fact that can be easily found online. The second is a more complex question that requires analysis and detailed discussion to answer.
  • The first question is not original enough: it would be very difficult to contribute anything new. The second question takes a specific angle to make an original argument, and has more relevance to current social concerns and debates.
  • The first question asks for a ready-made solution and is not researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

Note that the design of your research question can depend on what method you are pursuing, whether qualitative, quantitative, or statistical.





Research Project Evaluation—Learnings from the PATHWAYS Project Experience

Aleksander Galas

1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland; [email protected] (A.G.); [email protected] (A.P.)

Aleksandra Pilat

Matilde Leonardi

2 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy; [email protected]

Beata Tobiasz-Adamczyk

Abstract

Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present a project evaluation methodology gathered during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS Project). The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into labor markets in different countries. This paper describes key project evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. Methods: A project evaluation tool was used to assess structure and resources; process, management and communication; achievements; and outcomes. The project used a mixed evaluation approach and included Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis. Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses: it showed good coordination and communication between project partners, as well as some key issues, such as the need for a shared glossary covering areas investigated by the project, problems related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

1. Introduction

Over the last few decades, a strong discussion on the role of the evaluation process in research has developed, especially in interdisciplinary or multidimensional research [1,2,3,4,5]. Despite existing concepts and definitions, the importance of the role of evaluation is often underestimated. These dismissive attitudes towards the evaluation process, along with a lack of real knowledge in this area, demonstrate why we need research evaluation and how research evaluation can improve the quality of research. Having firm definitions of ‘evaluation’ can link the purpose of research, general questions associated with methodological issues, expected results, and the implementation of results to specific strategies or practices.

Attention paid to project evaluation reveals two concurrent lines of thought in this area. The first is strongly associated with total quality management practices and operational performance; the second focuses on the evaluation processes needed for public health research and interventions [6,7].

The design and implementation of process evaluations in fields other than public health have been described as multidimensional. According to Baranowski and Stables, process evaluation consists of eleven components: recruitment (potential participants for corresponding parts of the program); maintenance (keeping participants involved in the program and data collection); context (an aspect of environment of intervention); resources (the materials necessary to attain project goals); implementation (the extent to which the program is implemented as designed); reach (the extent to which contacts are received by the targeted group); barriers (problems encountered in reaching participants); exposure (the extent to which participants view or read material); initial use (the extent to which a participant conducts activities specified in the materials); continued use (the extent to which a participant continues to do any of the activities); contamination (the extent to which participants receive interventions from outside the program and the extent to which the control group receives the treatment) [8].

There are two main factors shaping the evaluation process: (1) what is evaluated (whether the evaluation revolves around the project itself or outcomes external to the project), and (2) who the evaluator is (whether the evaluator is internal or external to the project team and program). Although there are several gaps in current knowledge about the evaluation of external outcomes, the use of a formal evaluation process for a research project itself is very rare.

To define a clear evaluation and monitoring methodology, we performed different steps. The purpose of this article is to present experiences from the project evaluation process implemented in the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS) project. The manuscript describes key project evaluation issues: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits. The PATHWAYS project can be understood as a specific case study, presented through a multidimensional approach; based on the experience associated with general evaluation, we can develop patterns of good practice which can be used in other projects.

1.1. Theoretical Framework

The first step has been the clear definition of what an evaluation strategy or methodology is. The term evaluation is defined by the Cambridge Dictionary as "the process of judging something's quality, importance, or value, or a report that includes this information" [9], and in a similar way by the Oxford Dictionary as "the making of a judgment about the amount, number, or value of something" [10]; in practice, it is frequently understood as associated with the end of an activity rather than with the process. Stufflebeam, in his monograph, defines evaluation as a study designed and conducted to assist some audience to assess an object's merit and worth. Considering this definition, there are four categories of evaluation approaches: (1) pseudo-evaluation; (2) questions- and/or methods-oriented evaluation; (3) improvement/accountability evaluation; (4) social agenda/advocacy evaluation [11].

In brief, considering Stufflebeam's classification, pseudo-evaluations promote invalid or incomplete findings. This happens when findings are selectively released or falsified. There are two pseudo-evaluation types proposed by Stufflebeam: (1) public relations-inspired studies (studies which do not seek truth but gather information to solicit positive impressions of a program), and (2) politically controlled studies (studies which seek the truth but inappropriately control the release of findings to right-to-know audiences).

The questions- and/or methods-oriented approach uses rather narrow questions oriented on the operational objectives of the project. Question-oriented evaluations use specific questions driven by accountability requirements or experts' opinions of what is important, while method-oriented evaluations favor the technical qualities of the program/process. The general concept of both is that it is better to ask a few pointed questions well to get information on program merit and worth [11]. In this group, one may find the following evaluation types:

  • (a) Objectives-based studies: typically focus on whether the program objectives have been achieved, through an internal perspective (by project executors).
  • (b) Accountability, particularly payment-by-results studies: stress the importance of obtaining an external, impartial perspective.
  • (c) Objective testing programs: use standardized, multiple-choice, norm-referenced tests.
  • (d) Outcome evaluation as value-added assessment: a recurrent evaluation linked with hierarchical gain score analysis.
  • (e) Performance testing: incorporates the assessment of performance (by written or spoken answers, or psychomotor presentations) and skills.
  • (f) Experimental studies: program evaluators perform a controlled experiment and contrast the outcomes observed.
  • (g) Management information systems: provide the information managers need to conduct their programs.
  • (h) Benefit-cost analysis approach: mainly sets of quantitative procedures to assess the full cost of a program and its returns.
  • (i) Clarification hearing: an evaluation in the form of a trial, in which role-playing evaluators competitively implement both a damning prosecution of a program (arguing that it failed) and a defense of the program (arguing that it succeeded). A judge then hears arguments within the framework of a jury trial and controls the proceedings according to advance agreements on rules of evidence and trial procedures.
  • (j) Case study evaluation: focused, in-depth description, analysis, and synthesis of a particular program.
  • (k) Criticism and connoisseurship: experts in a given area perform in-depth analysis and evaluation that could not be done in any other way.
  • (l) Program theory-based evaluation: builds on a validated theory of how programs of a certain type operate within similar settings to produce outcomes (e.g., the Health Belief Model; the PRECEDE-PROCEED model (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation; Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) proposed by L. W. Green; or the Stages of Change theory by Prochaska).
  • (m) Mixed-method studies: include different qualitative and quantitative methods.

The third group of methods considered in evaluation theory are improvement/accountability-oriented evaluation approaches. Among these, there are the following:

  • (a) Decision/accountability-oriented studies: emphasize that evaluation should be used proactively to help improve a program and retroactively to assess its merit and worth.
  • (b) Consumer-oriented studies: the evaluator is a surrogate consumer who draws direct conclusions about the evaluated program.
  • (c) Accreditation/certification approach: an accreditation study to verify whether certification requirements have been/are fulfilled.

Finally, a social agenda/advocacy evaluation approach focuses on the assessment of the difference which is or was intended to be the effect of the program evaluation. The evaluation process in this type of approach works in a loop, starting with an independent evaluator who provides counsel and advice towards understanding, judging, and improving programs, with evaluations serving the client's needs. In this group, there are:

  • (a) Client-centered studies (or responsive evaluation): evaluators work with, and for, the support of diverse client groups.
  • (b) Constructivist evaluation: evaluators are authorized and expected to maneuver the evaluation to emancipate and empower involved and affected disenfranchised people.
  • (c) Deliberative democratic evaluation: evaluators work within an explicit democratic framework and uphold democratic principles in reaching defensible conclusions.
  • (d) Utilization-focused evaluation: explicitly geared to ensure that program evaluations make an impact.

1.2. Implementation of the Evaluation Process in the EU PATHWAYS Project

The idea to involve the evaluation process as an integrated goal of the PATHWAYS project was determined by several factors relating to the main goal of the project, defined as a special intervention addressing existing attitudes towards occupational mobility and the reintegration of working-age people suffering from specific chronic conditions into the labor market in 12 European countries. Participating countries had different cultural and social backgrounds and different pervasive attitudes towards people suffering from chronic conditions.

The components of evaluation processes previously discussed proved helpful when planning the PATHWAYS evaluation, especially in relation to different aspects of environmental contexts. The PATHWAYS project focused on chronic conditions including: mental health issues, neurological diseases, metabolic disorders, musculoskeletal disorders, respiratory diseases, cardiovascular diseases, and persons with cancer. Within this group, the project found a hierarchy of patients and social and medical statuses defined by the nature of their health conditions.

According to the project’s monitoring and evaluation plan, the evaluation process followed specific challenges defined by the project’s broad and specific goals and monitored the progress of implementing key components by assessing the effectiveness of consecutive steps and identifying conditions supporting the contextual effectiveness. Another significant aim of the evaluation component of the PATHWAYS project was to recognize the value and effectiveness of using a purposely developed methodology consisting of a wide set of quantitative and qualitative methods. The triangulation of methods was very useful and provided the opportunity to develop a multidimensional approach to the project [12].

From the theoretical framework, special attention was paid to the explanation of medical, cultural, social and institutional barriers influencing the chance of employment of chronically ill persons in relation to the characteristics of the participating countries.

Levels of satisfaction with project participation, as well as with expected or achieved results and coping with challenges on local–community levels and macro-social levels, were another source of evaluation.

In the PATHWAYS project, the evaluation was implemented for an unusual purpose. This quasi-experimental design was developed to assess different aspects of the multidimensional project, which used a variety of methods (systematic review of the literature; content analysis of existing documents, acts, data and reports; surveys at different country levels; in-depth interviews) in the different phases of its three years. The evaluation monitored each stage of the project and focused on process implementation, with the goal of improving every step of the project. The evaluation process allowed critical assessment and deep analysis of the benefits and shortcomings of each specific phase of the project.

The purpose of the evaluation was to monitor the main steps of the project, including the expectations associated with the multidimensional methodological approach used by PATHWAYS partners, and to improve communication between partners from different professional and methodological backgrounds involved in all phases of the project, so as to avoid misunderstandings of the specific steps as well as the main goals.

2. Materials and Methods

The paper describes the methodology and results gathered during the implementation of Work Package 3 (Evaluation) of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (PATHWAYS) project. The work package was intended to maintain internal control over the course of the project and to ensure the timely fulfillment of tasks, milestones, and objectives by all project partners.

2.1. Participants

The project consortium involved 12 partners from 10 different European countries. These included academics (representing cross-disciplinary research, including socio-environmental determinants of health) and clinicians; institutions actively working for the integration of people with chronic and mental health problems and disability; educational bodies (working in the area of disability and focusing on inclusive education); national health institutes (for the rehabilitation of patients with functional and workplace impairments); an institution for inter-professional rehabilitation at a country level (coordinating medical, social, educational, pre-vocational and vocational rehabilitation); and a company providing patient-centered services (in neurorehabilitation). All the partners possessed vast knowledge and high-level expertise in the area of interest, and all endorsed the World Health Organization’s (WHO) International Classification of Functioning, Disability and Health (ICF) and the biopsychosocial model of health and functioning. The consortium was created based on the following criteria:

  • vision, mission, and activities in the area of the project's purposes,
  • a high level of experience in the area (supported by publications) and in research (involvement in international projects, past collaboration with the coordinator and/or other partners),
  • broad geographical, cultural and socio-political representation across EU countries,
  • representation of different stakeholder types in the area.

2.2. Project Evaluation Tool

The tool development process involved the following steps:

  • (1) Review definitions of ‘evaluation’ and adopt the one which best fits the reality of the public health research area;
  • (2) Review evaluation approaches and decide on the content which should be applicable in public health research;
  • (3) Create items to be used in the evaluation tool;
  • (4) Decide on implementation timing.

According to the PATHWAYS project protocol, an evaluation tool for the internal project evaluation was required to collect information about: (1) structure and resources; (2) process, management and communication; (3) achievements and/or outcomes; and (4) SWOT analysis. A mixed-methods approach was chosen. The specific purposes and approaches of the evaluation process are presented in Table 1.

Table 1. Evaluation purposes and the approaches adopted for each purpose in the PATHWAYS project (number of items created in parentheses *):

  • question-oriented and management information system (6 items)
  • question-oriented and management information system (3 items)
  • question-oriented, management information system, and improvement/accountability-oriented (10 items)
  • objective-based, outcome evaluation as value-added assessment, and client-centered (3 items)
  • objective-based, case-study, and accountability-oriented (10 items)

* Open-ended questions are not counted in the item counts.

The tool was prepared following different steps. In the paragraph assessing structure and resources, there were questions about the number of partners, professional competences, assigned roles, human, financial and time resources, defined activities and tasks, and the communication plan. The second paragraph, on process, management and communication, collected information about the coordination process, consensus level, quality of communication among coordinators, work package leaders, and partners, whether the project was carried out according to the plan, involvement of target groups, usefulness of developed materials, and any difficulties in the project realization. Finally, the paragraph on achievements and outcomes gathered information about project-specific activities such as public-awareness raising, stakeholder participation and involvement, whether planned outcomes (e.g., milestones) were achieved, dissemination activities, and opinions on whether project outcomes met the needs of the target groups. Additionally, it was decided to implement SWOT analysis as a part of the evaluation process. SWOT analysis derives its name from the evaluation of Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) faced by a company, industry or, in this case, project consortium. SWOT analysis comes from the business world and was developed in the 1960s at Harvard Business School as a tool for improving management strategies among companies, institutions, or organizations [13,14]. However, in recent years, SWOT analysis has been adapted in the context of research to improve programs or projects.

For a better understanding of SWOT analysis, it is important to highlight the internal nature of Strengths and Weaknesses, which are considered controllable. Strengths refer to factors inside the project, such as the capabilities and competences of partners, whereas weaknesses refer to aspects which need improvement, such as resources. Conversely, Opportunities and Threats are considered outside factors and uncontrollable [15]. Opportunities are maximized to fit the organization's values and resources, and threats are the factors that the organization is not well equipped to deal with [9].

The PATHWAYS project members participated in SWOT analyses every three months. They answered four open questions about strengths, weaknesses, opportunities, and threats identified in the evaluated period (the last three months). They were then asked to assess those items on a 10-point scale. The sample included results from nine evaluated periods from partners from ten different countries.
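
A minimal sketch of how such ratings can be aggregated numerically, with invented data: mean scores per SWOT category are compared across evaluation waves to track performance over time.

    from statistics import mean

    # waves -> SWOT category -> partners' ratings on a 10-point scale (hypothetical)
    waves = {
        "W1": {"S": [6, 7, 5], "W": [4, 5, 6], "O": [7, 6, 7], "T": [5, 4, 5]},
        "W2": {"S": [7, 8, 7], "W": [3, 4, 4], "O": [8, 7, 8], "T": [4, 3, 4]},
    }

    for wave, categories in waves.items():
        summary = ", ".join(f"{c} = {mean(r):.1f}" for c, r in categories.items())
        print(f"{wave}: {summary}")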

The tool for the internal evaluation of the PATHWAYS project is presented in Appendix A.

2.3. Tool Implementation and Data Collection

The PATHWAYS on-going evaluation took place at three-month intervals. It consisted of on-line surveys, and every partner assigned a representative who was expected to have good knowledge of the project's progress. The structure and resources were assessed only twice, at the beginning (3rd month) and at the end (36th month) of the project. The process, management, and communication questions, as well as the SWOT analysis questions, were asked every three months. The achievements and outcomes questions started after the first year of implementation (i.e., after the 15th month), and some items in this paragraph (results achieved, whether project outcomes met the needs of the target groups, and regular publications) were only implemented at the end of the project (36th month).
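
A sketch of the implied schedule, purely illustrative and with the wave months inferred from the description above:

    # Survey waves at 3-month intervals over a 36-month project
    waves = list(range(3, 37, 3))   # months 3, 6, ..., 36

    schedule = {
        m: ["process/management/communication", "SWOT"]
           + (["structure and resources"] if m in (3, 36) else [])
           + (["achievements and outcomes"] if m >= 15 else [])
        for m in waves
    }
    print(schedule[3], schedule[15], schedule[36], sep="\n")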

2.4. Evaluation Team

The evaluation team was created from professionals with different backgrounds and extensive experience in research methodology, sociology, social research methods and public health.

The project started in 2015 and was carried out for 36 months. There were 12 partners in the PATHWAYS project, representing Austria, Belgium, the Czech Republic, Germany, Greece, Italy, Norway, Poland, Slovenia, Spain, and a European organization. The on-line questionnaire was sent to all partners one week after the specified period ended, and project partners had at least two weeks to answer the survey. Eleven rounds of the survey were performed.

The participation rate in the consecutive evaluation surveys was 11 (91.7%), 12 (100%), 12 (100%), 11 (91.7%), 10 (83.3%), 11 (91.7%), 11 (91.7%), 10 (83.3%), and 11 (91.7%) until the project end. Overall, it rarely covered the whole group, which may have resulted from a lack of coercive mechanisms at the project level to compel answers to the evaluation questions.
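
The reported percentages are simply responders out of the 12 partners; a one-line check in Python, with the responder counts copied from the text above:

    responders = [11, 12, 12, 11, 10, 11, 11, 10, 11]   # per survey round
    print(", ".join(f"{n} ({n / 12:.1%})" for n in responders))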

3. Results

3.1. Evaluation Results Considering Structure and Resources (3rd Month Only)

A total of 11 out of 12 project partners participated in the first evaluation survey. The structure and resources of the project were not assessed by the project coordinator; as such, the results represent the opinions of the other 10 participating partners. The majority of respondents rated the project consortium as having at least adequate professional competencies. In total, eight to nine project partners found human, financial, and time resources ‘just right’ and the communication plan ‘clear’. More concerns were observed regarding the clarity of tasks, what was expected from each partner, and how specific project activities should be or were assigned.

3.2. Evaluation Results Considering Process, Management and Communication

The opinions about project coordination and communication processes (with the coordinator, between WP leaders, and between individual partners/researchers) were assessed as ‘good’ and ‘very good’ along the whole period. There were some issues, however, when it came to the realization of specific goals, deliverables, or milestones of the project.

Given the broad scope of the project and the participating partner countries, we created a glossary to unify the common terms used in the project. This was a challenge, as during project implementation there were several discussions and inconsistencies in the concepts provided (Figure 1).

Figure 1. Partners’ opinions about the consensus around terms (shared glossary) in the project consortium across evaluation waves (W1: after the 3-month realization period, and at 3-month intervals thereafter).

Other issues which appeared during project implementation were the recruitment of, involvement with, and cooperation with stakeholders. There was a range of groups to be contacted and investigated during the project, including individual patients suffering from chronic conditions, patients’ advocacy groups and national governmental organizations, policy makers, employers, and international organizations. It was found that during the project, the interest and involvement level of the aforementioned groups was quite low and difficult to achieve, which led to some delays in project implementation (Figure 2). This was the main cause of the smaller percentages of “what was expected to be done in designated periods of project realization time”. The issue was monitored and addressed by intensifying activities in this area (Figure 3).

Figure 2. Partners’ reports on whether the project had been carried out according to the plan (a) and the experience of any problems in the process of project realization (b) (W1: after the 3-month realization period, and at 3-month intervals thereafter).

Figure 3. Partners’ reports on an approximate estimation (in percent) of the project plan implementation (what has been done according to the plan) (a) and the involvement of target groups (b) (W1: after the 3-month realization period, and at 3-month intervals thereafter).

3.3. Evaluation Results Considering Achievements and Outcomes

The evaluation process was prepared to monitor project milestones and deliverables. One of the PATHWAYS project goals was to raise public awareness surrounding the reintegration of chronically ill people into the labor market. This was assessed subjectively by cooperating partners, and only half (six) felt they achieved complete success on that measure. The evaluation process monitored planned outcomes according to: (1) determination of strategies for awareness-raising activities, (2) assessment of employment-related needs, and (3) development of guidelines (which were planned by the project). The majority of partners completely fulfilled this task. Furthermore, the dissemination process was also carried out according to the plan.

3.4. Evaluation Results from SWOT

3.4.1. Strengths

Amongst the key issues identified across all nine evaluated periods (Figure 4), the “strong consortium” was highlighted as the most important strength of the PATHWAYS project. The most common arguments for this assessment were the coordinator’s experience in international projects, the involvement of interdisciplinary experts who could guarantee a holistic approach to the subject, and a highly motivated team. This was followed by the uniqueness of the topic. Project implementers pointed to the relevance of the analyzed issues, which are consistent with social needs. They also highlighted that this topic concerned an unexplored area in employment policy. The interdisciplinary and international approach was also emphasized. According to the project implementers, the international approach allowed mapping of vocational and prevocational processes among patients with chronic conditions and disability throughout Europe. The interdisciplinary approach, on the other hand, enabled researchers to create a holistic framework that stimulates innovation by thinking across the boundaries of particular disciplines, especially as the PATHWAYS project brings together health scientists from diverse fields (physicians, psychologists, medical sociologists, etc.) from ten European countries. This interdisciplinary approach is also supported by the methodology, which is based on a mixed-method approach (qualitative and quantitative data). The involvement of an advocacy group was another strength identified by the project implementers. It was stressed that the involvement of different types of stakeholders increased validity and social triangulation. It was also assumed that it would allow for the integration of relevant stakeholders. The last strength, the usefulness of results, was identified only in the last two evaluation waves, when the first results had been measured.

Figure 4. SWOT Analysis—a summary of main issues reported by PATHWAYS project partners.

3.4.2. Weaknesses

The survey respondents agreed that the main weaknesses of the project were time and human resources. The subject of the PATHWAYS project turned out to be very broad, and the implementers therefore pointed to insufficient human resources and inadequate time for the implementation of individual tasks, as well as of the project overall. This was related to the broad categories of chronic diseases chosen for analysis in the project. On the one hand, the implementers complained about the insufficient number of chronic diseases taken into account in the project; on the other hand, they admitted that it was not possible to cover all chronic diseases in detail. The scope of the project was reported as another weakness. In the successive waves of evaluation, the implementers more often pointed out that it was hard to cover all relevant topics.

Nevertheless, some of the major weaknesses reported during the project evaluation were methodological problems. Respondents regularly pointed to problems with the implementation of tasks. For example, survey respondents highlighted the need for more open questions in the survey, noted that the questionnaire was too long or too complicated, that the tools were not adjusted for relevance in the national context, and so on. Another issue was that the working language was English, while all tools and survey questionnaires needed to be translated into different languages; this was not always considered by the Commission in terms of timing and resources. This insight could prove useful for further projects, as well as for future collaborations.

Difficulties in involving stakeholders were reported, especially during tasks that required their active commitment, such as participation in in-depth interviews or online questionnaires. Interestingly, the international approach was considered both a strength and a weakness of the project. The implementers highlighted the complexity of making comparisons between health care and/or social care in different countries. The budget was also identified as a weakness by the project implementers. More funds obtained from the partners could have helped PATHWAYS enhance dissemination and stakeholders’ participation.

3.4.3. Opportunities

A list of seven issues within the opportunities category reflects the positive outlook of survey respondents from the beginning of the project to its final stage. Social utility was ranked as the top opportunity. The implementers emphasized that the project could fill a gap between existing solutions and the real needs of people with chronic diseases and mental disorders. The implementers also highlighted the role of future recommendations, which would consist of proposed solutions for professionals, employees, employers, and politicians. These advantages are strongly associated with increasing awareness of the employment situation of people with chronic diseases in Europe and the relevance of the problem. Alignment with policies, strategies, and stakeholders’ interests was also identified as an opportunity. The topic is actively discussed at the European and national levels, and labor market and employment issues are increasingly emphasized in public discourse. More relevantly, the European Commission considers the issue crucial, and the results of the project are in line with its requests for the future. The implementers also observed increasing interest from stakeholders, which is very important for the future of the project. Without doubt, the social network of project implementers provides a huge opportunity for the sustainability of results and the implementation of recommendations.

3.4.4. Threats

Insufficient response from stakeholders was the top perceived threat selected by survey respondents. The implementers indicated that insufficient involvement of stakeholders resulted in low response rates in the research phase, which posed a huge threat for the project. The interdisciplinary nature of the PATHWAYS project was highlighted as a potential threat due to differences in technical terminology and different systems of regulating the employment of persons with reduced work capacity in each country, as well as many differences in the legislation process. Insufficient funding and lack of existing data were identified as the last two threats.

One novel aspect of the evaluation process in the PATHWAYS project was a numerical SWOT analysis. Participants were asked to score strengths, weaknesses, opportunities, and threats from 0 (none at all) to 10 (a great many). This approach enabled us to obtain a subjective score of how partners perceived the PATHWAYS project itself and its performance, as well as how that perception changed over time. Data showed an increase in both strengths and opportunities and a decrease in weaknesses and threats over the course of project implementation (Figure 5).
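To make the idea concrete, the sketch below computes per-wave mean scores for the four SWOT dimensions; the partner scores are invented for illustration and are not the project’s data.

```python
# Illustrative numerical SWOT: each partner scores strengths (S),
# weaknesses (W), opportunities (O), and threats (T) on a 0-10 scale
# in each evaluation wave. The scores below are invented examples.
from statistics import mean

waves = {  # wave -> one (S, W, O, T) tuple per responding partner
    "W1": [(6, 5, 6, 4), (7, 4, 5, 5), (5, 6, 6, 4)],
    "W6": [(8, 3, 7, 3), (8, 4, 8, 2), (7, 3, 7, 3)],
    "W12": [(9, 2, 8, 2), (9, 3, 9, 2), (8, 2, 8, 1)],
}

for wave, scores in waves.items():
    # zip(*scores) transposes the list so each dimension can be averaged
    s, w, o, t = (mean(dim) for dim in zip(*scores))
    print(f"{wave}: S={s:.1f} W={w:.1f} O={o:.1f} T={t:.1f}")
```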

Figure 5. Numerical SWOT, combined, over a period of 36 months of project realization (W1—after 3-month realization period, and at 3-month intervals thereafter).

4. Discussion

The need for project evaluation was born in industry, which faced challenges regarding how to achieve market goals in a more efficient way. Nowadays, every process, including research project implementation, faces questions regarding its effectiveness and efficiency.

The challenge of research project evaluation is that the majority of research projects are described as unique, although we believe many projects face issues and challenges similar to those observed in the PATHWAYS project.

The main objectives of the PATHWAYS Project were (a) to identify integration and re-integration strategies that are available in Europe and beyond for individuals with chronic diseases and mental disorders experiencing work-related problems (such as unemployment, absenteeism, reduced productivity, stigmatization), (b) to determine their effectiveness, (c) to assess the specific employment-related needs of those people, and (d) to develop guidelines supporting the implementation of effective strategies of professional integration and reintegration. The broad area of investigation, partial knowledge in the field, diversity of determinants across European Union countries, and involvement with stakeholders representing different groups caused several challenges in the project, including:

  • problem: unexplored, challenging, demanding (e.g., how to encourage stakeholders to participate and share experiences);
  • diversity: different European regions; different political, social, and cultural determinants; different public health and welfare systems; differences in legal regulations; different employment policies and issues in the system;
  • multidimensionality of research: quantitative and qualitative studies, including focus groups, opinions from professionals, and small surveys in target groups (workers with chronic conditions).

The challenges to the project consequently led to several key issues, which should be taken into account during project realization:

  • partners: each with their own expertise and interests, different expectations, and different views on what is more important to focus on and highlight;
  • issues associated with unification: between different countries with different systems (law, work-related and welfare definitions, disability classification, and others);
  • coordination: the multidimensionality of the project may have caused some partners’ research activities to move in the wrong direction (collecting data or knowledge not needed for the project’s purposes), and a lack of project vision in (some) partners might postpone activities through misunderstanding;
  • exchange of information: multidimensionality, the fact that different tasks were accomplished by different centers, and obstacles to data collection required good communication methods and a smooth exchange of information.

Identified Issues and Implemented Solutions

There were several issues identified through the semi-internal evaluation process performed during the project. Those most relevant to project realization are listed in Table 2.

Table 2. Issues identified by the evaluation process and the solutions implemented.

Issue: Clarity of tasks, what is expected from each partner, and how specific project activities are assigned.
Comment: Each partner had a final copy of the PATHWAYS project proposal with a description of the activities in each WP.
Solution: The specific tasks planned in each WP were presented, discussed, and explained during the kick-off meeting.

Issue: Doubts about project resources and timing.
Comment: Project tasks and WP coordinators were agreed before the submission of the project for funding; the timetable was intensively discussed and agreed, and the timing, deliverables, and milestones were put into the Gantt chart. All issues were discussed and clarified during the kick-off meeting. The main doubts appeared during the realization of the project and were mainly caused by low levels of stakeholder participation and involvement.
Solutions: Successful strategies were presented by other participants; coordinators were expected to monitor the situation carefully during project realization.

Issue: Glossary.
Comment: There was no specific, named task to prepare a common glossary during project implementation; the need arose as a consequence of the variability in definitions regarding disability, the labor sector, legal regulations, and worker rights across the participating European countries.

Issue: Broad area of research (broad purposes, several diseases), meaning some partners had no expertise in every disease and reintegration strategy.
Solution: The research team was assembled to include representatives of different expert groups in the investigated area.

The PATHWAYS project included diverse partners representing different areas of expertise and activity (considering the broad spectrum of chronic diseases, decline in functioning, disability, and their role in the labor market) in different countries and social security systems, which made developing a common language for effective communication and a better understanding of facts and circumstances in different countries a challenge. The implementation of continuous project process monitoring, with proper adjustment, enabled the team to overcome these challenges.

The evaluation tool has several benefits. First, it covers all key areas of the research project, including structure and available resources, the course of the process, the quality and timing of management and communication, and project achievements and outcomes. Continuous evaluation of all of these areas provides in-depth knowledge about project performance. Second, the implementation of the SWOT tool gave all project partners an opportunity to share good and bad experiences, and the numerical version of SWOT provided a good picture of the interrelations between strengths and weaknesses and between opportunities and threats in the project, showing changes in their intensity over time. Additionally, numerical SWOT can verify whether the perception of a project improves over time (as was observed in the PATHWAYS project), showing an increase in strengths and opportunities and a decrease in weaknesses and threats. Third, the intervals at which partners were ‘screened’ by the evaluation questionnaire seem appropriate: the process was not very demanding, yet frequent enough to diagnose issues in the project process in good time.

The experiences with the evaluation also revealed some limitations. There were no coercive mechanisms for participation in the evaluation questionnaires, which resulted in a less than 100% response rate in some screening surveys. In practice, this was not a problem in the PATHWAYS project; in theory, however, it might leave problems unrevealed, as partners experiencing trouble might not report it. Another point is that asking the project coordinator about the quality of the consortium has little value (the consortium is created by the coordinator in the best achievable way, and other answers are hard to expect, especially at the beginning of the project). Regarding the tool itself, the question “Could you give us an approximate estimation (in percent) of the project plan realization (what has been done according to the plan)?” was intended to capture what had been done out of what should have been done during each evaluation period, meaning that 100% corresponded to what should be done within a given 3-month period of our project. This question, however, was slightly confusing at the beginning, as it was interpreted as the percentage of all tasks and activities planned for the whole duration of the project. Additionally, this question only works provided that precise, clear plans on the type and timing of tasks were allocated to the project partners. Lastly, some questions showed very low variability in answers across evaluation surveys (mainly those about coordination and communication). In our opinion, if a project runs smoothly, such questions may seem useless, but in more complicated projects they may reveal potential causes of trouble.
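The two readings of the percent-of-plan question can be illustrated numerically. The sketch below, with invented numbers, converts a cumulative whole-project estimate into the per-period figure the evaluators intended, under the simplifying assumption that each 3-month wave carries an equal share of the total plan.

```python
# Two readings of "percent of the plan realized": cumulative
# (share of the WHOLE project done so far) vs. per-period (share of
# THIS 3-month period's plan done). Numbers are invented; we assume
# each of the 12 waves carries an equal 1/12 share of the total plan.
cumulative = {"W1": 8, "W2": 15, "W3": 25}  # % of whole project done

share_per_period = 100 / 12  # equal-weight assumption

prev = 0.0
for wave, total_done in cumulative.items():
    done_this_period = total_done - prev
    pct_of_period_plan = 100 * done_this_period / share_per_period
    print(f"{wave}: {pct_of_period_plan:.0f}% of this period's plan")
    prev = total_done
```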

5. Conclusions

The PATHWAYS project experience shows a need for the implementation of structured evaluation processes in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every project, and we suggest the following steps when doing multidisciplinary research (a small scheduling sketch follows the list):

  • Define area/s of interest (decision maker level/s; providers; beneficiaries: direct, indirect),
  • Identify 2–3 possible partners for each area (chain sampling makes this easier and provides more background knowledge; check for publications),
  • Prepare a research plan (propose, ask for supportive information, clarify, negotiate),
  • Create cross-partner groups of experts,
  • Prepare a communication strategy (communication channels, responsible individuals, timing),
  • Prepare a glossary covering all the important issues covered by the research project,
  • Monitor the project process and timing, identify concerns, troubles, causes of delays,
  • Prepare for the next steps in advance, inform project partners about the upcoming activities,
  • Summarize, show good practices, successful strategies (during project realization, to achieve better project performance).
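As referenced above, these steps can be operationalized as a simple monitoring plan. The sketch below is a hypothetical illustration rather than an instrument from the project; the class name and section labels are our own.

```python
# Hypothetical sketch of a monitoring plan implementing the steps
# above: a fixed 3-month evaluation cadence over a 36-month project,
# as in PATHWAYS. Names and section labels are illustrative.
from dataclasses import dataclass, field

@dataclass
class EvaluationWave:
    label: str          # e.g., "W1"
    month: int          # project month in which the wave is fielded
    sections: list = field(default_factory=lambda: [
        "process, management and communication",
        "SWOT (descriptive and numerical)",
    ])

plan = [EvaluationWave(f"W{i + 1}", month=3 * (i + 1)) for i in range(12)]

for wave in plan:
    print(f"{wave.label} (month {wave.month}): {', '.join(wave.sections)}")
```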

Acknowledgments

The current study was part of the PATHWAYS project, which received funding from the European Union’s Health Program (2014–2020), grant agreement no. 663474.

The evaluation questionnaire developed for the PATHWAYS Project.

1. Structure and Resources (questions to be asked at the beginning of the project and in the 36th month)

1.1. The number of partners (institutions) in the consortium is: adequate/too many/too few. Comments/potential improvements.

1.2. In your opinion, the professional competences of the members of the consortium are: adequate/satisfactory enough/not enough. Comments/potential improvements.

1.3. The roles of participants were defined: very clearly/relatively clearly/unclearly. Comments/potential improvements.

1.4. The resources foreseen for this project were: (a) human resources: too much/just right/not sufficient; (b) financial resources: too much/just right/not sufficient; (c) time resources: too much/just right/not sufficient. Comments/potential improvements.

1.5. Is the plan of communication in the project clear? Yes/no. Comments/potential improvements.

1.6. The tasks in the research phases were defined: (a) in terms of clarity of what is expected to be done: very well/well/poorly/very poorly; (b) in terms of division of work: very well/well/poorly/very poorly. Comments/potential improvements.

1.7. The activities to be carried out by each partner were: (a) defined: very well/well/poorly/very poorly; (b) implemented: very well/well/poorly/very poorly. Comments/potential improvements.

2. Process, Management and Communication (questions to be asked every 3 months, with the exception of question 2.10)

2.1. The coordination of the project in the past 3 months has been: very good/good/poor/very poor. Comments/potential improvements.

2.2. The consensus around terms (shared glossary) in the project consortium seems to be: full/partial/poor. Comments/potential improvements.

2.3. The process of implementation of the research phases was: very good/good/poor/very poor. Comments/potential improvements.

2.4. How would you rate the communication with the coordinator of the project? (a) in terms of timing: very good/good/poor/very poor; (b) in terms of quality of information: very good/good/poor/very poor. Comments/potential improvements.

2.5. How would you rate the communication with WP leaders? (a) in terms of timing: very good/good/poor/very poor; (b) in terms of quality of information: very good/good/poor/very poor. Comments/potential improvements.

2.6. How would you rate the communication between partners? (a) in terms of timing: very good/good/poor/very poor; (b) in terms of quality of information: very good/good/poor/very poor. Comments/potential improvements.

2.7. Has the project been carried out according to the plan? Yes/rather yes/rather no/no. Could you give us an approximate estimation (in percent) of the project plan realization (what has been done according to the plan)? Comments/potential improvements.

2.8. How would you rate the involvement of target groups? Very good/good/poor/very poor. Comments/potential improvements.

2.9. How would you rate the usefulness of the developed materials (leaflets, information on the website, etc.)? Very good/good/poor/very poor. Comments/potential improvements.

2.10. The organization of project meetings was carried out: very well/well/poorly/very poorly. Comments/potential improvements. (To be asked after team meetings.)

2.11. Have you experienced any difficulties in the process of project realization? Yes/no. If yes, describe briefly.

3. Achievements/Outcomes (questions to be asked every 3 months after the first year, with the exception of questions 3.5, 3.6, and 3.7)

3.1. The educational and public awareness-raising activities that have been carried out in my country were successful: yes, completely/only partly/not at all. List the educational and public awareness-raising activities based on the project that have been undertaken (i.e., publications in the press/on the internet, workshops, professional conferences, etc.). Comments/potential improvements.

3.2. Have the planned outcomes (milestones and deliverables) been achieved (so far) with respect to: (a) identification of strategies: yes, completely/only partly/not at all; (b) determination of the effectiveness of these strategies: yes, completely/only partly/not at all; (c) assessment of employment-related needs: yes, completely/only partly/not at all; (d) development of guidelines: yes, completely/only partly/not at all. Comments/potential improvements.

3.3. Has the participation of relevant stakeholders been achieved (so far)? All of them/most of them/some of them/none of them. At the moment, how many contacts were successfully established against those planned (in percent)? Comments/potential improvements.

3.4. Is the dissemination process carried out according to the plan? Completely/only partly/not at all. Comments/potential improvements.

3.5. The results of the PATHWAYS project have been achieved: completely/only partly/not at all. Comments/potential improvements. (To be asked in the 36th month.)

3.6. Do project outcomes meet the needs of the target groups? Yes, completely/only partly/not at all. Comments/potential improvements. (To be asked in the 36th month.)

3.7. Please list scientific publications and conference presentations based on the project that have been published/delivered (only those where a member of your team is the first author/presenter). (To be asked in the 36th month.)

SWOT analysis:

What are the strengths and weaknesses of the project? (Please list.)

What are the threats and opportunities? (Please list.)

Visual SWOT:

Please rate the project on the following continua:

How would you rate:

(no strengths) 0 1 2 3 4 5 6 7 8 9 10 (a lot of strengths, very strong)

(no weaknesses) 0 1 2 3 4 5 6 7 8 9 10 (a lot of weaknesses, very weak)

(no risks) 0 1 2 3 4 5 6 7 8 9 10 (several risks, inability to accomplish the task(s))

(no opportunities) 0 1 2 3 4 5 6 7 8 9 10 (project has a lot of opportunities)
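To compare such categorical answers across evaluation waves, they must be coded numerically. The mapping below is a hypothetical illustration; the questionnaire itself does not prescribe a coding.

```python
# Illustrative numeric coding of the questionnaire's answer scales so
# that responses can be compared across waves. The mapping is ours;
# the questionnaire does not specify one.
SCALE_QUALITY = {"very good": 4, "good": 3, "poor": 2, "very poor": 1}
SCALE_EXTENT = {"yes, completely": 3, "only partly": 2, "not at all": 1}

def code_answer(answer: str, scale: dict) -> int:
    """Return the numeric code for a categorical answer."""
    return scale[answer.strip().lower()]

# Example: one partner's answers to two questions in a given wave
print(code_answer("Good", SCALE_QUALITY))        # -> 3
print(code_answer("Only partly", SCALE_EXTENT))  # -> 2
```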

Author Contributions

A.G., A.P., B.T.-A. and M.L. conceived and designed the concept; A.G., A.P., B.T.-A. finalized evaluation questionnaire and participated in data collection; A.G. analyzed the data; all authors contributed to writing the manuscript. All authors agreed on the content of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.


How to do a research project for your academic study

Writing a research report is part of most university degrees, so it is essential you know what one is and how to write one. This guide on how to do a research project for your university degree shows you what to do at each stage, taking you from planning to finishing the project.

What is a research project? 

The big question is: what is a research project? A research project for students is an extended essay that presents a question or statement for analysis and evaluation. During a research project, you will present your own ideas and research on a subject alongside analysing existing knowledge. 

How to write a research report 

The next section covers the research project steps necessary to produce a research paper.

Developing a research question or statement 

Research project topics will vary depending on the course you study. The best research project ideas develop from areas you already have an interest in and where you have existing knowledge. 

The area of study needs to be specific, as a specific topic is much easier to cover fully. If your topic is too broad, you risk producing a project that lacks depth. If it is too narrow, however, there may not be enough research to draw on. To avoid either problem, it’s a good idea to create sub-topics and questions to ensure you are able to complete suitable research.

A research project example question would be: How will modern technologies change the way of teaching in the future? 

Finding and evaluating sources 

Secondary research is a large part of your research project as it makes up the literature review section. It is essential to use credible sources as failing to do so may decrease the validity of your research project.

Examples of secondary research include:

  • Peer-reviewed journals
  • Scholarly articles
  • Newspapers 

Great places to find your sources are the university library and Google Scholar. Both will give you many opportunities to find the credible sources you need. However, you need to evaluate whether each source is fit for purpose before including it in your research project, as you do not want to include out-of-date information.

When evaluating sources, you need to ask yourself:

  • Is the information provided by an expert?
  • How well does the source answer the research question?
  • What does the source contribute to its field?
  • Is the source valid? e.g., is it free from bias and is the information up to date?

It is important to ensure that you have a variety of sources in order to avoid bias. A successful research paper will present more than one point of view and the best way to do this is to not rely too heavily on just one author or publication. 

Conducting research 

For a research project, you will need to conduct primary research. This is the original research you will gather to further develop your research project. The most common types of primary research are interviews and surveys, as these allow for many and varied results.

Examples of primary research include: 

  • Interviews and surveys 
  • Focus groups 
  • Experiments 
  • Research diaries 

If you are looking to study in the UK and have an interest in bettering your research skills, The University of Sheffield is a  world top 100 research university  which will provide great research opportunities and resources for your project. 

Research report format  

Now that you understand the basics of how to write a research project, you need to look at what goes into each section. The research project format is just as important as the research itself. Without a clear structure, you will not be able to present your findings concisely.

A research paper is made up of seven sections: introduction, literature review, methodology, findings and results, discussion, conclusion, and references. You need to make sure you are including a list of correctly cited references to avoid accusations of plagiarism. 

Introduction 

The introduction is where you will present your hypothesis and provide context for why you are doing the project. Here you will include relevant background information, present your research aims and explain why the research is important. 

Literature review  

The literature review is where you will analyse and evaluate existing research within your subject area. This section is where your secondary research will be presented. A literature review is an integral part of your research project as it brings validity to your research aims. 

What to include when writing your literature review:

  • A description of the publications
  • A summary of the main points
  • An evaluation on the contribution to the area of study
  • Potential flaws and gaps in the research 

Methodology

The research paper methodology outlines the process of your data collection. This is where you will present your primary research. The aim of the methodology section is to answer two questions: 

  • Why did you select the research methods you used?
  • How do these methods contribute towards your research hypothesis? 

In this section you will not be writing about your findings, but the ways in which you are going to try and achieve them. You need to state whether your methodology will be qualitative, quantitative, or mixed. 

  • Qualitative – first-hand observations such as interviews, focus groups, case studies and questionnaires. The data collected will generally be non-numerical.
  • Quantitative – research that deals in numbers and logic. The data collected will focus on statistics and numerical patterns.
  • Mixed – includes both quantitative and qualitative research.

The methodology section should always be written in the past tense, even if you have already started your data collection. 

Findings and results 

In this section you will present the findings and results of your primary research. Here you will give a concise and factual summary of your findings using tables and graphs where appropriate. 

Discussion 

The discussion section is where you will talk about your findings in detail. Here you need to relate your results to your hypothesis, explaining what you found out and the significance of the research. 

It is a good idea to talk about any areas with disappointing or surprising results and address the limitations within the research project. This will balance your project and steer you away from bias.

Some questions to consider when writing your discussion: 

  • To what extent was the hypothesis supported?
  • Was your research method appropriate?
  • Was there unexpected data that affected your results?
  • To what extent was your research validated by other sources?

Conclusion 

The conclusion is where you will bring your research project to a close. In this section you will not only be restating your research aims and how you achieved them, but also discussing the wider significance of your research project. You will talk about the successes and failures of the project, and how you would approach further study. 

It is essential you do not bring any new ideas into your conclusion; this section is used only to summarise what you have already stated in the project. 

References 

As a research project is your own ideas blended with information and research from existing knowledge, you must include a list of correctly cited references. Creating a list of references will allow the reader to easily evaluate the quality of your secondary research whilst also saving you from potential plagiarism accusations. 

The way in which you cite your sources will vary depending on the university standard.

If you are an international student looking to  study a degree in the UK , The University of Sheffield International College has a range of  pathway programmes  to prepare you for university study. Undertaking a Research Project is one of the core modules for the  Pre-Masters programme  at The University of Sheffield International College.

Frequently Asked Questions 

What is the best topic for research?

It’s a good idea to choose a topic you have existing knowledge of, or one that you are interested in. This will make the research process easier, as you have an idea of where and what to look for in your sources, and more enjoyable, as it’s a topic you want to know more about.

What should a research project include? 

There are seven main sections to a research project:

  • Introduction – the aims of the project and what you hope to achieve
  • Literature review – evaluating and reviewing existing knowledge on the topic
  • Methodology – the methods you will use for your primary research
  • Findings and results – presenting the data from your primary research
  • Discussion – summarising and analysing your research and what you have found out
  • Conclusion – how the project went (successes and failures), areas for future study
  • List of references – correctly cited sources that have been used throughout the project. 

How long is a research project? 

The length of a research project will depend on the level of study and the nature of the subject. There is no single length for research papers; however, the average dissertation-style essay can be anywhere from 4,000 to 15,000+ words.


Research Project Evaluation—Learnings from the PATHWAYS Project Experience

Affiliations

  • 1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland. [email protected].
  • 2 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland. [email protected].
  • 3 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy. [email protected].
  • 4 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland. [email protected].
  • PMID: 29799452
  • PMCID: PMC6025380
  • DOI: 10.3390/ijerph15061071

Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present a project evaluation methodology gathered during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS Project). The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into labor markets in different countries. This paper describes key project evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring.

Methods: A project evaluation tool was used to assess structure and resources; process, management, and communication; and achievements and outcomes. The project used a mixed evaluation approach that included a Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis.

Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses; it showed good coordination and communication between project partners and revealed some key issues, such as the need for a shared glossary covering the areas investigated by the project, problems related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%.

Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

Keywords: SWOT analysis; internal evaluation; project achievements; project management and monitoring; project process evaluation; public health.




