Tuesday, 26 June 2018



Evaluation is the systematic determination of a subject's merit, worth, and significance, using criteria governed by a set of standards. It can assist an organization, program, project, or any other intervention or initiative to assess an aim, a realizable concept or proposal, or any alternative, to help in decision-making; or to ascertain the degree of achievement or value with regard to the aims, objectives, and results of any completed action. The primary purpose of evaluation, in addition to gaining insight into prior or existing initiatives, is to enable reflection and assist in the identification of future change.

Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises, including the arts, criminal justice, foundations, non-profit organizations, government, health care, and other human services. It is long-term and conducted at the end of a period of time.





Definition

Evaluation is the structured interpretation and giving of meaning to the predicted or actual impacts of proposals or results. It looks at the original objectives, at what is predicted or what has been accomplished, and at how it was accomplished. So evaluation can be formative, taking place during the development of a concept or proposal, project, or organization, with the intention of improving the value or effectiveness of the proposal, project, or organization. It can also be summative, drawing lessons from a completed action or project, or from an organization at a given point in time or circumstance.

Evaluation is inherently a theoretically informed approach (whether explicitly so or not), and consequently any definition of a particular evaluation will be tailored to its context - the theory, needs, purpose, and methodology of the evaluation process itself. Having said this, evaluation has been defined as:

  • The application of a systematic, rigorous, and meticulous scientific methodology to assess the design, implementation, improvement, or outcomes of a program. It is a resource-intensive process, frequently requiring considerable resources, such as evaluative expertise, labor, time, and budget
  • "A critical, as objective as possible, judgment of the extent to which a service or its component parts fulfills stated goals" (St Leger and Wordsworth-Bell). The focus of this definition is on attaining objective knowledge, and on scientifically or quantitatively measuring predetermined and external concepts.
  • "A study designed to assist some audience to assess an object's merit and worth" (Stufflebeam). In this definition the focus is on facts as well as on value-laden judgments of a program's results and worth.

Purpose

The main purpose of a program evaluation can be "to determine the quality of a program by formulating a judgment" (Marthe Hurteau, Sylvain Houle, Stéphanie Mongiat, 2009).

An alternative view is that "projects, evaluators, and other stakeholders (including funders) will all have potentially different ideas about how best to evaluate a project, since each may have a different definition of 'merit'. The core of the issue is about defining what is of value." From this perspective, evaluation "is a contested term", as "evaluators" use the term evaluation to describe an assessment or investigation of a program, whilst others simply understand evaluation as being synonymous with applied research.

Two functions can be distinguished according to the purpose of the evaluation. Formative evaluation provides information for improving a product or process. Summative evaluation provides information on short-term effectiveness or long-term impact, in order to decide on the adoption of a product or process.

Not all evaluations serve the same purpose: some evaluations serve a monitoring function rather than focusing solely on measurable program outcomes or evaluation findings, and a full list of evaluation types would be difficult to compile. This is because evaluation is not part of a unified theoretical framework, but draws on a number of disciplines, including management and organizational theory, policy analysis, education, sociology, social anthropology, and social change.

Discussions

Strict adherence to a set of methodological assumptions may make the field of evaluation more acceptable to a mainstream audience, but this adherence will work towards preventing evaluators from developing new strategies for dealing with the myriad problems that programs face.

It is claimed that only a minority of evaluation reports are used by the evaluand (client) (Datta, 2006). One justification for this is that "when evaluation findings are challenged or their utilization has failed, it is because stakeholders and clients found the inferences weak or the warrants unconvincing" (Fournier and Smith, 1993). Some reasons for this situation may be the failure of the evaluator to establish a set of shared aims with the evaluand, or the creation of overly ambitious aims, as well as the failure to compromise and to incorporate the cultures of both the individuals and the program into the aims and the evaluation process.

None of these problems is due to a lack of a definition of evaluation, but rather to evaluators attempting to impose a preconceived idea and definition of evaluation on clients. The main reason for this poor utilization of evaluations is arguably the failure to tailor evaluations to suit the needs of the client, due to a fixed idea (or definition) of what an evaluation is, rather than what the client needs (House, 1980).

The development of a standard methodology for evaluation will require arriving at applicable ways of asking and stating the results of ethical questions such as principal-agent relationships, privacy, stakeholder definition, and limited liability, as well as questions of how resources can be spent wisely.




Standards

Depending on the topic of interest, there are professional groups that review the quality and rigor of evaluation processes.

Evaluating programs and projects, regarding their value and impact within the context in which they are implemented, can be ethically challenging. Evaluators may encounter complex, culturally specific systems that are resistant to external evaluation. Furthermore, the project organization or other stakeholders may be invested in a particular evaluation outcome. Finally, evaluators themselves may face conflict-of-interest (COI) issues, or experience interference or pressure to present findings that support a particular assessment.

Common professional codes of conduct, as determined by the employing organization, typically cover three broad aspects of behavioral standards: inter-collegial relations (such as respect for diversity and privacy), operational issues (such as competence, accuracy of documentation, and appropriate use of resources), and conflicts of interest (nepotism, accepting gifts, and other kinds of favoritism). However, specific guidelines particular to the evaluator's role, which can be utilized in managing unique ethical challenges, are required. The Joint Committee on Standards for Educational Evaluation has developed standards for program, personnel, and student evaluation. The Joint Committee standards are broken into four sections: Utility, Feasibility, Propriety, and Accuracy. Various European institutions have also prepared their own standards, more or less related to those produced by the Joint Committee. They provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for general and public welfare.

The American Evaluation Association has created a set of Guiding Principles for evaluators. The order of these principles does not imply priority among them; priority will vary by situation and evaluator role. The principles are as follows:

  • Systematic Inquiry: evaluators conduct systematic, data-based inquiries about whatever is being evaluated. This requires quality data collection, including a defensible choice of indicators, which lends credibility to the findings. Findings are credible when they are demonstrably evidence-based, reliable, and valid. This also pertains to the choice of methodology employed, such that it is consistent with the aims of the evaluation and provides dependable data. Furthermore, the utility of the findings is critical, such that the information obtained through evaluation is comprehensive and timely, and thus serves to provide maximal benefit and use to stakeholders.
  • Competence: evaluators provide competent performance to stakeholders. This requires that the evaluation team consist of an appropriate combination of competencies, such that varied and appropriate skills are available for the evaluation process, and that evaluators work within their scope of capability.
  • Integrity/Honesty: evaluators ensure the honesty and integrity of the entire evaluation process. A key element of this principle is freedom from bias in evaluation, and this is underscored by three principles: impartiality, independence, and transparency.

Independence is attained by ensuring that independence of judgment is upheld, such that evaluation conclusions are not influenced or pressured by another party, and by the avoidance of conflicts of interest, such that the evaluator does not have a stake in a particular conclusion. Conflicts of interest are at issue particularly where funding of evaluations is provided by bodies with a stake in the conclusions of the evaluation, and this is seen as potentially compromising the independence of the evaluator. Whilst it is acknowledged that evaluators may be familiar with the agencies or projects they are required to evaluate, independence requires that they not have been involved in the planning or implementation of the project. A declaration of interest should be made where any benefit from, or association with, the project exists. Independence of judgment must be maintained against any pressures brought to bear on evaluators, for example, by project funders wishing to modify evaluations such that the project appears more effective than the findings can verify.

Impartiality pertains to findings being a fair and thorough assessment of the strengths and weaknesses of a project or program. This requires obtaining input from all involved stakeholders, and presenting findings without bias and with a transparent, proportionate, and persuasive link between findings and recommendations. Thus evaluators are required to delimit their findings to the evidence. A mechanism for ensuring impartiality is external and internal review. Such review is required of significant evaluations (determined in terms of cost or sensitivity). The review is based on the quality of the work and the degree to which a demonstrable link is provided between the findings and the recommendations.

Transparency requires that stakeholders be aware of the reason for the evaluation, the criteria by which evaluation occurs, and the purposes to which the findings will be applied. Access to the evaluation document should also be facilitated by making the findings easily readable, with clear explanations of the evaluation methodologies, approaches, sources of information, and costs incurred.

  • Respect for People: evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other stakeholders with whom they interact. This is particularly pertinent to those who will be impacted by the evaluation results. Protection of people includes ensuring informed consent from those involved in the evaluation, upholding confidentiality, and ensuring that the identity of those who may provide sensitive information to the program evaluation is protected. Evaluators are ethically required to respect the customs and beliefs of those who are impacted by the evaluation or program activities. Examples of how such respect is demonstrated include respecting local customs such as dress codes, respecting people's privacy, and minimizing demands on others' time. Where stakeholders wish to object to evaluation findings, such a process should be facilitated through the local office of the evaluation organization, and procedures for lodging complaints or queries should be accessible and clear.
  • Responsibilities for General and Public Welfare: evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare. Access to the evaluation document by the wider public should be facilitated so that discussion and feedback are enabled.

Furthermore, international organizations such as the IMF and the World Bank have independent evaluation functions. The various funds, programmes, and agencies of the United Nations have a mix of independent, semi-independent, and self-evaluation functions, which have organized themselves as a system-wide UN Evaluation Group (UNEG), working together to strengthen the function and to establish UN norms and standards for evaluation. There is also an evaluation group within the OECD-DAC, which endeavors to improve development evaluation standards. The independent evaluation units of the major multilateral development banks (MDBs) have also created an evaluation working group to strengthen the use of evaluation for greater MDB effectiveness and accountability, share lessons from MDB evaluations, and promote evaluation harmonization and collaboration.



Perspective

The word "evaluation" has various connotations for different people, raising issues related to this process which include; what kind of evaluation should be done; why should there be an evaluation process and how is the evaluation integrated into the program, for the purpose of gaining greater knowledge and awareness?

There are also various factors inherent in the evaluation process, for example: critically examining influences within a program, which involves collecting and analyzing relevant information about the program. Michael Quinn Patton advanced the idea that the evaluation procedure should be directed towards:

  • Activities
  • Characteristics
  • Outcomes
  • Making judgments about a program
  • Improving its effectiveness
  • Informing programming decisions

Another perspective on evaluation, established by Thomson and Hoffman in 2003, holds that situations may be encountered in which the process cannot be considered advisable; for instance, where a program is unpredictable or unsound. This would include a lack of consistent routine, or the parties concerned being unable to reach agreement regarding the purpose of the program. In addition, an influencer or manager may refuse to include important and relevant issues in the evaluation.



Approach

There are different ways of conceptualizing, designing, and conducting evaluation efforts. Many of the evaluation approaches in use today make truly unique contributions to solving important problems, while others refine existing approaches in some way.

Classification of approaches

Two classifications of evaluation approaches, by House and by Stufflebeam and Webster, can be combined into a manageable set of approaches in terms of their unique and important underlying principles.

House considers all major evaluation approaches to be based on a common ideology entitled liberal democracy. Essential principles of this ideology include freedom of choice, the uniqueness of the individual, and empirical inquiry grounded in objectivity. He also argues that they are all based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which "the good" is determined by what maximizes a single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of "the good" is assumed and such interpretations need not be explicitly stated or justified.

These ethical positions have corresponding epistemologies - philosophies for acquiring knowledge. The objectivist epistemology is associated with utilitarian ethics; in general, it is used to acquire knowledge that can be externally verified (intersubjective agreement) through publicly exposed methods and data. The subjectivist epistemology is associated with intuitionist/pluralist ethics, and is used to acquire new knowledge based on existing personal knowledge and experiences that are (explicit) or are not (tacit) available for public inspection. House then divides each epistemological approach into two main political perspectives. Approaches can take an elite perspective, focusing on the interests of managers and professionals; or they can take a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups, according to their orientation toward the role of values and ethical considerations. The political orientation promotes a positive or negative view of an object regardless of what its value actually is or might be - they term this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object - they term this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of an object - they term this true evaluation.

When the above concepts are considered simultaneously, fifteen evaluation approaches can be identified in terms of epistemology, major perspective (from House), and orientation. Two pseudo-evaluation approaches, politically controlled and public relations studies, are represented. They are based on an objectivist epistemology from an elite perspective. Six quasi-evaluation approaches use an objectivist epistemology. Five of them - experimental research, management information systems, testing programs, objectives-based studies, and content analysis - take an elite perspective. Accountability takes a mass perspective. Seven true evaluation approaches are also included. Two approaches, decision-oriented and policy studies, are based on an objectivist epistemology from an elite perspective. Consumer-oriented studies are based on an objectivist epistemology from a mass perspective. Two approaches - accreditation/certification and connoisseur studies - are based on a subjectivist epistemology from an elite perspective. Finally, adversary and client-centered studies are based on a subjectivist epistemology from a mass perspective.

Summary of approaches

The following table is used to summarize each approach in terms of four attributes - organizer, purpose, strengths, and weaknesses. The organizer represents the main considerations or cues that practitioners use to organize a study. The purpose represents the desired outcome for a study at a very general level. Strengths and weaknesses represent other attributes that should be considered when deciding whether to use the approach for a particular study. The following narrative highlights differences between the approaches that are grouped together.

Politically controlled and public relations studies are based on an objectivist epistemology from an elite perspective. Although both of these approaches seek to misrepresent value interpretations about an object, they function differently from each other. Information obtained through politically controlled studies is released or withheld to meet the special interests of the holder, whereas public relations information creates a positive image of an object regardless of the actual situation. Despite the occurrence of both of these studies in actual practice, neither of these approaches is acceptable evaluation practice.

Evaluation methodologies are diverse. Methods may be qualitative or quantitative, and include case studies, survey research, statistical analysis, modeling, and more.



See also

  • Monitoring and Evaluation is a process used by governments, international organizations, and NGOs to assess ongoing or past activities
  • Assessment is the process of gathering and analyzing specific information as part of an evaluation
  • Competency evaluation is a means for teachers to determine their students' abilities in ways other than standardized tests
  • Educational evaluation is evaluation conducted specifically in an educational setting
  • Strong evaluation, opposed by Gilles Deleuze to value judgments
  • Performance evaluation is a term from the field of language testing. It stands in contrast to competence evaluation
  • Program evaluation is essentially a set of philosophies and techniques for determining whether a program 'works'
  • Donald Kirkpatrick's Evaluation Model for training evaluation



References




External links

  • Link to Evaluation and Evaluation Resources - A list of links to resources on some topics
  • Glossary
  • Evaluation portal with information about evaluation, dissemination, projects, communities, instruction, books, and more
  • Free Resources for Methods in Social Evaluation and Research
  • Introduction and discussion of Monitoring & Evaluation of development programs & projects

Source of the article: Wikipedia
