Conclusions drawn from analyzing survey data are only acceptable to the degree to which they are determined valid. Validity concerns whether research measures what it intended to measure and approximates the truthfulness of its results. Researchers often apply their own definitions of what counts as valid. In quantitative research, testing for validity and reliability is a given. However, some qualitative researchers have gone so far as to suggest that validity does not apply to their research, even as they acknowledge the need for some qualifying checks or measures in their work. This is a mistake. To disregard validity is to put the trustworthiness of your work in question and to undermine others' confidence in its results. Even when qualitative measures are used in research, they need to be examined for reliability and validity in order to sustain the trustworthiness of the results. Validity and reliability make the difference between “good” and “bad” research reports, and quality research depends on a commitment to testing and increasing both the validity and the reliability of your results.
Any research worth its salt is concerned with whether what is being measured is what was intended to be measured, and with the ways in which observations are influenced by the circumstances in which they are made. The basis on which our conclusions are drawn plays an important role in addressing the broader substantive issues of any given study. For this reason, we are going to look at the various validity types that have been formulated as part of legitimate research methodology.
Face Validity
This is the least scientific method of validity, as it is not quantified using statistical methods, and it is not validity in the technical sense of the term. It is concerned with whether it seems like we measure what we claim to measure. Here we look at how valid a measure appears on the surface and make subjective judgments based on that. For example, a survey is given that appears valid to the respondent, with questions selected because they look valid to the administrator; the administrator then asks a group of random people, untrained observers, whether the questions appear valid to them. In research it is never sufficient to rely on face judgments alone, and more quantifiable methods of validity are necessary in order to draw acceptable conclusions. Because there are many instruments of measurement to consider, face validity is useful when you need to quickly screen one approach against another, but it should never be trusted on its own merits.
Content Validity
This is also a subjective measure, but unlike face validity we ask whether the content of a measure covers the full domain of the content. If researchers wanted to measure introversion, they would first have to decide what constitutes the relevant domain of content for that trait. This is considered a subjective form of measurement because it still relies on people's perceptions to measure constructs that would otherwise be difficult to assess. Where it distinguishes itself is through its use of experts in the field or individuals belonging to a target population. It can be made more objective through the use of rigorous statistical tests. For example, a content validity study could inform researchers how well the items used in a survey represent their content domain, how clear they are, and the extent to which they maintain the theoretical factor structure assessed by factor analysis. A minimal sketch of one such quantitative check appears below.
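One common way to move expert judgments of content coverage toward something quantifiable is Lawshe's content validity ratio. The sketch below is a minimal illustration only: the panel size, item names, and "essential" vote counts are invented, and factor analysis would be the heavier-duty companion check mentioned above.

```python
# Hypothetical sketch of Lawshe's content validity ratio (CVR).
# The panel size and the "essential" vote counts below are invented for illustration.

def content_validity_ratio(essential_votes, total_experts):
    """CVR = (n_e - N/2) / (N/2); ranges from -1 (reject) to +1 (full agreement)."""
    return (essential_votes - total_experts / 2) / (total_experts / 2)

# Ten hypothetical experts rated each draft survey item as "essential" or not.
panel_size = 10
essential_counts = {"item_1": 9, "item_2": 6, "item_3": 3}

for item, votes in essential_counts.items():
    print(f"{item}: CVR = {content_validity_ratio(votes, panel_size):+.2f}")
```

Items with a CVR near +1 have strong expert agreement that they belong to the content domain; items near or below zero are candidates for revision or removal.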
Construct Validity
A construct represents a collection of behaviors that are associated in a meaningful way to create an image or idea invented for a research purpose. Depression is a construct representing a personality trait that manifests itself in behaviors such as oversleeping, loss of appetite, difficulty concentrating, and so on. The existence of a construct is inferred by observing the collection of related indicators, and any one indicator may be associated with several constructs: a person with difficulty concentrating may have A.D.D. but not depression. Construct validity is the degree to which inferences can be made from the operationalizations (the connections between concepts and observations) in your study to the constructs on which those operationalizations are based. To establish construct validity you must first provide evidence that your data support the theoretical structure. You must also show that you control the operationalization of the construct; in other words, show that your theory has some correspondence with reality.
- Convergent Validity - the degree to which an operation is similar to other operations it should theoretically be similar to.
- Discriminative Validity (also called discriminant validity) - whether a scale adequately differentiates, or fails to differentiate, between groups that should or should not differ based on theoretical reasons or previous research; a bare-bones correlation sketch follows this list.
- Nomological Network - a representation of the constructs of interest in a study, their observable manifestations, and the interrelationships among and between these. According to Cronbach and Meehl, a nomological network has to be developed for a measure in order for it to have construct validity.
- Multitrait-Multimethod Matrix - Campbell and Fiske's six major considerations when examining construct validity. These include evaluations of convergent validity and discriminative validity; the others are the trait-method unit, multi-method/trait comparisons, truly different methodology, and trait characteristics.
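To give the convergent/discriminant contrast a concrete shape, here is a bare-bones Python sketch with invented scores. It is only a stand-in for a full multitrait-multimethod analysis: the two depression measures stand in for operations that should converge, and the extraversion measure for one that should diverge.

```python
# Illustrative sketch with invented scores: convergent vs. discriminant validity
# checked with plain Pearson correlations between three hypothetical measures.

from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

depression_scale_a = [12, 18, 7, 22, 15, 9, 20, 11]    # new depression measure
depression_scale_b = [14, 17, 8, 21, 13, 10, 19, 12]   # established depression measure
extraversion_scale = [14, 20, 22, 20, 14, 16, 18, 20]  # theoretically unrelated trait

# Convergent: two measures of the same construct should correlate strongly.
print("convergent   r =", round(pearson(depression_scale_a, depression_scale_b), 2))
# Discriminant: measures of unrelated constructs should correlate weakly.
print("discriminant r =", round(pearson(depression_scale_a, extraversion_scale), 2))
```

With these made-up numbers the convergent correlation comes out high and the discriminant correlation near zero, which is the pattern construct validity predicts; real studies would examine the full matrix of traits and methods.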
Internal Validity
This refers to the extent to which the independent variable can accurately be stated to produce the observed effect. If the effect on the dependent variable is due only to the independent variable(s), then internal validity is achieved. Think of this as the degree to which a result can be attributed to your manipulation rather than to something else.
Statistical Conclusion Validity
A determination of whether a relationship or covariation exists between cause and effect variables. Establishing it requires adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures. This is the degree to which a conclusion is credible or believable.
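As a rough illustration, the sketch below (with made-up data and SciPy assumed to be available) tests whether an apparent covariation between two variables clears a basic significance check. It addresses only the "appropriate statistical test" piece, not sampling or measurement reliability.

```python
# A minimal sketch of one piece of statistical conclusion validity: testing whether
# an observed covariation is credible. The data are invented; scipy is assumed.

from scipy import stats

study_hours = [2, 5, 1, 8, 4, 7, 3, 6, 9, 5]            # hypothetical cause variable
exam_scores = [55, 70, 50, 88, 66, 82, 60, 75, 91, 72]  # hypothetical effect variable

r, p_value = stats.pearsonr(study_hours, exam_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# A small p-value suggests the covariation is unlikely under chance alone. This is
# only part of the picture: adequate sampling and reliable measurement (not shown
# here) are equally necessary for statistical conclusion validity.
```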
External Validity
This refers to the extent to which the results of a study can be generalized beyond the sample; that is, whether you can apply your findings to other people and settings. Think of this as the degree to which a result can be generalized.
Criterion-Related Validity
Can alternately be referred to as Instrumental Validity. The accuracy of a measure is demonstrated by comparing it with a measure that has already been shown to be valid; in other words, by its correlations with other measures of known validity. For this to work you must know that the criterion has been measured well, and be aware that appropriate criteria do not always exist. What you are doing is checking the performance of your operationalization against a criterion. The criterion you use as a standard of judgment accounts for the different approaches you would use:
- Predictive Validity - the operationalization’s ability to predict what it theoretically should be able to predict; the extent to which a measure predicts expected outcomes.
- Concurrent Validity - the operationalization’s ability to distinguish between groups it theoretically should be able to distinguish between. This is where a test correlates well with a measure that has already been validated. A minimal correlation sketch covering both approaches follows this list.
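The sketch below, using invented scores and SciPy, shows the basic arithmetic behind both criterion-related approaches: correlate the new operationalization with a criterion of known validity, then interpret the coefficient in light of when the criterion was measured.

```python
# Illustrative sketch (invented data): criterion-related validity estimated as the
# correlation between a new measure and a criterion already known to be valid.

from scipy import stats

new_screening_score = [3, 9, 5, 8, 2, 7, 6, 4, 10, 1]  # hypothetical new instrument
validated_criterion = [4, 8, 5, 9, 3, 7, 7, 4, 9, 2]   # established, already-validated measure

r, p = stats.pearsonr(new_screening_score, validated_criterion)
print(f"criterion-related validity coefficient: r = {r:.2f} (p = {p:.3f})")

# Concurrent validity: the criterion is measured at the same time as the new instrument.
# Predictive validity: the criterion (e.g., a later outcome) is measured after a delay.
# The arithmetic is identical; only the timing of the criterion measurement differs.
```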