Assalam-o-Alaikum

I'm very pleased to welcome you to the Education Forum of Pakistan. I hope your visit will be useful and that you will find the assistance you need.
Regards,
Sadaf Awan

Wisdom Thought

For the one who likes to dream, the night is too short; for the one who likes to fulfill dreams, the day is too short.

Friday, October 21, 2011

Reliability & Validity

We often think of reliability and validity as separate ideas but, in fact, they're related to each other. Here, I want to show you two ways you can think about their relationship.
One of my favorite metaphors for the relationship between reliability and validity is that of the target. Think of the center of the target as the concept that you are trying to measure. Imagine that for each person you are measuring, you are taking a shot at the target. If you measure the concept perfectly for a person, you hit the center of the target. If you don't, you miss the center. The more you are off for that person, the further you are from the center.



The figure above shows four possible situations. In the first one, you are hitting the target consistently, but you are missing the center of the target. That is, you are consistently and systematically measuring the wrong value for all respondents. This measure is reliable, but not valid (that is, it's consistent but wrong). The second shows hits that are randomly spread across the target. You seldom hit the center of the target but, on average, you are getting the right answer for the group (but not very well for individuals). In this case, you get a valid group estimate, but you are inconsistent. Here, you can clearly see that reliability is directly related to the variability of your measure. The third scenario shows a case where your hits are spread across the target and you are consistently missing the center. Your measure in this case is neither reliable nor valid. Finally, we see the "Robin Hood" scenario -- you consistently hit the center of the target. Your measure is both reliable and valid (I bet you never thought of Robin Hood in those terms before).
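The four scenarios can be made concrete with a small simulation (a sketch with invented numbers, assuming normally distributed measurement error): systematic bias corresponds to consistently missing the center (invalidity), while the spread of the shots corresponds to unreliability.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0  # the center of the target

def simulate(bias, noise_sd, n=1000):
    """Take n 'shots' with a systematic bias and random error,
    then report how far off-center and how scattered they are."""
    shots = [TRUE_VALUE + bias + random.gauss(0, noise_sd) for _ in range(n)]
    center = sum(shots) / n
    spread = (sum((s - center) ** 2 for s in shots) / n) ** 0.5
    return center - TRUE_VALUE, spread

scenarios = {
    "reliable, not valid":          (10.0, 1.0),   # consistent but off-center
    "valid on average, unreliable": (0.0, 10.0),   # right on average, scattered
    "neither reliable nor valid":   (10.0, 10.0),
    "reliable and valid":           (0.0, 1.0),    # the "Robin Hood" case
}

for name, (bias, sd) in scenarios.items():
    off_center, spread = simulate(bias, sd)
    print(f"{name:30s} bias = {off_center:6.2f}, spread = {spread:6.2f}")
```

Running this shows a large bias but small spread for the first scenario, a near-zero bias but large spread for the second, and so on.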
Another way we can think about the relationship between reliability and validity is shown in the figure below. Here, we set up a 2x2 table. The columns of the table indicate whether you are trying to measure the same or different concepts. The rows show whether you are using the same or different methods of measurement. Imagine that we have two concepts we would like to measure, student verbal and math ability. Furthermore, imagine that we can measure each of these in two ways. First, we can use a written, paper-and-pencil exam (very much like the SAT or GRE exams). Second, we can ask the student's classroom teacher to give us a rating of the student's ability based on their own classroom observation.

 
The first cell on the upper left shows the comparison of the verbal written test score with the verbal written test score. But how can we compare the same measure with itself? We could do this by estimating the reliability of the written test through a test-retest correlation, parallel forms, or an internal consistency measure. What we are estimating in this cell is the reliability of the measure.
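Both estimates boil down to simple statistics. As a sketch (with invented scores, not data from any real exam), test-retest reliability is just the Pearson correlation between two administrations of the same test, and internal consistency can be estimated with Cronbach's alpha:

```python
from statistics import mean, pvariance

def pearson(x, y):
    """Pearson correlation between two aligned score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """items: one list of scores per item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variance = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# hypothetical scores: five students take the same test twice
test1 = [12, 15, 11, 18, 14]
test2 = [13, 14, 12, 17, 15]
print("test-retest r  =", round(pearson(test1, test2), 3))

# hypothetical three-item scale for the same five students
items = [[3, 4, 2, 5, 4],
         [2, 4, 3, 5, 3],
         [3, 5, 2, 4, 4]]
print("Cronbach alpha =", round(cronbach_alpha(items), 3))
```

Values close to 1 on either statistic indicate a highly reliable measure; how high is "high enough" depends on the stakes of the decision being made.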
The cell on the lower left shows a comparison of the verbal written measure with the verbal teacher observation rating. Because we are trying to measure the same concept, we are looking at convergent validity.
The cell on the upper right shows the comparison of the verbal written exam with the math written exam. Here, we are comparing two different concepts (verbal versus math) and so we would expect the relationship to be lower than a comparison of the same concept with itself (e.g., verbal versus verbal or math versus math). Thus, we are trying to discriminate between two concepts and we would consider this discriminant validity.
Finally, we have the cell on the lower right. Here, we are comparing the verbal written exam with the math teacher observation rating. Like the cell on the upper right, we are also trying to compare two different concepts (verbal versus math) and so this is a discriminant validity estimate. But here, we are also trying to compare two different methods of measurement (written exam versus teacher observation rating). So, we'll call this very discriminant to indicate that we would expect the relationship in this cell to be even lower than in the one above it.
The four cells incorporate the different values that we examine in the multitrait-multimethod approach to estimating construct validity.
When we look at reliability and validity in this way, we see that, rather than being distinct, they actually form a continuum. On one end is the situation where the concepts and methods of measurement are the same (reliability) and on the other is the situation where concepts and methods of measurement are different (very discriminant validity).
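The continuum can be sketched numerically. Assuming small invented score lists for the two concepts and two methods (none of these numbers come from a real study), the four cells are just four correlations, and we expect them to fall in decreasing order from reliability down to very discriminant validity:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical scores for six students
verbal_written_1 = [60, 75, 55, 80, 70, 65]   # verbal written exam
verbal_written_2 = [62, 73, 57, 81, 69, 66]   # same exam, retest
verbal_teacher   = [3, 4, 3, 5, 3, 4]         # teacher rating of verbal ability
math_written     = [65, 70, 60, 70, 85, 55]   # math written exam
math_teacher     = [3, 4, 3, 3, 5, 2]         # teacher rating of math ability

cells = [
    ("reliability (same concept, same method)",           verbal_written_2),
    ("convergent validity (same concept, diff method)",   verbal_teacher),
    ("discriminant validity (diff concept, same method)", math_written),
    ("very discriminant (diff concept, diff method)",     math_teacher),
]
for label, other in cells:
    print(f"{label:52s} r = {pearson(verbal_written_1, other):.3f}")
```

With these invented scores the correlations decrease cell by cell, which is exactly the pattern the multitrait-multimethod approach looks for as evidence of construct validity.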

Validity in Research Design

by Tariq on January 2, 2009

Conclusions drawn from analyzing survey data are only acceptable to the degree to which they are determined valid. Validity is used to determine whether research measures what it intended to measure and to approximate the truthfulness of the results. Researchers often use their own definition when it comes to what is considered valid. In quantitative research, testing for validity and reliability is a given. However, some qualitative researchers have gone so far as to suggest that validity does not apply to their research, even as they acknowledge the need for some qualifying checks or measures in their work. This is wrong. To disregard validity is to put the trustworthiness of your work in question and to call into question others' confidence in its results. Even when qualitative measures are used in research, they need to be examined using measures of reliability and validity in order to sustain the trustworthiness of the results. Validity and reliability make the difference between “good” and “bad” research reports. Quality research depends on a commitment to testing and increasing the validity as well as the reliability of your research results.

Any research worth its weight is concerned with whether what is being measured is what is intended to be measured, and considers the ways in which observations are influenced by the circumstances in which they are made. The basis on which our conclusions are drawn plays an important role in addressing the broader substantive issues of any given study. For this reason we are going to look at various validity types that have been formulated as part of legitimate research methodology.

Face Validity
This is the least scientific method of validity, as it is not quantified using statistical methods. It is not validity in the technical sense of the term. It is concerned with whether it seems like we measure what we claim to measure. Here we look at how valid a measure appears on the surface and make subjective judgments based on that. For example, a survey might be considered to have face validity if its questions look valid to the respondents and to the administrator, or if a group of randomly chosen, untrained observers agrees that the questions appear valid. In research it is never sufficient to rely on face judgments alone, and more quantifiable methods of validity are necessary in order to draw acceptable conclusions. There are many instruments of measurement to consider, so face validity is useful in cases where you need to distinguish one approach from another. Face validity should never be trusted on its own merits.

Content Validity
This is also a subjective measure, but unlike face validity we ask whether the content of a measure covers the full domain of the content. If researchers wanted to measure introversion, they would have to first decide what constitutes a relevant domain of content for that trait. This is considered a subjective form of measurement because it still relies on people's perception for measuring constructs that would otherwise be difficult to measure. Where it distinguishes itself is through its use of experts in the field or individuals belonging to a target population. Such a study can be made more objective through the use of rigorous statistical tests. For example, you could have a content validity study that informs researchers how well the items used in a survey represent their content domain, how clear they are, and the extent to which they maintain the theoretical factor structure assessed by factor analysis.

Construct Validity
A construct represents a collection of behaviors that are associated in a meaningful way to create an image or an idea invented for a research purpose. Depression is a construct that represents a personality trait which manifests itself in behaviors such as oversleeping, loss of appetite, difficulty concentrating, etc. The existence of a construct is manifested by observing the collection of related indicators. Any one sign may be associated with several constructs: a person with difficulty concentrating may have A.D.D. but not depression. Construct validity is the degree to which inferences can be made from the operationalizations (the connections between concepts and observations) in your study to the constructs on which those operationalizations are based. To establish construct validity you must first provide evidence that your data support the theoretical structure. You must also show that you control the operationalization of the construct; in other words, show that your theory has some correspondence with reality.
  • Convergent Validity - the degree to which an operation is similar to other operations it should theoretically be similar to.
  • Discriminative Validity - the degree to which a scale differentiates, or fails to differentiate, between groups that should or should not differ based on theoretical reasons or previous research.
  • Nomological Network - a representation of the constructs of interest in a study, their observable manifestations, and the interrelationships among and between these. According to Cronbach and Meehl, a nomological network has to be developed for a measure in order for it to have construct validity.
  • Multitrait-Multimethod Matrix - six major considerations when examining construct validity, according to Campbell and Fiske. These include evaluations of convergent validity and discriminative validity. The others are trait-method unit, multi-method/trait, truly different methodology, and trait characteristics.
Internal Validity
This refers to the extent to which the independent variable can accurately be stated to produce the observed effect. If the effect on the dependent variable is due only to the independent variable(s), then internal validity is achieved. Think of this as the degree to which a result can be manipulated.

Statistical Conclusion Validity
A determination of whether a relationship or co-variation exists between cause and effect variables. This requires ensuring adequate sampling procedures, appropriate statistical tests, and reliable measurement procedures. Think of this as the degree to which a conclusion is credible or believable.
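One simple way to check for co-variation without assuming a particular distribution is a permutation test (a sketch with invented scores): shuffle one variable many times and count how often a correlation as large as the observed one appears by chance.

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def permutation_p(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the correlation of x and y."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    ys = list(y)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson(x, ys)) >= observed:
            extreme += 1
    return extreme / n_perm

# hypothetical cause and effect measurements for ten cases
cause  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
effect = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]

print("r =", round(pearson(cause, effect), 3))
print("p =", permutation_p(cause, effect))
```

A small p-value says the observed co-variation is unlikely to be chance, which is one leg of statistical conclusion validity; the others (adequate sampling, reliable measurement) no test can supply.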

External Validity
This refers to the extent to which the results of a study can be generalized beyond the sample, which is to say that you can apply your findings to other people and settings. Think of this as the degree to which a result can be generalized.

Criterion-Related Validity
This can alternately be referred to as instrumental validity. The accuracy of a measure is demonstrated by comparing it with a measure that has already been demonstrated to be valid; in other words, by correlations with other measures of known validity. For this to work you must know that the criterion has been measured well, and be aware that appropriate criteria do not always exist. What you are doing is checking the performance of your operationalization against a criterion. The criterion you use as a standard of judgment determines which approach you would use:
  • Predictive Validity - the operationalization's ability to predict what it is theoretically able to predict; the extent to which a measure predicts expected outcomes.
  • Concurrent Validity - the operationalization's ability to distinguish between groups that it theoretically should be able to distinguish; this is where a test correlates well with a measure that has previously been validated.
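As a sketch with invented numbers (a hypothetical entrance test scored against later first-year GPA as the criterion), criterion-related validity reduces to a correlation between your measure and the already-validated criterion:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical data: entrance-test scores and later first-year GPA (the criterion)
test_scores    = [520, 640, 580, 700, 610, 550, 660, 590]
first_year_gpa = [2.6, 3.4, 3.0, 3.8, 3.1, 2.8, 3.5, 2.9]

r = pearson(test_scores, first_year_gpa)
print(f"predictive validity coefficient r = {r:.3f}")
```

The higher the correlation, the better the test predicts the criterion; with these invented scores the test would be an excellent predictor, which real admission tests rarely are.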
When we look at validity in survey data, we are asking whether the data represent what we think they should represent. We depend on the respondents' mindset and attitude to give us valid data; in other words, we depend on them to answer all questions honestly and conscientiously. We also depend on whether they are able to answer the questions that we ask. When questions are asked that the respondent cannot comprehend, the data do not tell us what we think they do.

Monday, October 17, 2011

Origin and development of Educational Sociology


The development of Educational sociology is divided into three significant stages.

1-     the first stage, based on the work of John Dewey (1859-1952) and Emile Durkheim (1858-1917)
2-     the second stage, after the First World War
3-     the third stage, after the Second World War

First stage
This era is essentially about the work of Dewey and Durkheim. John Dewey was the first to appreciate the essential relationship between school and society. He had observed that the old simple life and the village community were inevitably breaking down and that the social structure generally was changing. He felt that there were tensions developing between village and town life of which both pupils and adults were quite unconscious.
Therefore, a social spirit of co-operation and mutual aid should be elicited. In order to achieve this aim, Dewey described the school as a community in miniature, a micro-society, which both reflected the larger society outside and also sought, in the long run, to improve it.

Emile Durkheim saw education as a social thing and argued:

“It is society as a whole and each particular social milieu that determine the ideal that education realizes. Societies can survive only if there exists among its members a sufficient degree of homogeneity; education perpetuates and reinforces this homogeneity by fixing in the child, from the beginning, the essential similarities that collective life demands. But on the other hand, without certain diversity all co-operation would be impossible; education assures the persistence of this necessary diversity by being itself diversified and specialized.”

He further argued that the profound transformation which contemporary societies were then undergoing necessitated corresponding changes in national education. He concluded his lecture at the Sorbonne with these words: ‘I do not believe that I am following a mere prejudice or yielding to an immoderate love for a science which I have cultivated all my life, in saying that never was a sociological approach more necessary for the educator.’

Publications by other scholars

  • W.T. Harris, Educational Review (1839)
  • Lester Ward, Dynamic Sociology (1883)
  • C.A. Scott, Social Education (1907)
  • D. Shea, Social Development and Education (1909)
  • King, Social Dimensions of Education (1912)
  • Betts, Social Principles of Education (1912)


Second stage
After the First World War, not only economic problems but also multidimensional sociological problems arose. To eliminate the effects of the war and enhance collaboration among people, it was understood that the relationship of education with society had to be strengthened.
A number of publications discussed the relationship of education and society. Some of them are mentioned below:

Publications by other scholars

  • Kirkpatrick, Basis of Sociology (1916)
  • W.R. Smith, An Introduction to Educational Sociology (1917)
  • C.L. Robbins, The School as a Social Institution (1918)
  • W.E. Chancellor, Educational Sociology (1919)
  • F.R. Clow, Educational Applications of Sociology (1920)
  • Snedden, Educational Sociology (1922)
  • E.R. Groves, Social Problems and Education (1924)

In 1937 another scholar, Fred Clarke, director of the University of London Institute of Education, influenced thinking about the role of education in society. He believed that there should be planning in education, and in his book Education and Social Change, published in 1940, Clarke stated that ‘we propose to accept unreservedly what may be called the sociological standpoint and to exhibit as well as we can its concrete application to the field of English education.’



Third stage
The third era runs from after the Second World War to the present. In 1940 Karl Mannheim, a lecturer in sociology at the London School of Economics, discussed education as one of the dynamic elements in sociology; it was a social technique in itself and a means of social control. In Man and Society he stated:

“Sociologists do not regard education solely as a means of realizing abstract ideals of culture, such as humanism or technical specialization, but as part of the process of influencing men and women. Education can only be understood when we know for what society and for what social position the pupils are being educated.”

After the death of Mannheim, Clarke worked further and published his Freedom in the Educative Society, which has been likened to the Platonic educative society.
In 1950 W.A. Stewart wrote an important article for the Sociological Review. This article still has a lot to offer in a consideration of the content of, and the difficulties involved in, a course for training teachers. Prof. Stewart spoke of the ‘traditionally cautious scrutiny’ which the study of sociology had received in Britain.