Psychological Testing: Standardization and Reliability
Standardization
Standardization: the process of administering a test to a representative sample of respondents in order to establish norms (standards) for interpreting scores.
- A standardized test has uniform, well-defined procedures for administration and scoring.
- Testing conditions must be controlled, and an individual's score is interpreted against normative data from the standardization sample (see the sketch below).
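A minimal sketch of how a score can be interpreted against normative data, assuming a hypothetical norm sample and raw score (Python and NumPy are used only for illustration):

```python
import numpy as np

# Hypothetical scores from a standardization (norm) sample.
norm_sample = np.array([38, 42, 45, 47, 50, 51, 53, 55, 58, 61])
raw_score = 55  # one individual's raw score

mean, sd = norm_sample.mean(), norm_sample.std(ddof=1)
z = (raw_score - mean) / sd                          # standard score relative to the norms
percentile = (norm_sample < raw_score).mean() * 100  # percent of the norm sample scoring below

print(f"z = {z:.2f}, percentile rank = {percentile:.0f}")
```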
Reliability
Reliability determines the usefulness of the instrument as a measuring tool.
It is the consistency of scores obtained by the same person when retested with the same test on different occasions, with different sets of equivalent items, or under other variable examining conditions.
- Reliability is expressed as a coefficient (a correlation): the higher the coefficient, the more reliable the test. When the test is administered to other groups, the scores obtained should be approximately the same.
- Reliability is the consistency of the results and is vital for predicting psychological traits.
- A test that is not reliable should never be used.
Types of Reliability
- Temporal (test-retest): This type of reliability reflects the consistency of test results over time. If the subjects obtain essentially the same results each time the instrument is administered, the instrument is reliable; in that case the correlation between scores from one administration to the next is positive and close to 1.
- The closer the coefficient is to 1, the stronger the reliability.
- This reliability is measured by a reliability coefficient obtained through the test-retest procedure.
- The test is administered a second time to the same group. The time between administrations varies according to the purposes of research.
- The more time that passes, the lower the expected consistency.
- Once the scores from both administrations are obtained, they are correlated using the Pearson correlation coefficient between the first and second administration; the interval between the two administrations should not exceed six months (see the sketch below).
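A minimal sketch of a test-retest (temporal stability) coefficient, assuming hypothetical scores for the same eight people tested twice (Python with SciPy is used only for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for the same eight people on two administrations.
first_admin  = np.array([12, 15, 11, 18, 14, 16, 13, 17])
second_admin = np.array([13, 14, 12, 17, 15, 16, 12, 18])

# The Pearson correlation between the two administrations is the
# test-retest reliability coefficient; values near 1 mean high stability.
r, _ = pearsonr(first_admin, second_admin)
print(f"test-retest reliability r = {r:.2f}")
```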
- Internal Consistency: Refers to the consistency among the parts of the test and across the examination as a whole. It is a measure of the homogeneity of the instrument, that is, of the trait the instrument is measuring. Here consistency over time is not what matters; what is considered is the consistency within the test itself.
- There are two ways to estimate this type of reliability (see the sketch after this list):
- Split halves: The test is divided into two equal halves from a single administration, one half containing the even-numbered items and the other the odd-numbered items. This yields a reliability coefficient from a single administration of the test or instrument. In this method, the practice and time effects present in test-retest are absent.
- Item consistency: Internal consistency reliability is estimated from the relationship between each question or item and the test as a whole. It takes into account variation in results that is due to the content of the test itself; that variation is attributable to the heterogeneity of the type of behavior the instrument seeks to assess.
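A minimal sketch of both approaches to internal consistency, assuming a hypothetical item-response matrix (rows are respondents, columns are items scored 0/1); the split-half estimate uses the Spearman-Brown correction, and the item-level estimate shown is Cronbach's alpha, named here as one common way to implement item consistency:

```python
import numpy as np

# Hypothetical responses: 6 respondents (rows) x 6 items (columns), scored 0/1.
scores = np.array([
    [1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 1],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
], dtype=float)

# 1) Split halves: correlate odd-item and even-item totals from one administration,
#    then apply the Spearman-Brown correction to estimate full-length reliability.
odd_total  = scores[:, 0::2].sum(axis=1)
even_total = scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_total, even_total)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# 2) Item consistency: Cronbach's alpha compares the sum of item variances
#    with the variance of the total score.
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1).sum()
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_variances / total_variance)

print(f"split-half (Spearman-Brown) = {split_half:.2f}, Cronbach's alpha = {alpha:.2f}")
```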
- Inter-judge: The degree of agreement or consistency between two or more reviewers, who are known as judges or raters (see the sketch below).
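A minimal sketch of inter-rater agreement for two judges; the ratings are hypothetical, and Cohen's kappa is named here as one common agreement index (the notes do not specify a particular one):

```python
from collections import Counter

# Hypothetical ratings of the same ten responses by two judges.
rater_a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

# Chance agreement: probability both judges pick the same category independently.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```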
- Alternate or equivalent forms: At least two equivalent forms of the test are constructed, and the aim is to estimate the consistency of content across both forms of the instrument. This moderates or reduces practice effects, but does not eliminate them completely, because the time lag between the administrations of the two forms must still be taken into account.
- It is preferable to administer both forms in the same session.