Instrument reliability is defined as the extent to which an instrument consistently measures what it is supposed to measure. A child's thermometer is a very reliable measurement tool, while a personality test typically has less reliability.
There are four types of reliability.
1. Test-Retest Reliability is the correlation between two successive administrations of the same test. For example, you can give your test to your pilot sample in the morning and then again in the afternoon. If the test is reliable, the two sets of scores should be highly correlated, since the pilot sample should answer the same way if nothing has changed.
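The test-retest estimate is simply a Pearson correlation between the two administrations. A minimal sketch in Python, using hypothetical morning and afternoon scores for a six-person pilot sample:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for a six-person pilot sample, tested twice in one day
morning   = [12, 15, 9, 20, 17, 11]
afternoon = [13, 14, 10, 19, 18, 11]
test_retest = pearson_r(morning, afternoon)  # near 1.0 for a reliable test
```

A coefficient close to 1.0 indicates that examinees kept roughly the same rank order across the two sessions.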
2. Equivalent Forms Reliability is based on the successive administration of two parallel forms of the same test. A good example is the SAT, which is published in parallel forms measuring Verbal and Math skills. Scores on two forms measuring Math should be highly correlated, and that correlation documents reliability.
3. Split-Half Reliability is estimated from a single administration: take one test, for example the SAT Math test, and divide its items into two halves. If scores on the first half of the items correlate highly with scores on the second half, the test is reliable.
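Because each half contains only half the items, the half-test correlation understates the reliability of the full-length test; the Spearman-Brown formula is the standard correction. A sketch using hypothetical item scores (rows are examinees, columns are items) and an odd-even split:

```python
from statistics import mean

def pearson_r(x, y):
    # Pearson correlation between two equal-length lists of scores
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x)
                  * sum((b - my) ** 2 for b in y)) ** 0.5

def split_half_reliability(item_scores):
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd  = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]   # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    # Spearman-Brown: step the half-test correlation up to full test length
    return 2 * r_half / (1 + r_half)

# Hypothetical data: five examinees, four items
scores = [
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
    [2, 1, 1, 2],
    [5, 5, 4, 5],
]
reliability = split_half_reliability(scores)
```

An odd-even split is used here rather than first half versus second half so that fatigue or item ordering effects are spread evenly across the two halves.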
4. Internal Consistency Reliability is used when only one form of the test is available and you want to verify that the items are homogeneous, that is, all measuring the same construct. To do this, you use statistical procedures such as KR-20 (for items scored right/wrong) or Cronbach's Alpha.
For more information, see Handbook in Research and Evaluation for Education and the Behavioral Sciences by Stephen Isaac and William B. Michael.
Three factors affect the reliability of an instrument:
* Length - other things being equal, the more items a test has, the more reliable it is.
* Level of difficulty - items of moderate difficulty spread examinees out and yield higher reliability than items that nearly everyone answers correctly or incorrectly.
* Spread of scores - the greater the variability of scores in the group tested, the higher the reliability coefficient.