This ACJ web app collects comparative judgment data that can be analyzed with Thurstone's law of comparative judgment (CJ), which is related to Rasch modeling and Item Response Theory in modern-day psychometrics. Comparative judgment is a measurement tool for estimating an unknown parameter, a latent attribute, of a group of items from the perspective of one or more judges. The attribute investigated with this method should be a holistic one that all items share to varying degrees.
The value of collecting and analyzing comparative judgment data comes from the resulting estimates of each script's rank and score on the shared attribute, along with statistics describing the reliability of those estimates.
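For a concrete sense of how pairwise judgments become scores, here is a minimal sketch of Thurstone Case V scaling in Python, using made-up win counts. This is an illustration only; the app's actual estimation routine may differ.

```python
# A minimal sketch of Thurstone Case V scaling (hypothetical data;
# not necessarily this app's analysis pipeline).
from statistics import NormalDist

items = ["A", "B", "C"]
# wins[i][j] = number of judgments preferring item i over item j
wins = {
    "A": {"B": 8, "C": 9},
    "B": {"A": 2, "C": 7},
    "C": {"A": 1, "B": 3},
}

inv = NormalDist().inv_cdf

def case_v_scores(items, wins):
    scores = {}
    for i in items:
        zs = []
        for j in items:
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # Clamp proportions away from 0 and 1 so inv_cdf stays finite.
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            zs.append(inv(p))
        scores[i] = sum(zs) / len(zs)  # mean z-score = Case V scale value
    return scores

scores = case_v_scores(items, wins)
ranking = sorted(items, key=scores.get, reverse=True)
print(scores, ranking)
```

The scale values are relative: only differences between items are meaningful, which is why CJ output is best read as ranks plus scores rather than absolute measurements.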
Users of the app are educators who assess other educators' students' work, upload their own students' work for other educators to assess, or both. Users agree to the Terms of Use and Assessment Privacy Policy protecting personal information.
Users who are judges have been assigned one or more sets of anonymous scripts. Select a set from the Compare menu, and two scripts will be presented side by side. Using your own criteria, decide whether the left or right script is more ______. (The page will prompt you with the comparison term for that set.) Make sure to view all pages of each PDF file before making a judgment.
Keep making comparisons until you reach the limit, after which you will no longer be presented with pairs from that set. As you progress, you will be shown pairs that are more and more similar. Use the criteria you've developed, but don't overthink your decisions or worry about getting one wrong: there will be plenty of comparisons to account for ambiguity.
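Why the pairs get more similar: adaptive CJ engines typically choose the next pair from scripts whose provisional scores are closest, since those comparisons carry the most information. A hypothetical sketch of that idea (not necessarily the app's actual pairing algorithm):

```python
# Hypothetical adaptive pair selection: once provisional scores exist,
# present the not-yet-judged pair with the closest scores.
from itertools import combinations

def next_pair(scores, judged):
    """scores: {script_id: provisional score}; judged: set of frozensets."""
    candidates = [
        pair for pair in combinations(scores, 2)
        if frozenset(pair) not in judged
    ]
    if not candidates:
        return None  # judging limit reached for this set
    return min(candidates, key=lambda p: abs(scores[p[0]] - scores[p[1]]))

scores = {"s1": 0.9, "s2": 0.1, "s3": 0.5, "s4": 0.45}
judged = {frozenset(("s3", "s4"))}
print(next_pair(scores, judged))  # -> ('s2', 's4'), the closest unjudged pair
```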
Select a set from the Comparisons menu to see a table of all comparisons you've made so far. Clicking on the script ID code will let you inspect each one.
Select a set from the My Results menu to see dynamically computed statistics based on the comparisons you've made. The purpose of this table is to show your progress as you make comparisons. The reliability and validity of these rankings and statistics are not sufficient for educational decision-making.
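For context on the reliability caveat, one index commonly reported for CJ score estimates is Rasch-style separation reliability, the proportion of observed score variance not attributable to estimation error. A hedged sketch with hypothetical numbers (the app may compute a different statistic):

```python
# Separation reliability: (observed variance - mean squared SE) / observed variance.
# Values near 1 mean scores are well separated relative to their uncertainty.
from statistics import pvariance

def separation_reliability(scores, std_errors):
    obs_var = pvariance(scores)                       # observed score variance
    mse = sum(se ** 2 for se in std_errors) / len(std_errors)
    true_var = max(obs_var - mse, 0.0)                # error-corrected variance
    return true_var / obs_var if obs_var else 0.0

scores = [1.2, 0.4, -0.3, -1.1]
std_errors = [0.5, 0.45, 0.5, 0.55]
print(round(separation_reliability(scores, std_errors), 2))
```

A single judge's comparisons rarely yield enough data for this index to reach acceptable levels, which is why the single-judge table is a progress indicator only.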
Select a set from the Group Results menu to see how ranks and scores are estimated by combining the comparisons of up to three judges. When the similarity of the three judges' rank orders of the scripts reaches an acceptable level, the scores will be finalized.
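One standard way to quantify the similarity of two judges' rank orders is Kendall's tau, the balance of concordant over discordant script pairs. A hypothetical sketch (the app's actual consensus criterion is not specified here):

```python
# Kendall's tau between two judges' rank orders: +1 = identical order,
# -1 = reversed order, 0 = no association.
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """rank_a, rank_b: {script_id: rank position} from two judges."""
    ids = list(rank_a)
    concordant = discordant = 0
    for i, j in combinations(ids, 2):
        a = rank_a[i] - rank_a[j]
        b = rank_b[i] - rank_b[j]
        if a * b > 0:
            concordant += 1    # judges order this pair the same way
        elif a * b < 0:
            discordant += 1    # judges order this pair oppositely
    n = len(ids)
    return (concordant - discordant) / (n * (n - 1) / 2)

judge1 = {"s1": 1, "s2": 2, "s3": 3, "s4": 4}
judge2 = {"s1": 1, "s2": 3, "s3": 2, "s4": 4}
print(kendall_tau(judge1, judge2))  # ~0.67: high but imperfect agreement
```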