A sample ranking might look like:
Judge:   Xavier  Yeats  Zane
Alan        1      1     2
Beth        3      2     1
Carter      4      3     3
Dima        2      4     4
Now we have each judge's ranking of the competitors, but we don't know their weighting systems. Given the similarities or differences across the judges' rankings, can we say anything about how similar their systems are, or about how far apart the competitors are in quality? In the example above, Alan looks like a better competitor than Dima. Can we make this precise somehow?
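One standard way to make pairwise agreement precise is a rank correlation coefficient such as Kendall's tau, which compares how often two judges order a pair of competitors the same way. A minimal sketch on the table above (computed by hand rather than with a statistics library, so the mechanics are visible):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation: (concordant - discordant) / total pairs.
    +1 means identical orderings, -1 reversed, near 0 means no association."""
    pairs = list(combinations(range(len(a)), 2))
    concordant = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) > 0)
    discordant = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) < 0)
    return (concordant - discordant) / len(pairs)

# Ranks from the table, in competitor order Alan, Beth, Carter, Dima.
xavier = [1, 3, 4, 2]
yeats  = [1, 2, 3, 4]
zane   = [2, 1, 3, 4]

print(kendall_tau(xavier, yeats))  # 1/3: mild agreement
print(kendall_tau(xavier, zane))   # 0.0: no net agreement
print(kendall_tau(yeats, zane))    # 2/3: strong agreement
```

So even in this tiny example, Yeats and Zane look like they use similar systems, while Xavier is the odd one out. (`scipy.stats.kendalltau` computes the same statistic, with tie handling, if a library is available.)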
It seems like too much information is lost. If the judges have similar ranking systems, then agreement among their rankings suggests the competitors were spread out, while random-looking rankings suggest the competitors were bunched together. But if the judges' systems are different enough, the results will look random either way. Is there any way to quantify this?
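For agreement across all judges at once (rather than pairwise), the usual statistic is Kendall's coefficient of concordance W, which runs from 0 (no agreement) to 1 (unanimous). A minimal sketch applied to the example table:

```python
def kendall_w(rankings):
    """Kendall's coefficient of concordance for m judges ranking n items.
    rankings: list of m rank lists, each a permutation of 1..n (no ties).
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
    of the per-item rank sums from their mean."""
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Rows are judges Xavier, Yeats, Zane; entries are ranks of
# Alan, Beth, Carter, Dima in that order.
table = [[1, 3, 4, 2], [1, 2, 3, 4], [2, 1, 3, 4]]
print(kendall_w(table))  # 0.6
```

W = 0.6 says the three judges agree considerably more than chance, but it can't by itself separate the two explanations above (similar systems + spread-out competitors versus partially overlapping systems), which is exactly the question.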
To make the setting more concrete, an assumption that sounds reasonable to me is to represent competitors by vectors in a high-dimensional Euclidean space (one dimension for each criterion the judges care about) and to represent judges by (semi)norms on this space; a judge scores a competitor by taking its norm.
Anyone have any thoughts? Is there any other information about the judges that could reveal their correlations with one another (e.g., their rankings of another set of competitors)?