Yeah, sorry, I had a brain fart with the (A,B,C) thing.
Your choice of weight function basically decides what the optimal choices are. Even a reasonable restriction (the weight is monotonically increasing for each student the higher the choice they receive) only narrows you down to the Pareto-optimal solutions.
Beyond that, your choice of weight function picks which of those solutions counts as optimal.
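To make that concrete, here's a brute-force sketch on a made-up 3-student, 3-slot example (the names, preference lists, and the sum-of-ranks weight are all my own assumptions, not anything from the scheme itself): monotonicity alone leaves a whole Pareto frontier, and the weight function then picks one point on it.

```python
# Toy example: enumerate every assignment, keep the Pareto-optimal
# ones, then let a weight function choose among them.
from itertools import permutations

# Hypothetical preference lists, best choice first.
prefs = {"X": ["a", "b", "c"], "Y": ["b", "c", "a"], "Z": ["b", "a", "c"]}
students = list(prefs)

def ranks(assignment):
    """Which rank each student got (1 = first choice)."""
    return tuple(prefs[s].index(slot) + 1 for s, slot in zip(students, assignment))

def dominates(u, v):
    """u Pareto-dominates v: nobody worse off, somebody strictly better."""
    return all(a <= b for a, b in zip(u, v)) and u != v

all_assignments = list(permutations(["a", "b", "c"]))
vectors = {a: ranks(a) for a in all_assignments}
pareto = [a for a in all_assignments
          if not any(dominates(vectors[b], vectors[a]) for b in all_assignments)]

# One possible weight function: minimize the total rank handed out.
best = min(pareto, key=lambda a: sum(vectors[a]))
print([vectors[a] for a in pareto])   # the Pareto frontier
print(best, vectors[best])            # the frontier point this weight picks
```

With these particular preferences the frontier has three incomparable assignments, and swapping in a different monotonic weight (say, minimizing the worst rank anyone gets) can legitimately pick a different one.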
The easiest optimization is the greedy algorithm -- pick a student at random, give them their highest still-available choice, repeat until everyone is placed. A variation adds some jitter to it -- so you sometimes check whether dislodging a prior student from their slot is an improvement. Hill climbing with mutation, or genetic algorithms, give yet other ways to explore the search space.
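The greedy pass plus a jitter step might look like the sketch below (same caveats: the student names, preferences, and sum-of-ranks score are illustrative assumptions; the jitter here is the simplest kind -- keep applying any pairwise swap of two students' slots that strictly improves the score):

```python
# Greedy assignment in random student order, then hill-climb by
# swapping pairs of students' slots while that helps.
import random
from itertools import combinations

prefs = {"X": ["a", "b", "c"], "Y": ["b", "c", "a"], "Z": ["b", "a", "c"]}

def rank(student, slot):
    return prefs[student].index(slot) + 1   # 1 = first choice

def greedy(order):
    """Each student in turn takes their highest still-available choice."""
    assignment, taken = {}, set()
    for s in order:
        slot = next(c for c in prefs[s] if c not in taken)
        assignment[s] = slot
        taken.add(slot)
    return assignment

def score(assignment):
    return sum(rank(s, slot) for s, slot in assignment.items())

def jitter(assignment):
    """Hill climb: apply any strictly improving pairwise swap, repeat."""
    improved = True
    while improved:
        improved = False
        for s, t in combinations(assignment, 2):
            swapped = dict(assignment, **{s: assignment[t], t: assignment[s]})
            if score(swapped) < score(assignment):
                assignment, improved = swapped, True
    return assignment

order = list(prefs)
random.shuffle(order)            # pick students in random order
result = jitter(greedy(order))
print(result, score(result))
```

On this tiny instance the swap step always rescues a bad greedy order; on real-sized instances you'd restart from many random orders (or mutate, genetic-algorithm style) and keep the best result, since a single hill climb can get stuck in a local optimum.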
It is easy to introduce perverse incentives into such a scheme. E.g., where failing to list additional choices hurts you, or where listing 3 really popular choices as your first 3 pretty much guarantees you your 4th (as opposed to listing your real choice first and the 3 really popular ones 2nd, 3rd and 4th, which might instead be more likely to land you your 5th choice...). On the other hand, if people don't get to examine the scheme, and you change it fast enough, they can't learn the perverse incentives.