We model an assignment problem at a small private school. Each grade of 80 to 100 students must be divided into four all-day class groups. In assigning students to classes, the school aims to satisfy several hard constraints (e.g., not placing certain pairs of students in the same class, and ensuring that all four classes have roughly the same number of students) and soft constraints (e.g., placing two specific friends together). In previous work, a model was developed to assist with creating these class placements, implementing the constraints and objectives given by administrators at the school.
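To make the hard/soft distinction concrete, here is a minimal sketch (not the school's actual model) of how such constraints could be checked for a candidate assignment. All names and the size-balance tolerance are hypothetical.

```python
# Hypothetical sketch: hard and soft constraints for assigning students
# to four classes. An assignment maps each student id to a class index.

def is_feasible(assignment, separated_pairs, num_classes=4, tolerance=2):
    """Check the hard constraints for a {student: class_index} assignment."""
    # Hard constraint 1: certain pairs of students must not share a class.
    for a, b in separated_pairs:
        if assignment[a] == assignment[b]:
            return False
    # Hard constraint 2: all classes must have roughly equal sizes.
    sizes = [0] * num_classes
    for cls in assignment.values():
        sizes[cls] += 1
    return max(sizes) - min(sizes) <= tolerance

def soft_score(assignment, friend_pairs):
    """Count satisfied soft constraints, e.g. friends placed together."""
    return sum(1 for a, b in friend_pairs if assignment[a] == assignment[b])

# Example: six students spread over four classes.
assignment = {"s1": 0, "s2": 1, "s3": 2, "s4": 3, "s5": 0, "s6": 1}
print(is_feasible(assignment, separated_pairs=[("s1", "s5")]))  # False: s1, s5 share class 0
print(soft_score(assignment, friend_pairs=[("s2", "s6")]))      # 1: s2, s6 both in class 1
```

A real formulation would express these same conditions as linear constraints over binary assignment variables; the check above only illustrates their meaning.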
Typically, the school wants to see multiple good solutions that are as different from each other as possible: a placement may score well mathematically in the model yet turn out not to work in practice, so administrators need several dissimilar candidates to choose from.
The previous method was not well suited to finding multiple solutions, so we created an implementation that skips the second step and instead uses integer linear programming (ILP) to generate many feasible solutions, which are then scored against non-linear criteria. Finding solutions that score well this way requires generating a large pool of feasible solutions with the ILP and picking the best ones at the end, and generating that pool is time-consuming. We aim to improve on this with parallelization.
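The generate-then-score pattern described above can be sketched as follows. A real implementation would call an ILP solver for each candidate; here, randomized balanced assignment stands in for the solver, and the square-root objective is just a hypothetical example of a non-linear score.

```python
import random

def generate_candidate(students, num_classes=4, seed=None):
    """Produce one balanced assignment (stand-in for one ILP solve)."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    # Round-robin over classes keeps the sizes balanced.
    return {s: i % num_classes for i, s in enumerate(shuffled)}

def nonlinear_score(assignment, friend_pairs):
    """Hypothetical non-linear objective: diminishing returns on satisfied pairs."""
    satisfied = sum(1 for a, b in friend_pairs if assignment[a] == assignment[b])
    return satisfied ** 0.5  # the square root makes this non-linear

# Generate a pool of feasible candidates, then keep the best-scoring one.
students = [f"s{i}" for i in range(12)]
friends = [("s0", "s1"), ("s2", "s3")]
candidates = [generate_candidate(students, seed=k) for k in range(50)]
best = max(candidates, key=lambda a: nonlinear_score(a, friends))
```

The key structural point is that each candidate is generated independently, which is what later makes the workload easy to parallelize.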
Can we parallelize this model and achieve meaningful speed improvements by splitting up the work?
All of our data comes from actual data entered by administrators or teachers at the school. We removed all students' names to protect their identities. The data is stored in tab-delimited text files that our software can read in easily. We also included an Excel file in our GitHub repo that collects all of the data in one place.
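Reading such tab-delimited files is straightforward with the standard library; a minimal sketch follows. The column names shown are hypothetical, not the project's actual schema.

```python
import csv
import io

def parse_students(text):
    """Parse tab-delimited student records into a list of dicts.

    In the real pipeline this text would come from one of the input
    files; the columns here are illustrative only.
    """
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

sample = "id\tgrade\n101\t5\n102\t5\n"
records = parse_students(sample)
# records == [{"id": "101", "grade": "5"}, {"id": "102", "grade": "5"}]
```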
We achieved excellent speedups (over 100x) by parallelizing this problem with MPI, reducing the run time from almost 4 hours to about 2 minutes on a cluster of 256 processors. MPI proved to be a powerful and flexible way to parallelize LP solving. You can read more about the project results here.
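Because each feasible solution is generated independently, the workload is embarrassingly parallel. The project used MPI on a cluster; the sketch below shows the same structure with Python's `multiprocessing` on a single machine as a stand-in, with a trivial placeholder for the per-seed ILP solve.

```python
from multiprocessing import Pool

def solve_one(seed):
    """Stand-in for one independent feasible-solution generation.

    A real worker would build and solve the ILP with this seed and
    return the resulting placement and its non-linear score; here we
    return a hypothetical (solution id, score) pair instead.
    """
    return (seed, seed * seed % 97)

def parallel_search(num_candidates, workers=4):
    # Distribute independent solves across workers, then reduce by
    # keeping the best-scoring result (the MPI version does the same
    # with a scatter of seeds and a gather of scored solutions).
    with Pool(workers) as pool:
        results = pool.map(solve_one, range(num_candidates))
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    best = parallel_search(100)
```

Since there is no communication between solves, speedup scales close to linearly with the number of processors, which is consistent with the roughly 100x improvement observed on 256 processors.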