---
geometry: margin=2cm
output: pdf_document
title: CSCI 5451 Assignment 1
date: \today
author: 'Michael Zhang <zhan4854@umn.edu> $\cdot$ ID: 5289259'
---
- A short description of how you went about parallelizing the classification algorithm. You should include how you decomposed the problem and why, i.e., what were the tasks being parallelized.
The parallelization I used is deliberately simple: I just parallelize the outer iterations, and I use the same approach in both the OpenMP and the pthreads implementations.
I didn't go further because breaking the for loops down any more incurred more overhead from managing the parallelism than it gained back. I ran this several times, and the gains were either negligible or the parallel version was actually slower than the serial one.
This is also partly because I had already inlined most of the calculations so that they need as few loops as possible, moved all allocations to the top level, and arranged the data buffer in column-major order, since the iteration pattern goes by dimension rather than by row.
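To make the decomposition concrete, here is a minimal OpenMP sketch, not my actual assignment code: it assumes, for illustration, that the parallelized outer loop runs over dimensions, keeps the inner loop over samples serial, and indexes a column-major buffer so each thread streams through one contiguous column at a time. All names, and the per-dimension computation itself, are stand-ins.

```c
/* Compile with: gcc -fopenmp sketch.c -o sketch */
#include <omp.h>
#include <stdio.h>

/* Minimal sketch of the decomposition described above (not the actual
   assignment code): the outermost loop over dimensions is split across
   threads, the inner loop over samples stays serial, and the buffer is
   column-major so dimension d occupies one contiguous run of n doubles. */
static void per_dimension_pass(const double *data_colmajor, const double *residual,
                               double *update, int n, int dim, int n_threads)
{
    #pragma omp parallel for num_threads(n_threads) schedule(static)
    for (int d = 0; d < dim; d++) {
        const double *col = &data_colmajor[(size_t)d * n];  /* contiguous column for dimension d */
        double num = 0.0, den = 0.0;
        for (int i = 0; i < n; i++) {   /* inner loop over samples stays serial */
            num += col[i] * residual[i];
            den += col[i] * col[i];
        }
        update[d] = (den != 0.0) ? num / den : 0.0;  /* stand-in per-dimension computation */
    }
}

int main(void)
{
    /* Toy column-major data: 4 samples, 3 dimensions. */
    int n = 4, dim = 3;
    double data[] = {1, 0, 2, 1,    /* dimension 0 */
                     0, 1, 1, 1,    /* dimension 1 */
                     2, 1, 0, 1};   /* dimension 2 */
    double residual[] = {3, 2, 4, 3};
    double update[3];

    per_dimension_pass(data, residual, update, n, dim, 4);
    for (int d = 0; d < dim; d++)
        printf("update[%d] = %.4f\n", d, update[d]);
    return 0;
}
```

A pthreads version of the same decomposition would simply split the range of `d` into per-thread chunks by hand instead of using the pragma.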
- Timing results for 1, 2, 4, 8, and 16 threads for the classification. You should include results with outer iterations set to 10.
All runs use 10 outer iterations, and the times below are the reported "Program time (compute)". The small-dataset runs pass `./dataset/small_data.csv` as both the data and label file; the MNIST runs pass `./dataset/MNIST_data.csv` and `./dataset/MNIST_label.csv`.

| Threads | `lc_pthreads`, small | `lc_pthreads`, MNIST | `lc_openmp`, small | `lc_openmp`, MNIST |
|---|---|---|---|---|
| 1  | 0.0069 s | 21.5287 s | 0.0033 s | 21.7196 s |
| 2  | 0.0027 s | 10.6175 s | 0.0017 s | 10.4035 s |
| 4  | 0.0027 s | 5.2198 s  | 0.0011 s | 5.2449 s  |
| 8  | 0.0033 s | 4.5690 s  | 0.0020 s | 4.1550 s  |
| 16 | 0.0031 s | 3.6433 s  | 0.0032 s | 3.5328 s  |
This data was generated by running `run_benchmark.sh > out.txt`.
## Notes
I noticed that the loss sometimes fluctuates rather wildly. I think this is because there is no fixed learning rate: instead of moving incrementally, we take each dimension's minimum and combine them all at once. In Wikipedia's description of the algorithm, each outer iteration picks the single $w_i$ whose update produces the minimal loss on its own, and only that $w_i$ is changed for that iteration. I suspect this would converge more smoothly, but for the sake of the assignment I stuck to the algorithm described in the PDF, since I'm guessing the effectiveness of the machine-learning model isn't the important thing here.
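For comparison, here is a small, self-contained sketch of that greedy variant, written against a made-up squared-error problem rather than the assignment's classifier. The data and every name here are invented for illustration; this is not what my submission implements.

```c
#include <stdio.h>

#define N    4   /* samples */
#define DIM  3   /* dimensions */

/* Squared-error loss of w on a tiny hard-coded dataset (illustrative only;
   the assignment's loss and data are different). */
static double loss(double X[N][DIM], double y[N], double w[DIM])
{
    double total = 0.0;
    for (int i = 0; i < N; i++) {
        double pred = 0.0;
        for (int d = 0; d < DIM; d++)
            pred += X[i][d] * w[d];
        double diff = pred - y[i];
        total += diff * diff;
    }
    return total;
}

int main(void)
{
    /* Made-up data; the real program reads CSV files instead. */
    double X[N][DIM] = {{1, 0, 2}, {0, 1, 1}, {2, 1, 0}, {1, 1, 1}};
    double y[N]      = {3, 2, 4, 3};
    double w[DIM]    = {0, 0, 0};

    for (int iter = 0; iter < 10; iter++) {
        int best_d = -1;
        double best_val = 0.0;
        double best_loss = loss(X, y, w);   /* loss if nothing changes this iteration */

        for (int d = 0; d < DIM; d++) {
            /* Minimizer of the squared loss over w[d] alone, holding the
               other weights fixed. */
            double num = 0.0, den = 0.0;
            for (int i = 0; i < N; i++) {
                double resid = y[i];
                for (int k = 0; k < DIM; k++)
                    if (k != d)
                        resid -= X[i][k] * w[k];
                num += X[i][d] * resid;
                den += X[i][d] * X[i][d];
            }
            if (den == 0.0)
                continue;
            double cand = num / den;

            /* Evaluate the loss as if only w[d] were updated. */
            double saved = w[d];
            w[d] = cand;
            double l = loss(X, y, w);
            w[d] = saved;

            if (l < best_loss) {
                best_loss = l;
                best_d    = d;
                best_val  = cand;
            }
        }

        if (best_d >= 0)
            w[best_d] = best_val;   /* keep only the single best coordinate update */
        printf("iter %d  loss %.4f\n", iter, loss(X, y, w));
    }
    return 0;
}
```

Because only the single loss-minimizing coordinate is applied per outer iteration, the loss in this variant can never increase, which is why I suspect it would avoid the fluctuation I observed.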
Also, at the end of the program there is a step that validates the trained model on a train/test split of the data. I didn't count this towards execution time, but I felt it was important enough to keep, since it ensures the program is still behaving correctly.
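To illustrate what the reported times do and do not include, here is a hypothetical sketch of the timing structure; `train()`, `validate()`, and the timing helper are placeholders, not the actual functions in my code.

```c
#include <stdio.h>
#include <time.h>

/* Illustrative wall-clock helper. */
static double monotonic_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void train(void)    { /* placeholder: the parallelized outer iterations */ }
static void validate(void) { /* placeholder: accuracy check on the held-out split */ }

int main(void)
{
    double t0 = monotonic_seconds();
    train();                                        /* only this region is timed */
    double t1 = monotonic_seconds();
    printf("Program time (compute): %0.4fs\n", t1 - t0);
    validate();                                     /* runs after the timer stops */
    return 0;
}
```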