In this assignment, you will build a spam classifier trained using stochastic gradient descent in Spark, replicating the work described in Efficient and Effective Spam Filtering and Re-ranking for Large Web Datasets by Cormack, Smucker, and Clarke. We will draw your attention to specific sections of the paper that are directly pertinent to the assignment, but you should read the entire paper for background.
If you're working on your laptop, grab the training and test data:
```
wget https://www.student.cs.uwaterloo.ca/~cs451/spam/spam.train.group_x.txt.bz2
wget https://www.student.cs.uwaterloo.ca/~cs451/spam/spam.train.group_y.txt.bz2
wget https://www.student.cs.uwaterloo.ca/~cs451/spam/spam.train.britney.txt.bz2
wget https://www.student.cs.uwaterloo.ca/~cs451/spam/spam.test.qrels.txt.bz2
```
Otherwise, the datasets are at `~cs451/public_html/spam/` in the Linux Student CS environment.
Just to verify what you've downloaded:
File | MD5 | Size |
---|---|---|
spam.train.group_x.txt.bz2 | 947faf932afee7e35d79e7da10fe0e3e | 5.5 MB |
spam.train.group_y.txt.bz2 | 7cf45c3666915999f1048aafeff4c60e | 6.6 MB |
spam.train.britney.txt.bz2 | bad2e4ccaed7482f9e99e65e58c6beda | 248 MB |
spam.test.qrels.txt.bz2 | 99858d9dd1b40e994732a641703859ec | 303 MB |
After you've downloaded the data, unpack:
```
bunzip2 spam.train.group_x.txt.bz2
bunzip2 spam.train.group_y.txt.bz2
bunzip2 spam.train.britney.txt.bz2
bunzip2 spam.test.qrels.txt.bz2
```
Verify the unpacked data:
File | MD5 | Size |
---|---|---|
spam.train.group_x.txt | d6897ed8319c71604b1278b660a479b6 | 25 MB |
spam.train.group_y.txt | 4d103821fdf369be526347b503655da5 | 20 MB |
spam.train.britney.txt | b52d54caa20325413491591f034b5e7b | 766 MB |
spam.test.qrels.txt | df1d26476ec41fec625bc2eb9969875c | 1.1 GB |
Next, download the two files you'll need for evaluating the output of the spam classifier (links below):
Compile the C program:
```
gcc -O2 -o compute_spam_metrics compute_spam_metrics.c -lm
```
You might get some warnings but don't worry—the code should compile fine. The actual evaluation script `spam_eval.sh` (and `spam_eval_hdfs.sh`) calls `compute_spam_metrics`, so make sure they're in the same directory.
Note on local vs. the Datasci cluster: for this assignment, your code must (eventually) work on the Datasci cluster, but feel free to develop locally or in the Linux Student CS environment. The instructions below are written for running locally, but in a separate section later we will cover details specific to the Datasci cluster.
In this assignment, we'll take you through building spam classifiers of increasing complexity, but let's start with a basic implementation using stochastic gradient descent. Build the spam classifier in exactly the way we describe below, because later parts of the assignment will depend on the setup.
First, let's write the classifier trainer. The classifier trainer takes all the training instances, runs stochastic gradient descent, and produces a model as output.
Look at the Cormack, Smucker, and Clarke paper: the entire algorithm is literally 34 lines of C, shown in Figure 2 on page 10. The stochastic gradient descent update equations are in equations (11) and (12) on page 11. We actually made things even simpler for you: the features used in the spam classifier are hashed byte 4-grams (thus, integers)—we've pre-computed the features for you.
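For reference, the update those equations describe, restated in the notation of the code fragment below (where $y$ is 1 for spam and 0 for ham, and $\delta$ is the learning rate):

$$p = \frac{1}{1 + e^{-\mathrm{score}}}, \qquad w_f \leftarrow w_f + (y - p)\,\delta \quad \text{for each feature } f \text{ of the instance.}$$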
Take a look at `spam.train.group_x.txt`. The first line begins as follows:
```
clueweb09-en0094-20-13546 spam 387908 697162 426572 161118 688171 ...
```
In the file, each training instance is on its own line. Each line consists of a document id, the string "spam" or "ham" (the label), and a list of integers (the features).
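Parsing a line is then just a matter of splitting on whitespace. A minimal sketch (the variable names are only for illustration):

```scala
// Split a training instance into its three parts, per the format above.
val tokens = line.split(" ")
val docid = tokens(0)
val isSpam = if (tokens(1) == "spam") 1.0 else 0.0 // encode the label as 1/0
val features = tokens.drop(2).map(_.toInt)         // hashed byte 4-grams
```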
Therefore, your spam classifier will look something like this:
```scala
// w is the weight vector (make sure the variable is within scope);
// it must be a mutable map, since we update it during training.
val w = scala.collection.mutable.Map[Int, Double]()

// Scores a document based on its list of features.
def spamminess(features: Array[Int]): Double = {
  var score = 0d
  features.foreach(f => if (w.contains(f)) score += w(f))
  score
}

// This is the main learner:
val delta = 0.002

// For each instance...
val isSpam = ...   // label: 1.0 for spam, 0.0 for ham
val features = ... // feature vector of the training instance

// Update the weights as follows:
val score = spamminess(features)
val prob = 1.0 / (1 + math.exp(-score))
features.foreach(f => {
  if (w.contains(f)) {
    w(f) += (isSpam - prob) * delta
  } else {
    w(f) = (isSpam - prob) * delta
  }
})
```
We've given you the code fragment for the learner above as a starting point—it's your job to understand exactly how it works and turn it into a complete classifier trainer in Spark.
For the structure of the Spark trainer program, take a look at Part 07b, slide 13. We're going to build the configuration shown there (even though the slide says MapReduce, we're implementing it in Spark). Specifically, we're going to run a single reducer to make sure we pump all the training instances through a single learner on the reducer end. The overall structure of your program is going to look something like this:
```scala
val textFile = sc.textFile("/path/to/training/data")
val trained = textFile.map(line => {
  // Parse input
  // ...
  (0, (docid, isSpam, features))
}).groupByKey(1)
// Then run the trainer...
trained.saveAsTextFile(...)
```
Note the mappers are basically just parsing the feature vectors and pushing them over to the reducer side for additional processing. We emit "0" as a "dummy key" to make sure all the training instances get collected at the reducer end via `groupByKey()`... after which you run the trainer (which applies the SGD updates, per above). Of course, it's your job to figure out how to connect the pieces together. This is the crux of the assignment.
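To make the structure concrete, here is one possible way the pieces might fit together, assuming the parsing sketch above. This is only a sketch; details such as argument handling and the exact model output are still up to you:

```scala
// A sketch only: one dummy key, one group, one sequential SGD pass.
val trained = textFile.map(line => {
  val tokens = line.split(" ")
  val isSpam = if (tokens(1) == "spam") 1.0 else 0.0
  val features = tokens.drop(2).map(_.toInt)
  (0, (tokens(0), isSpam, features))
}).groupByKey(1).flatMap { case (_, instances) =>
  val w = scala.collection.mutable.Map[Int, Double]()
  val delta = 0.002
  instances.foreach { case (_, isSpam, features) =>
    val score = features.foldLeft(0d)((s, f) => s + w.getOrElse(f, 0d))
    val prob = 1.0 / (1 + math.exp(-score))
    features.foreach(f => w(f) = w.getOrElse(f, 0d) + (isSpam - prob) * delta)
  }
  w.toSeq // emit (feature, weight) pairs as the model
}
```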
Putting everything together, you will write a trainer program called `TrainSpamClassifier` that we will execute in the following manner:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.TrainSpamClassifier \
 target/assignments-1.0.jar --input spam.train.group_x.txt --model cs451-bigdatateach-a5-model-group_x
```
The `--input` option specifies the input training instances (from above); the `--model` option specifies the output directory where the model goes. Inside the model directory `cs451-bigdatateach-a5-model-group_x`, there should be a single file, `part-00000`, that contains the trained model. The trained model should be a sequence of tuples, one on each line; each tuple should contain a feature and its weight (a double value). Something like:
```
$ head -5 cs451-bigdatateach-a5-model-group_x/part-00000
(547993,2.019484093190069E-4)
(577107,5.255371091500805E-5)
(12572,-4.40967560913553E-4)
(270898,-0.001340150007664197)
(946531,2.560528666942676E-4)
```
Next, you will write another Spark program named `ApplySpamClassifier` that will apply the trained spam classifier to the test instances. That is, the program will read in each input instance, compute the spamminess score (from above), and make a prediction: if the spamminess score is above 0, classify the document as spam; otherwise, classify the document as ham.
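The per-document logic is just a threshold at 0. A minimal sketch, assuming the model has been loaded into the weight map `w` and `spamminess` is defined as above:

```scala
// Score the test instance and threshold at 0 to make the prediction.
val score = spamminess(features)
val prediction = if (score > 0) "spam" else "ham"
// Emit (docid, test label, score, prediction), per the output format below.
```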
We will run the program in the following manner:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.ApplySpamClassifier \
 target/assignments-1.0.jar --input spam.test.qrels.txt \
 --output cs451-bigdatateach-a5-test-group_x --model cs451-bigdatateach-a5-model-group_x
```
The `--input` option specifies the input test instances; the `--model` option specifies the classifier model; and the `--output` option specifies the output directory. The test data is organized in exactly the same way as the training data. The output of `ApplySpamClassifier` should be organized as follows:
```
$ cat cs451-bigdatateach-a5-test-group_x/* | sort | head -5
(clueweb09-en0000-00-00142,spam,2.601624279252943,spam)
(clueweb09-en0000-00-01005,ham,2.5654162439491004,spam)
(clueweb09-en0000-00-01382,ham,2.5893946346394188,spam)
(clueweb09-en0000-00-01383,ham,2.6190102258752614,spam)
(clueweb09-en0000-00-03449,ham,1.500142758578532,spam)
```
The first field in each tuple is the document id and the second field is the test label. These are just copied from the test data. The third field is the spamminess score, and the fourth field is the classifier's prediction.
Important: It is absolutely critical that your classifier does not use the label in the test data when making its predictions. The only reason the label is included in the output is to facilitate evaluation (see below).
Finally, you can evaluate your results:
```
$ ./spam_eval.sh cs451-bigdatateach-a5-test-group_x
1-ROCA%: 17.25
```
The eval script prints the evaluation metric, which is the area under the receiver operating characteristic (ROC) curve. This is a common way to characterize classifier error. The lower this score (i.e., 1 - ROCA), the better.
If you've done everything correctly up until now, you should be able to replicate the above results.
You should then be able to train on the `group_y` training set:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.TrainSpamClassifier \
 target/assignments-1.0.jar --input spam.train.group_y.txt --model cs451-bigdatateach-a5-model-group_y
```
And make predictions:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.ApplySpamClassifier \
 target/assignments-1.0.jar --input spam.test.qrels.txt \
 --output cs451-bigdatateach-a5-test-group_y --model cs451-bigdatateach-a5-model-group_y
```
And evaluate:
```
$ ./spam_eval.sh cs451-bigdatateach-a5-test-group_y
1-ROCA%: 12.82
```
Finally, train on the `britney` training set:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.TrainSpamClassifier \
 target/assignments-1.0.jar --input spam.train.britney.txt --model cs451-bigdatateach-a5-model-britney
```
And make predictions:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.ApplySpamClassifier \
 target/assignments-1.0.jar --input spam.test.qrels.txt \
 --output cs451-bigdatateach-a5-test-britney --model cs451-bigdatateach-a5-model-britney
```
And evaluate:
```
$ ./spam_eval.sh cs451-bigdatateach-a5-test-britney
1-ROCA%: 15.96
```
There may be some non-determinism in running over the `britney` dataset, so you might get something slightly different.
Here's a placeholder for question 1 that you're going to answer below (see section on the Datasci cluster).
Next, let's build an ensemble classifier. Start by gathering all the models from each of the individual classifiers into a common directory:
```
mkdir cs451-bigdatateach-a5-model-fusion
cp cs451-bigdatateach-a5-model-group_x/part-00000 cs451-bigdatateach-a5-model-fusion/part-00000
cp cs451-bigdatateach-a5-model-group_y/part-00000 cs451-bigdatateach-a5-model-fusion/part-00001
cp cs451-bigdatateach-a5-model-britney/part-00000 cs451-bigdatateach-a5-model-fusion/part-00002
```
With these three separate classifiers, implement two different ensemble techniques:

- Score averaging: the spamminess score of the ensemble is the average of the spamminess scores from the three individual classifiers.
- Voting: each of the three classifiers casts a spam/ham vote based on its own spamminess score; the spamminess score of the ensemble is the number of spam votes minus the number of ham votes.
Write a program `ApplyEnsembleSpamClassifier` that we will execute in the following manner:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.ApplyEnsembleSpamClassifier \
 target/assignments-1.0.jar --input spam.test.qrels.txt \
 --output cs451-bigdatateach-a5-test-fusion-average --model cs451-bigdatateach-a5-model-fusion --method average
```
The `--input` option specifies the input test instances. The `--model` option specifies the base directory of all the classifier models; in this directory your program should expect each individual model in a `part-XXXXX` file; it's okay to hard code the part files for convenience. The `--output` option specifies the output directory. Finally, the `--method` option specifies the ensemble technique, either "average" or "vote" per above.
Your prediction program needs to load all three models, apply the specified ensemble technique, and make predictions. Hint: Spark broadcast variables are helpful in this implementation.
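As a sketch of the two techniques, assuming `models` is an array of three broadcast feature-to-weight maps and `spamminessWith` scores a document against a given map (both names are purely illustrative):

```scala
// Score the document against each of the three models.
val scores = models.map(m => spamminessWith(m.value, features))
// Combine the scores according to the requested ensemble technique.
val ensembleScore = method match {
  case "average" => scores.sum / scores.length                            // score averaging
  case "vote"    => (scores.count(_ > 0) - scores.count(_ <= 0)).toDouble // spam votes minus ham votes
}
val prediction = if (ensembleScore > 0) "spam" else "ham"
```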
The output format of the predictions should be the same as the output of the `ApplySpamClassifier` program. You should be able to evaluate with `spam_eval.sh` in the same way. Go ahead and predict with the two ensemble techniques and evaluate the predictions. Note that ensemble techniques can sometimes improve on the best classifier; sometimes not.
Here's a placeholder for questions 2 and 3 that you're going to answer below (see section on the Datasci cluster).
How does the ensemble compare to just concatenating all the training data together and training a single classifier? Let's find out:
```
cat spam.train.group_x.txt spam.train.group_y.txt spam.train.britney.txt > spam.train.all.txt
```
Now train on this larger training set, predict, and evaluate.
Here's a placeholder for question 4 that you're going to answer below (see section on the Datasci cluster).
In class, we talked about how a model trained using stochastic gradient descent is dependent on the order in which the training instances are presented to the trainer. Let's explore this effect.
Modify `TrainSpamClassifier` to implement a new option `--shuffle`. With this option, the program will randomly shuffle the training instances before running the trainer:
```
spark-submit --driver-memory 2g --class ca.uwaterloo.cs451.a5.TrainSpamClassifier \
 target/assignments-1.0.jar --input spam.train.britney.txt --model cs451-bigdatateach-a5-model-britney-shuffle --shuffle
```
You must shuffle the data using Spark. The way to accomplish this is to generate a random number for each instance and then sort the instances by that value. That is, you cannot simply read all the training instances into memory in the driver, shuffle, and then parallelize.
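A minimal sketch of this idea (seeding the random number generator and wiring this into the existing pipeline are up to you):

```scala
import scala.util.Random

// Attach a random key to each instance, sort by it, then drop the key.
val shuffled = textFile
  .map(line => (Random.nextInt(), line))
  .sortByKey()
  .map(_._2)
```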
Obviously, the addition of the `--shuffle` option should not break existing functionality; that is, without the option, the program should behave exactly as before.
Note that in this case we're working with the `britney` data because the two other datasets have very few examples—random shuffles can lead to weird idiosyncratic effects.
You should be able to evaluate the newly trained model in exactly the same way as above. If you are getting wildly different 1-ROCA% scores each time, you're doing something wrong.
Here's a placeholder for question 5 that you're going to answer below.
You are free to develop locally on your own machine or in the Linux Student CS environment (and in fact, the instructions above assume so), but you must make sure that your code runs in the Datasci cluster also. This is just to verify that your Spark programs will work in a distributed environment, and that you are not inadvertently taking advantage of some local feature.
All training and test data are located in `/data/cs451/` on HDFS. Note that `spam.train.all.txt` has already been prepared for you in that directory also.
For example, training, predicting, and evaluating on the `group_x` dataset on the Datasci cluster:
```
spark-submit --class ca.uwaterloo.cs451.a5.TrainSpamClassifier \
 --num-executors 2 --executor-cores 2 --executor-memory 8G \
 target/assignments-1.0.jar --input /data/cs451/spam.train.group_x.txt \
 --model cs451-bigdatateach-a5-model-group_x

spark-submit --class ca.uwaterloo.cs451.a5.ApplySpamClassifier \
 --num-executors 2 --executor-cores 2 --executor-memory 8G \
 target/assignments-1.0.jar --input /data/cs451/spam.test.qrels.txt \
 --output cs451-bigdatateach-a5-test-group_x --model cs451-bigdatateach-a5-model-group_x

./spam_eval_hdfs.sh cs451-bigdatateach-a5-test-group_x
```
The major differences are:

- Use `spam_eval_hdfs.sh` for the evaluation script.
- The commands specify the Spark executor configuration (e.g., `--num-executors`). These are the settings that we'll use, so please don't change them.

Refer back to the placeholders above and answer the following questions, running your code on the Datasci cluster:
Question 1: For each of the individual classifiers trained on `group_x`, `group_y`, and `britney`, what are the 1-ROCA% scores? You should be able to replicate our results on `group_x` and `group_y`, but there may be some non-determinism for `britney`, which is why we want you to report the figures.
Question 2: What is the 1-ROCA% score of the score averaging technique in the 3-classifier ensemble?
Question 3: What is the 1-ROCA% score of the voting technique in the 3-classifier ensemble?
Question 4: What is the 1-ROCA% score of a single classifier trained on all available training data concatenated together?
Question 5: Run the shuffle trainer 10 times on the `britney` dataset; predict and evaluate the classifier on the test data each time. Report the 1-ROCA% score in each of the ten trials and compute the overall average.
Please follow these instructions carefully!
Make sure your repo has the following items:

- Your answers to the questions above in `bigdata2021f/assignment5.md`.
- Your code in the package `ca.uwaterloo.cs451.a5`. At the minimum, you should have `TrainSpamClassifier`, `ApplySpamClassifier`, and `ApplyEnsembleSpamClassifier`. Feel free to include helper code also.

Make sure your implementation runs on the Datasci cluster. The following check script is provided for you (make sure you set the `--env` flag):
When you've done everything, commit to your repo and remember to push back to origin. You should be able to see your edits in the web interface. Before you consider the assignment "complete", we would recommend that you verify everything above works by performing a clean clone of your repo and running the public check scripts.
That's it!
The entire assignment is worth 60 points:

- The implementation of `TrainSpamClassifier` is worth 15 points.
- The implementation of `ApplySpamClassifier` is worth 5 points.
- The implementation of `ApplyEnsembleSpamClassifier` is worth 6 points.
- The implementation of the `--shuffle` option in `TrainSpamClassifier` is worth 5 points.

Hint: use `sc.textFile(args.input(), 1)` to read the file. Check the API to learn what the second argument (i.e., 1) does!
To help you gauge the efficiency of your solution, we are giving you the running times of our reference implementations. Keep in mind that these are machine dependent and can vary depending on the server/cluster load.
Class name | Running time (Linux) | Running time (Datasci) |
---|---|---|
TrainSpamClassifier - group x | 9 seconds | 30 seconds |
TrainSpamClassifier - group y | 9 seconds | 30 seconds |
TrainSpamClassifier - britney | 75 seconds | 90 seconds |
TrainSpamClassifier - all | 1 minute 44 seconds | 90 seconds |
TrainSpamClassifier - shuffle britney | 1 minute 40 seconds | 1 minute 40 seconds |
ApplySpamClassifier | 15 seconds | 45 seconds |
ApplyEnsembleSpamClassifier | 24 seconds | 74 seconds |