Machine Learning Project on Mushroom Classification: Edible or Poisonous? (Part 2)

Collecting all String Columns into an Array

%scala

val stringFeatureCols = Array(
  "class", "capshape", "capsurface", "capcolor", "bruises", "odor",
  "gillattachment", "gillspacing", "gillsize", "gillcolor", "stalkshape",
  "stalkroot", "stalksurfaceabovering", "stalksurfacebelowring",
  "stalkcolorabovering", "stalkcolorbelowring", "veiltype", "veilcolor",
  "ringnumber", "ringtype", "sporeprintcolor", "population", "habitat"
)

StringIndexer encodes a string column of labels to a column of label indices.

Example of StringIndexer

%scala

import org.apache.spark.ml.feature.StringIndexer

val df = spark.createDataFrame(
  Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c"))
).toDF("id", "category")

df.show()

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")

val indexed = indexer.fit(df).transform(df)

indexed.show()

Output:

+---+--------+
| id|category|
+---+--------+
|  0|       a|
|  1|       b|
|  2|       c|
|  3|       a|
|  4|       a|
|  5|       c|
+---+--------+

+---+--------+-------------+
| id|category|categoryIndex|
+---+--------+-------------+
|  0|       a|          0.0|
|  1|       b|          2.0|
|  2|       c|          1.0|
|  3|       a|          0.0|
|  4|       a|          0.0|
|  5|       c|          1.0|
+---+--------+-------------+
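
By default, StringIndexer assigns indices in descending order of label frequency, which is why a (the most frequent value, with three occurrences) maps to 0.0, c to 1.0, and b to 2.0. If you later need to recover the original strings, IndexToString reverses the encoding. A minimal sketch, reusing the indexed DataFrame from above:

%scala

import org.apache.spark.ml.feature.IndexToString

// IndexToString reads the original labels from the metadata that
// StringIndexer attached to the categoryIndex column
val converter = new IndexToString()
  .setInputCol("categoryIndex")
  .setOutputCol("originalCategory")

converter.transform(indexed).select("id", "categoryIndex", "originalCategory").show()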

Define the Pipeline

A predictive model often requires multiple stages of feature preparation.

A pipeline consists of a series of transformer and estimator stages that typically prepare a DataFrame for modeling and then train a predictive model.

In this case, you will prepare the features in two steps:

  1. A Pipeline of StringIndexer estimators, one per categorical column, that converts string values to numeric indexes
  2. A VectorAssembler, applied separately after the train/test split, that combines the indexed columns into a single feature vector

%scala

import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.ml.Pipeline

// One StringIndexer stage per categorical column; each adds a new
// numeric column named "<original>_indexed"
val indexers = stringFeatureCols.map { colName =>
  new StringIndexer().setInputCol(colName).setOutputCol(colName + "_indexed")
}

val pipeline = new Pipeline().setStages(indexers)

val mushroomDF = pipeline.fit(mushroom).transform(mushroom)

Print the schema to verify that the string columns have been converted into equivalent numerical (indexed) columns.

%scala

mushroomDF.printSchema()

root
 |-- class: string (nullable = true)
 |-- capshape: string (nullable = true)
 |-- capsurface: string (nullable = true)
 |-- capcolor: string (nullable = true)
 |-- bruises: string (nullable = true)
 |-- odor: string (nullable = true)
 |-- gillattachment: string (nullable = true)
 |-- gillspacing: string (nullable = true)
 |-- gillsize: string (nullable = true)
 |-- gillcolor: string (nullable = true)
 |-- stalkshape: string (nullable = true)
 |-- stalkroot: string (nullable = true)
 |-- stalksurfaceabovering: string (nullable = true)
 |-- stalksurfacebelowring: string (nullable = true)
 |-- stalkcolorabovering: string (nullable = true)
 |-- stalkcolorbelowring: string (nullable = true)
 |-- veiltype: string (nullable = true)
 |-- veilcolor: string (nullable = true)
 |-- ringnumber: string (nullable = true)
 |-- ringtype: string (nullable = true)
 |-- sporeprintcolor: string (nullable = true)
 |-- population: string (nullable = true)
 |-- habitat: string (nullable = true)
 |-- class_indexed: double (nullable = false)
 |-- capshape_indexed: double (nullable = false)
 |-- capsurface_indexed: double (nullable = false)
 |-- capcolor_indexed: double (nullable = false)
 |-- bruises_indexed: double (nullable = false)
 |-- odor_indexed: double (nullable = false)
 |-- gillattachment_indexed: double (nullable = false)
 |-- gillspacing_indexed: double (nullable = false)
 |-- gillsize_indexed: double (nullable = false)
 |-- gillcolor_indexed: double (nullable = false)
 |-- stalkshape_indexed: double (nullable = false)
 |-- stalkroot_indexed: double (nullable = false)
 |-- stalksurfaceabovering_indexed: double (nullable = false)
 |-- stalksurfacebelowring_indexed: double (nullable = false)
 |-- stalkcolorabovering_indexed: double (nullable = false)
 |-- stalkcolorbelowring_indexed: double (nullable = false)
 |-- veiltype_indexed: double (nullable = false)
 |-- veilcolor_indexed: double (nullable = false)
 |-- ringnumber_indexed: double (nullable = false)
 |-- ringtype_indexed: double (nullable = false)
 |-- sporeprintcolor_indexed: double (nullable = false)
 |-- population_indexed: double (nullable = false)
 |-- habitat_indexed: double (nullable = false)
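
To check which string value was mapped to which index in a given column, you can fit the pipeline again and inspect the resulting StringIndexerModel stages. A minimal sketch: labels(i) is the original string encoded as index i (newer Spark versions also expose this via labelsArray):

%scala

import org.apache.spark.ml.feature.StringIndexerModel

// Re-fit just to inspect the mappings; alternatively, keep the
// fitted PipelineModel from the step above instead of re-fitting
val fittedPipeline = pipeline.fit(mushroom)
fittedPipeline.stages.foreach {
  case m: StringIndexerModel =>
    println(s"${m.getOutputCol}: ${m.labels.mkString(", ")}")
  case _ => // this pipeline contains only StringIndexer stages
}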


Split the Data

It is common practice when building machine learning models to split the source data, using some of it to train the model and reserving some to test the trained model. In this project, you will use 70% of the data for training, and reserve 30% for testing.

%scala

// Randomly split the data: roughly 70% for training, 30% for testing
val splits = mushroomDF.randomSplit(Array(0.7, 0.3))
val train = splits(0)
val test = splits(1)
val train_rows = train.count()
val test_rows = test.count()
println(s"Training Rows: $train_rows Testing Rows: $test_rows")
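
Note that randomSplit is not deterministic: each run can produce a slightly different split. If you want a reproducible split, pass an explicit seed, as in this optional sketch (the value 42 is arbitrary):

%scala

// Optional: fix the seed so the 70/30 split is reproducible
val Array(train, test) = mushroomDF.randomSplit(Array(0.7, 0.3), seed = 42)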

Prepare the Training Data

To train the classification model, you need a training data set that includes a vector of numeric features and a label column. In this project, you will use the VectorAssembler class to transform the feature columns into a vector, and then rename the class_indexed column to label.

VectorAssembler

VectorAssembler is a transformer that combines a given list of columns into a single vector column. It is useful for combining raw features and features generated by different feature transformers into a single feature vector, in order to train ML models like logistic regression and decision trees.

VectorAssembler accepts the following input column types: all numeric types, boolean type, and vector type.

In each row, the values of the input columns will be concatenated into a vector in the specified order.

%scala

import org.apache.spark.ml.feature.VectorAssembler

// Assemble all indexed categorical columns into one feature vector
val assembler = new VectorAssembler()
  .setInputCols(Array(
    "capshape_indexed", "capsurface_indexed", "capcolor_indexed", "bruises_indexed",
    "odor_indexed", "gillattachment_indexed", "gillspacing_indexed", "gillsize_indexed",
    "gillcolor_indexed", "stalkshape_indexed", "stalkroot_indexed",
    "stalksurfaceabovering_indexed", "stalksurfacebelowring_indexed",
    "stalkcolorabovering_indexed", "stalkcolorbelowring_indexed",
    "veiltype_indexed", "veilcolor_indexed", "ringnumber_indexed", "ringtype_indexed",
    "sporeprintcolor_indexed", "population_indexed", "habitat_indexed"))
  .setOutputCol("features")

val training = assembler.transform(train).select($"features", $"class_indexed".alias("label"))

training.show(false)

Output:

+--------------------------------------------------------------------------------------+-----+
|features                                                                              |label|
+--------------------------------------------------------------------------------------+-----+
|(22,[0,1,2,6,8,9,10,11,12,17,20,21],[3.0,2.0,1.0,1.0,4.0,1.0,1.0,1.0,1.0,1.0,3.0,1.0])|0.0  |
|(22,[0,1,2,6,8,9,10,11,17,20,21],[3.0,2.0,1.0,1.0,4.0,1.0,1.0,1.0,1.0,2.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,12,17,20,21],[3.0,2.0,1.0,1.0,4.0,1.0,1.0,1.0,1.0,3.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,17,20,21],[3.0,2.0,1.0,1.0,4.0,1.0,1.0,1.0,3.0,1.0])              |0.0  |
|(22,[0,1,2,6,8,9,10,17,20,21],[3.0,2.0,1.0,1.0,4.0,1.0,1.0,1.0,2.0,1.0])              |0.0  |
|(22,[0,1,2,6,8,9,10,11,17,20,21],[3.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,3.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,12,17,20,21],[3.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,3.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,12,17,20,21],[3.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,2.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,17,20,21],[3.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,3.0,1.0])              |0.0  |
|(22,[0,1,2,6,8,9,10,17,20,21],[3.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,2.0,1.0])              |0.0  |
|(22,[0,1,2,6,8,9,10,11,12,17,20,21],[3.0,2.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,1.0,3.0,1.0])|0.0  |
|(22,[0,1,2,6,8,9,10,11,12,17,20,21],[3.0,2.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,1.0,2.0,1.0])|0.0  |
|(22,[0,1,2,6,8,9,10,11,17,20,21],[3.0,2.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,3.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,11,17,20,21],[3.0,2.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,2.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,12,17,20,21],[3.0,2.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,3.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,17,20,21],[3.0,2.0,1.0,1.0,2.0,1.0,1.0,1.0,2.0,1.0])              |0.0  |
|(22,[0,1,2,6,8,9,10,11,12,17,20,21],[3.0,2.0,4.0,1.0,4.0,1.0,1.0,1.0,1.0,1.0,3.0,1.0])|0.0  |
|(22,[0,1,2,6,8,9,10,11,12,17,20,21],[3.0,2.0,4.0,1.0,4.0,1.0,1.0,1.0,1.0,1.0,2.0,1.0])|0.0  |
|(22,[0,1,2,6,8,9,10,11,17,20,21],[3.0,2.0,4.0,1.0,4.0,1.0,1.0,1.0,1.0,3.0,1.0])       |0.0  |
|(22,[0,1,2,6,8,9,10,11,17,20,21],[3.0,2.0,4.0,1.0,4.0,1.0,1.0,1.0,1.0,2.0,1.0])       |0.0  |
+--------------------------------------------------------------------------------------+-----+
only showing top 20 rows

Train a Classification Model

Next, you need to train a classification model using the training data. To do this, create an instance of the algorithm you want to use and call its fit method to train a model on the training DataFrame. In this project, you will use a Logistic Regression classifier, though you can use the same technique for any of the classification algorithms supported in the spark.ml API.

%scala

import org.apache.spark.ml.classification.LogisticRegression

val lr = new LogisticRegression()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setMaxIter(10)
  .setRegParam(0.3)
val model = lr.fit(training)
println("Model Trained!")
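
If you are curious what the model learned, a binary LogisticRegressionModel exposes one coefficient per feature plus an intercept. A quick sketch:

%scala

// One weight per assembled feature, in VectorAssembler input order
println(s"Coefficients: ${model.coefficients}")
println(s"Intercept: ${model.intercept}")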

Prepare the Testing Data

Now that you have a trained model, you can test it using the testing data you reserved previously. First, you need to prepare the testing data in the same way as the training data, by transforming the feature columns into a vector. This time you’ll rename the class_indexed column to trueLabel.

%scala

val testing = assembler.transform(test).select($"features", $"class_indexed".alias("trueLabel"))
testing.show()

Output:

+--------------------+---------+
|            features|trueLabel|
+--------------------+---------+
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
|(22,[0,1,2,6,8,9,...|      0.0|
+--------------------+---------+
only showing top 20 rows

Test the Model

Now you’re ready to use the transform method of the model to generate predictions. You could use this approach to predict the class of new mushrooms; but in this case, you are using the test data, which includes a known true label, so you can compare the predicted class with the actual one.

%scala

val prediction = model.transform(testing)
val predicted = prediction.select("features", "prediction", "trueLabel")
predicted.show()

Output:

+--------------------+----------+---------+
|            features|prediction|trueLabel|
+--------------------+----------+---------+
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
|(22,[0,1,5,8,9,10...|       0.0|      0.0|
+--------------------+----------+---------+      
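
Before computing a formal metric, a quick sanity check is to count how often the prediction disagrees with the true label. A small sketch:

%scala

// Count misclassified test rows
val wrong = predicted.filter($"prediction" =!= $"trueLabel").count()
println(s"Misclassified: $wrong of ${predicted.count()} test rows")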

Evaluating the Model (AUC ≈ 0.97)

%scala

import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

val evaluator = new BinaryClassificationEvaluator()
  .setLabelCol("trueLabel")
  .setRawPredictionCol("rawPrediction")
  .setMetricName("areaUnderROC")
val auc = evaluator.evaluate(prediction)
println("AUC = " + auc)

Output:

AUC = 0.9739827185870374
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
evaluator: org.apache.spark.ml.evaluation.BinaryClassificationEvaluator = BinaryClassificationEvaluator: uid=binEval_7c586b29a6e6, metricName=areaUnderROC, numBins=1000
auc: Double = 0.9739827185870374
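
BinaryClassificationEvaluator also supports the areaUnderPR metric, which can be more informative when the classes are imbalanced. A minimal sketch reusing the evaluator above:

%scala

// Area under the precision-recall curve for the same predictions
val aupr = evaluator.setMetricName("areaUnderPR").evaluate(prediction)
println("AUPR = " + aupr)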

Trying a Different Classification Model

We will use a Decision Tree classifier.

%scala

import org.apache.spark.ml.classification.DecisionTreeClassificationModel
import org.apache.spark.ml.classification.DecisionTreeClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

val dt = new DecisionTreeClassifier().setLabelCol("label").setFeaturesCol("features")

val model = dt.fit(training)

println("Model Trained!")
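
One advantage of a decision tree is that the fitted model is easy to inspect: you can print the learned split rules directly. A quick sketch:

%scala

// model is a DecisionTreeClassificationModel; toDebugString
// renders the tree's if/else split structure as text
println(model.toDebugString)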

Testing the Decision Tree Classification Model

%scala

val prediction = model.transform(testing)
val predicted = prediction.select("features", "prediction", "trueLabel")
predicted.show()

Output:

+--------------------+----------+---------+
|            features|prediction|trueLabel|
+--------------------+----------+---------+
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
|(22,[0,1,2,6,8,9,...|       0.0|      0.0|
+--------------------+----------+---------+      

Evaluating the Model (We got 99.8% Accuracy)

%scala

val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("trueLabel")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")
val accuracy = evaluator.evaluate(prediction)

Output:

evaluator: org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator = MulticlassClassificationEvaluator: uid=mcEval_1f7de9484e84, metricName=accuracy, metricLabel=0.0, beta=1.0, eps=1.0E-15
accuracy: Double = 0.9983518747424804
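
Accuracy alone does not show which class the model gets wrong. For a fuller picture, you can build a confusion matrix from the (prediction, label) pairs using the RDD-based MulticlassMetrics API. A minimal sketch, assuming spark.implicits._ is in scope (it is by default in Databricks notebooks):

%scala

import org.apache.spark.mllib.evaluation.MulticlassMetrics

// Collect (prediction, trueLabel) pairs as an RDD of Double tuples
val predictionAndLabels = prediction
  .select($"prediction", $"trueLabel")
  .as[(Double, Double)]
  .rdd

val metrics = new MulticlassMetrics(predictionAndLabels)
println(metrics.confusionMatrix)
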
By Bhavesh