Machine Learning Project: Predicting Whether It Will Rain Tomorrow in Australia
Problem Statement or Business Problem
In this project we will work with a dataset whose target column indicates whether it rained the next day in Australia: Yes or No. This column is Yes if the rainfall for that day was 1 mm or more. We will build a model that predicts this outcome from the available data.
Attribute Information or Dataset Details:
- Date – The date of observation
- Location – The common name of the location of the weather station
- MinTemp – The minimum temperature in degrees Celsius
- MaxTemp – The maximum temperature in degrees Celsius
- Rainfall – The amount of rainfall recorded for the day in mm
- Evaporation – The so-called Class A pan evaporation (mm) in the 24 hours to 9 am.
- Sunshine – The number of hours of bright sunshine in the day.
- WindGustDir – The direction of the strongest wind gust in the 24 hours to midnight
- WindGustSpeed – The speed (km/h) of the strongest wind gust in the 24 hours to midnight
- WindDir9am – Direction of the wind at 9 am
- WindDir3pm – Direction of the wind at 3 pm
- WindSpeed9am – Wind speed (km/h) averaged over 10 minutes prior to 9 am
- WindSpeed3pm – Wind speed (km/h) averaged over 10 minutes prior to 3 pm
- Humidity9am – Humidity (percent) at 9 am
- Humidity3pm – Humidity (percent) at 3 pm
- Pressure9am – Atmospheric pressure (hPa) reduced to mean sea level at 9 am
- Pressure3pm – Atmospheric pressure (hPa) reduced to mean sea level at 3 pm
- Cloud9am – Fraction of sky obscured by cloud at 9 am. This is measured in “oktas”, which are a unit of eighths. It records how many eighths of the sky are obscured by cloud. A 0 measure indicates a completely clear sky whilst an 8 indicates that it is completely overcast.
- Cloud3pm – Fraction of sky obscured by cloud (in “oktas”: eighths) at 3 pm. See Cloud9am for a description of the values.
- Temp9am – Temperature (degrees C) at 9am
- Temp3pm – Temperature (degrees C) at 3pm
- RainToday – Boolean: Yes if precipitation (mm) in the 24 hours to 9 am exceeds 1 mm, otherwise No
- RISK_MM – The amount of next-day rain in mm. Used to create the response variable RainTomorrow; a kind of measure of the “risk”.
- RainTomorrow – The target variable: did it rain the next day, Yes or No?
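Before defining a schema by hand, it can help to let Spark infer one and eyeball the columns against the list above. A minimal sketch, assuming the CSV has already been uploaded to /FileStore/tables/weatherAUS-1.csv (the path used later in this project); rawDF is just an illustrative name:

%scala

// Let Spark infer column types from the CSV and print them,
// to cross-check against the attribute list above.
val rawDF = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/FileStore/tables/weatherAUS-1.csv")

rawDF.printSchema()
println("Columns: " + rawDF.columns.length + ", Rows: " + rawDF.count())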
Technology Used
- Apache Spark
- Spark SQL
- Apache Spark MLlib
- Scala
- DataFrame-based API
- Databricks Notebook
Introduction
Welcome to this project on predicting whether it will rain tomorrow in Australia, using Apache Spark Machine Learning on the Databricks platform. The free Community Edition lets you execute your Spark code on Databricks servers at no cost; you only need to register with an email id.
In this project, we explore Apache Spark and Machine Learning on the Databricks platform.
I am a firm believer that the best way to learn is by doing. That’s why this tutorial contains no purely theoretical lectures: you will learn everything along the way and be able to put it into practice straight away. Seeing how each feature works will help you learn Apache Spark machine learning thoroughly.
We’re going to look at how to set up a Spark cluster and get started with it. Then we’ll see how to take data coming into that cluster, process it using a Machine Learning model, and generate output in the form of a prediction. That, in essence, is the predictive model we are going to build.
In this project, we will predict whether it will rain tomorrow in Australia.
We will learn:
- Preparing the data for processing
- The basic flow of data in Apache Spark: loading data and working with it, showing why Apache Spark is a great fit for a Machine Learning job
- The basics of Databricks notebooks, using the free Community Edition server
- Defining a Machine Learning pipeline
- Training a Machine Learning model
- Testing a Machine Learning model
- Evaluating a Machine Learning model (i.e. examining the predicted and actual values)
The goal is to provide you with practical tools that will benefit you in the future. Along the way, you’ll develop a model with a real-world use case.
I am really excited you are here, and I hope you follow all the way to the end of the project. It is fairly straightforward and easy to follow: throughout the article we will walk through each line of code step by step and explain what it does and why we are doing it.
Free Account creation in Databricks
Creating a Spark Cluster
Basics about Databricks notebook
Loading Data into Databricks Environment
Download Data
Load Data into a DataFrame using a User-Defined Schema
%scala

import org.apache.spark.sql.Encoders
import java.sql.Date

case class Aus(
  Dates: String, Location: String, MinTemp: Double, MaxTemp: Double,
  Rainfall: Double, Evaporation: String, Sunshine: String, WindGustDir: String,
  WindGustSpeed: Integer, WindDir9am: String, WindDir3pm: String,
  WindSpeed9am: Integer, WindSpeed3pm: Integer, Humidity9am: Integer,
  Humidity3pm: Integer, Pressure9am: Double, Pressure3pm: Double,
  Cloud9am: Integer, Cloud3pm: Integer, Temp9am: Double, Temp3pm: Double,
  RainToday: String, RISK_MM: Double, RainTomorrow: String)

val AusSchema = Encoders.product[Aus].schema

val AusDF = spark.read
  .schema(AusSchema)
  .option("header", "true")
  .csv("/FileStore/tables/weatherAUS-1.csv")
  .na.fill(0)

AusDF.show()

Output (first rows shown; middle columns elided here to fit the page):

+----------+--------+-------+-------+--------+---+---------+-------+------------+
|     Dates|Location|MinTemp|MaxTemp|Rainfall|...|RainToday|RISK_MM|RainTomorrow|
+----------+--------+-------+-------+--------+---+---------+-------+------------+
|2008-12-01|  Albury|   13.4|   22.9|     0.6|...|       No|    0.0|          No|
|2008-12-02|  Albury|    7.4|   25.1|     0.0|...|       No|    0.0|          No|
|2008-12-03|  Albury|   12.9|   25.7|     0.0|...|       No|    0.0|          No|
|2008-12-04|  Albury|    9.2|   28.0|     0.0|...|       No|    1.0|          No|
|2008-12-05|  Albury|   17.5|   32.3|     1.0|...|       No|    0.2|          No|
+----------+--------+-------+-------+--------+---+---------+-------+------------+
only showing top 20 rows
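With the data loaded, a couple of quick checks confirm the load worked as expected. A minimal sketch using the standard DataFrame API:

%scala

// How much data did we load?
println("Rows: " + AusDF.count())

// How balanced is the target? (Yes/No counts for RainTomorrow)
AusDF.groupBy("RainTomorrow").count().show()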
Collecting all String Columns into an Array
%scala

val StringfeatureCol = Array("Dates", "Location", "Evaporation", "Sunshine",
  "WindGustDir", "WindDir9am", "WindDir3pm", "RainToday", "RainTomorrow")
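As an alternative to hard-coding the list, you can derive it from the DataFrame schema. A sketch; it picks up every string-typed column, so prune any you do not want indexed:

%scala

import org.apache.spark.sql.types.StringType

// Collect the names of all string-typed columns from the schema.
val stringCols = AusDF.schema.fields
  .filter(_.dataType == StringType)
  .map(_.name)

println(stringCols.mkString(", "))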
StringIndexer encodes a string column of labels to a column of label indices.
Example of StringIndexer
import org.apache.spark.ml.feature.StringIndexer

val df = spark.createDataFrame(
  Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c"))
).toDF("id", "category")

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")

val indexed = indexer.fit(df).transform(df)
indexed.show()

Output:

+---+--------+-------------+
| id|category|categoryIndex|
+---+--------+-------------+
|  0|       a|          0.0|
|  1|       b|          2.0|
|  2|       c|          1.0|
|  3|       a|          0.0|
|  4|       a|          0.0|
|  5|       c|          1.0|
+---+--------+-------------+
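StringIndexer has an inverse, IndexToString, which maps indices back to the original label strings (the pipeline code below imports it). Applied to the indexed DataFrame from the example above:

import org.apache.spark.ml.feature.IndexToString

// IndexToString reads the label metadata that StringIndexer attached
// to categoryIndex and recovers the original strings.
val converter = new IndexToString()
  .setInputCol("categoryIndex")
  .setOutputCol("originalCategory")

converter.transform(indexed)
  .select("id", "categoryIndex", "originalCategory")
  .show()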
Define the Pipeline
A predictive model often requires multiple stages of feature preparation.
A pipeline consists of a series of transformer and estimator stages that typically prepare a DataFrame for modeling and then train a predictive model.
In this case, you will create a pipeline with the following kinds of stages:
- A StringIndexer estimator for each categorical column, converting string values to numeric indexes
- A VectorAssembler that combines the feature columns into a single vector (applied as a separate step later, after the data is split into training and testing sets)
%scala

import org.apache.spark.ml.attribute.Attribute
import org.apache.spark.ml.feature.{IndexToString, StringIndexer}
import org.apache.spark.ml.{Pipeline, PipelineModel}

val indexers = StringfeatureCol.map { colName =>
  new StringIndexer()
    .setInputCol(colName)
    .setHandleInvalid("skip")
    .setOutputCol(colName + "_indexed")
}

val pipeline = new Pipeline().setStages(indexers)

val AusFinalDF = pipeline.fit(AusDF).transform(AusDF)
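To see exactly which string maps to which index, you can inspect the fitted StringIndexerModel stages. A sketch, assuming Spark 3.x (where labelsArray replaces the deprecated labels):

%scala

import org.apache.spark.ml.feature.StringIndexerModel

// Fit the pipeline once and look inside each indexer stage;
// index 0 corresponds to the most frequent label by default.
val fitted = pipeline.fit(AusDF)
fitted.stages.foreach {
  case m: StringIndexerModel =>
    println(m.getInputCol + ": " + m.labelsArray.head.take(5).mkString(", "))
  case _ =>
}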
Print the schema to verify that each string column now has an equivalent numerical (indexed) column
%scala

AusFinalDF.printSchema()

Output:

root
 |-- Dates: string (nullable = true)
 |-- Location: string (nullable = true)
 |-- MinTemp: double (nullable = false)
 |-- MaxTemp: double (nullable = false)
 |-- Rainfall: double (nullable = false)
 |-- Evaporation: string (nullable = true)
 |-- Sunshine: string (nullable = true)
 |-- WindGustDir: string (nullable = true)
 |-- WindGustSpeed: integer (nullable = false)
 |-- WindDir9am: string (nullable = true)
 |-- WindDir3pm: string (nullable = true)
 |-- WindSpeed9am: integer (nullable = false)
 |-- WindSpeed3pm: integer (nullable = false)
 |-- Humidity9am: integer (nullable = false)
 |-- Humidity3pm: integer (nullable = false)
 |-- Pressure9am: double (nullable = false)
 |-- Pressure3pm: double (nullable = false)
 |-- Cloud9am: integer (nullable = false)
 |-- Cloud3pm: integer (nullable = false)
 |-- Temp9am: double (nullable = false)
 |-- Temp3pm: double (nullable = false)
 |-- RainToday: string (nullable = true)
 |-- RISK_MM: double (nullable = false)
 |-- RainTomorrow: string (nullable = true)
 |-- Dates_indexed: double (nullable = false)
 |-- Location_indexed: double (nullable = false)
 |-- Evaporation_indexed: double (nullable = false)
 |-- Sunshine_indexed: double (nullable = false)
 |-- WindGustDir_indexed: double (nullable = false)
 |-- WindDir9am_indexed: double (nullable = false)
 |-- WindDir3pm_indexed: double (nullable = false)
 |-- RainToday_indexed: double (nullable = false)
 |-- RainTomorrow_indexed: double (nullable = false)
Display Data
%scala

AusFinalDF.show()

Output (first rows shown; the original feature columns are elided here so the new indexed columns are visible):

+----------+--------+---+-------+------------+-------------+----------------+-------------------+----------------+-------------------+------------------+------------------+-----------------+--------------------+
|     Dates|Location|...|RISK_MM|RainTomorrow|Dates_indexed|Location_indexed|Evaporation_indexed|Sunshine_indexed|WindGustDir_indexed|WindDir9am_indexed|WindDir3pm_indexed|RainToday_indexed|RainTomorrow_indexed|
+----------+--------+---+-------+------------+-------------+----------------+-------------------+----------------+-------------------+------------------+------------------+-----------------+--------------------+
|2008-12-01|  Albury|...|    0.0|          No|       3035.0|            14.0|                0.0|             0.0|                0.0|               7.0|               7.0|              0.0|                 0.0|
|2008-12-02|  Albury|...|    0.0|          No|       3036.0|            14.0|                0.0|             0.0|               10.0|              10.0|               3.0|              0.0|                 0.0|
|2008-12-03|  Albury|...|    0.0|          No|       3009.0|            14.0|                0.0|             0.0|                7.0|               7.0|               3.0|              0.0|                 0.0|
|2008-12-09|  Albury|...|    1.4|         Yes|       3037.0|            14.0|                0.0|             0.0|               15.0|               2.0|               8.0|              0.0|                 1.0|
+----------+--------+---+-------+------------+-------------+----------------+-------------------+----------------+-------------------+------------------+------------------+-----------------+--------------------+
only showing top 20 rows
Split the Data
It is common practice when building machine learning models to split the source data, using some of it to train the model and reserving some to test the trained model. In this project, you will use 70% of the data for training, and reserve 30% for testing.
%scala

val splits = AusFinalDF.randomSplit(Array(0.7, 0.3))
val train = splits(0)
val test = splits(1)
val train_rows = train.count()
val test_rows = test.count()
println("Training Rows: " + train_rows + " Testing Rows: " + test_rows)
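Note that randomSplit is random: each run produces a slightly different split. If you want reproducible results, you can pass an explicit seed, as in this small variant (42 is an arbitrary choice, and trainFixed/testFixed are illustrative names):

%scala

// Same 70/30 split, but deterministic across runs.
val Array(trainFixed, testFixed) = AusFinalDF.randomSplit(Array(0.7, 0.3), seed = 42)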
Prepare the Training Data
To train the classification model, you need a training dataset that includes a vector of numeric features and a label column. In this project, you will use the VectorAssembler class to transform the feature columns into a vector, and then rename the RainTomorrow_indexed column to label.
VectorAssembler()
VectorAssembler is a transformer that combines a given list of columns into a single vector column. It is useful for combining raw features and features generated by different feature transformers into a single feature vector, in order to train ML models like logistic regression.
VectorAssembler accepts the following input column types: all numeric types, boolean type, and vector type.
In each row, the values of the input columns will be concatenated into a vector in the specified order.
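As a small standalone illustration (mirroring the StringIndexer example earlier), here is a sketch that assembles two numeric columns and a vector column into one feature vector; the column names are invented for the example:

%scala

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors

val demoDF = spark.createDataFrame(
  Seq((0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0))
).toDF("id", "hour", "mobile", "userFeatures", "clicked")

val demoAssembler = new VectorAssembler()
  .setInputCols(Array("hour", "mobile", "userFeatures"))
  .setOutputCol("features")

// features becomes [18.0, 1.0, 0.0, 10.0, 0.5]
demoAssembler.transform(demoDF).select("features", "clicked").show(false)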
%scala

import org.apache.spark.ml.feature.VectorAssembler

val assembler = new VectorAssembler()
  .setInputCols(Array("Dates_indexed", "Location_indexed", "MinTemp", "MaxTemp",
    "Rainfall", "Evaporation_indexed", "Sunshine_indexed", "WindGustDir_indexed",
    "WindGustSpeed", "WindDir9am_indexed", "WindDir3pm_indexed", "WindSpeed9am",
    "WindSpeed3pm", "Humidity9am", "Humidity3pm", "Pressure9am", "Pressure3pm",
    "Cloud9am", "Cloud3pm", "Temp9am", "Temp3pm", "RainToday_indexed", "RISK_MM"))
  .setOutputCol("features")

val training = assembler.transform(train)
  .select($"features", $"RainTomorrow_indexed".alias("label"))

training.show(false)

Output (first rows):

+------------------------------------------------------------------------------------------------------------------+-----+
|features                                                                                                            |label|
+------------------------------------------------------------------------------------------------------------------+-----+
|[3193.0,1.0,19.5,22.4,15.6,28.0,1.0,1.0,0.0,6.0,12.0,17.0,20.0,92.0,84.0,1017.6,1017.4,8.0,8.0,20.7,20.9,1.0,6.0]  |1.0  |
|[3194.0,1.0,19.5,25.6,6.0,9.0,104.0,1.0,0.0,7.0,10.0,9.0,13.0,83.0,73.0,1017.9,1016.4,7.0,7.0,22.4,24.8,1.0,6.6]   |1.0  |
|[3195.0,1.0,21.6,24.5,6.6,6.0,55.0,1.0,0.0,12.0,9.0,17.0,2.0,88.0,86.0,1016.7,1015.6,7.0,8.0,23.5,23.0,1.0,18.8]   |1.0  |
+------------------------------------------------------------------------------------------------------------------+-----+
only showing top 20 rows

Note: RISK_MM records the amount of next-day rain, so it directly encodes the answer we are trying to predict. It is kept in the feature list here to follow the original notebook, but in a real application you would drop it to avoid target leakage.
Train a Classification Model
Next, you need to train a classification model using the training data. To do this, create an instance of the algorithm you want to use and call its fit method on the training DataFrame. In this project, you will use a Logistic Regression classifier, though you can use the same technique for any of the classification algorithms supported in the spark.ml API.
%scala

import org.apache.spark.ml.classification.LogisticRegression

val lr = new LogisticRegression()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setMaxIter(10)
  .setRegParam(0.3)

val model = lr.fit(training)
println("Model trained!")
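Once trained, the model object exposes its fitted parameters, which gives a quick way to sanity-check it:

%scala

// One weight per entry in the features vector, plus an intercept.
println("Coefficients: " + model.coefficients)
println("Intercept: " + model.intercept)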
Prepare the Testing Data
Now that you have a trained model, you can test it using the testing data you reserved earlier. First, prepare the testing data the same way as the training data, by transforming the feature columns into a vector. This time you’ll rename the RainTomorrow_indexed column to trueLabel.
%scala

val testing = assembler.transform(test)
  .select($"features", $"RainTomorrow_indexed".alias("trueLabel"))

testing.show(false)

Output (first rows):

+------------------------------------------------------------------------------------------------------------------+---------+
|features                                                                                                            |trueLabel|
+------------------------------------------------------------------------------------------------------------------+---------+
|[3196.0,1.0,20.2,22.8,18.8,3.0,1.0,1.0,0.0,9.0,10.0,22.0,20.0,83.0,90.0,1014.2,1011.8,8.0,8.0,21.4,20.9,1.0,77.4]  |1.0      |
|[3202.0,1.0,14.6,24.2,8.8,21.0,14.0,1.0,0.0,7.0,5.0,11.0,20.0,80.0,53.0,1014.0,1013.4,4.0,2.0,17.2,23.3,1.0,0.0]   |0.0      |
|[3203.0,1.0,16.4,23.9,0.0,30.0,32.0,1.0,0.0,15.0,10.0,9.0,26.0,78.0,53.0,1017.6,1015.3,7.0,8.0,18.9,23.7,0.0,0.0]  |0.0      |
+------------------------------------------------------------------------------------------------------------------+---------+
only showing top 20 rows
Test the Model
Now you’re ready to use the transform method of the model to generate predictions. You could use this approach to predict RainTomorrow for new data; in this case, though, you are using the test data, which includes a known true label, so you can compare the predicted value with the actual one.
%scala

val prediction = model.transform(testing)
val predicted = prediction.select("features", "prediction", "trueLabel")
predicted.show(100)

Output (first rows; remaining rows omitted here for brevity):

+--------------------+----------+---------+
|            features|prediction|trueLabel|
+--------------------+----------+---------+
|[3196.0,1.0,20.2,...|       1.0|      1.0|
|[3202.0,1.0,14.6,...|       0.0|      0.0|
|[3203.0,1.0,16.4,...|       0.0|      0.0|
|[3204.0,1.0,18.9,...|       0.0|      1.0|
|[3205.0,1.0,18.4,...|       0.0|      1.0|
|[3208.0,1.0,16.7,...|       0.0|      0.0|
|[3210.0,1.0,18.6,...|       0.0|      0.0|
|[3211.0,1.0,19.0,...|       0.0|      0.0|
|[3212.0,1.0,18.3,...|       0.0|      0.0|
|[3216.0,1.0,16.5,...|       0.0|      0.0|
|[3217.0,1.0,20.5,...|       0.0|      0.0|
|[3220.0,1.0,17.9,...|       1.0|      1.0|
|[3222.0,1.0,12.8,...|       0.0|      1.0|
+--------------------+----------+---------+

Most predictions match the true label; the rows where they differ are the model’s misclassifications.
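Before computing a formal metric, a quick cross-tabulation of trueLabel against prediction gives a simple confusion matrix (counts of true/false positives and negatives). A minimal sketch using the DataFrame API:

%scala

// Rows where prediction == trueLabel are correct;
// the other label/prediction combinations are the model's errors.
predicted.groupBy("trueLabel", "prediction").count().show()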
Evaluating the Model (AUC ≈ 0.88)
The metric reported below is the area under the ROC curve (AUC), which measures how well the model ranks rainy days above dry ones. Note that AUC is not the same thing as accuracy.
%scala

import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

val evaluator = new BinaryClassificationEvaluator()
  .setLabelCol("trueLabel")
  .setRawPredictionCol("rawPrediction")
  .setMetricName("areaUnderROC")

val auc = evaluator.evaluate(prediction)
println("AUC = " + auc)

Output:

AUC = 0.8799730748884093
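If you also want plain accuracy, the fraction of predictions that match the true label, a MulticlassClassificationEvaluator can compute it from the same prediction DataFrame. A minimal sketch:

%scala

import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

// Accuracy = share of rows where prediction == trueLabel.
val accEvaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("trueLabel")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")

val accuracy = accEvaluator.evaluate(prediction)
println("Accuracy = " + accuracy)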