Machine Learning Project: Predicting the age of abalone from physical measurements.
Abalone is a common name for any of a group of small to very large sea snails, marine gastropod molluscs in the family Haliotidae. Other common names are ear shells, sea ears, muttonfish or muttonshells in Australia, ormer in the UK, perlemoen in South Africa, and paua in New Zealand.
Problem Statement or Business Problem
Predict the age of abalone from physical measurements.
The age of an abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings under a microscope, a tedious and time-consuming task. Other measurements, which are easier to obtain, are used to predict the age.
Below are the attribute name, attribute type, measurement unit, and a brief description for each column. The number of rings is the value to predict, either as a continuous value or as a classification problem.
Attribute Information or Dataset Details:
Name / Data Type / Measurement Unit / Description
- Sex / nominal / — / M, F, and I (infant)
- Length / continuous / mm / Longest shell measurement
- Diameter / continuous / mm / perpendicular to length
- Height / continuous / mm / with meat in shell
- Whole weight / continuous / grams / whole abalone
- Shucked weight / continuous / grams / weight of meat
- Viscera weight / continuous / grams / gut weight (after bleeding)
- Shell weight / continuous / grams / after being dried
- Rings / integer / — / +1.5 gives the age in years
Technology Used
- Apache Spark
- Spark SQL
- Apache Spark MLlib
- Scala
- DataFrame-based API
- Databricks Notebook
Challenges
- Processing a .data file (i.e. a file with the .data extension) with a user-defined schema.
- Converting string data to numeric format so we can process the data with the Apache Spark MLlib library.
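One common way to handle the second challenge is MLlib's StringIndexer, which maps each distinct string label to a numeric index (most frequent label first). A minimal sketch, using a tiny stand-in DataFrame rather than the real abalone data; on Databricks the `spark` session is already provided, so the builder lines are only needed outside a notebook:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.StringIndexer

object SexIndexerSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; in a Databricks notebook, `spark` already exists.
    val spark = SparkSession.builder.master("local[*]").appName("SexIndexerSketch").getOrCreate()
    import spark.implicits._

    // Tiny stand-in for the abalone data: only the nominal Sex column matters here.
    val df = Seq("M", "F", "I", "M").toDF("Sex")

    // StringIndexer assigns indices by descending frequency, so the most
    // common label ("M" here) becomes 0.0.
    val indexer = new StringIndexer().setInputCol("Sex").setOutputCol("SexIndex")
    val indexed = indexer.fit(df).transform(df)
    indexed.show()

    spark.stop()
  }
}
```

The same transformer is applied to the real Sex column once the abalone DataFrame is loaded.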
Introduction
Welcome to this project on predicting the age of abalone from physical measurements with Apache Spark Machine Learning, using the Databricks Community Edition platform, which lets you execute your Spark code free of cost on their servers just by registering with an email id.
In this project we explore Apache Spark and Machine Learning on the Databricks platform.
I am a firm believer that the best way to learn is by doing. That’s why I haven’t included any purely theoretical lectures in this tutorial: you will learn everything along the way and be able to put it into practice straight away. Seeing the way each feature works will help you learn Apache Spark machine learning thoroughly.
We’re going to look at how to set up a Spark cluster and get started with it. Then we’ll see how to take data coming into that cluster, process it with a Machine Learning model, and generate output in the form of a prediction. That’s essentially what we’re going to learn about building a predictive model.
In this project we will predict the age of abalone from physical measurements using the Linear Regression algorithm.
We will learn:
- Preparing the Data for Processing
- The basic flow of data in Apache Spark: loading data and working with it, showing why Apache Spark is a good fit for Machine Learning jobs
- The basics of Databricks notebooks, using the free Community Edition server
- Defining the Machine Learning Pipeline
- Training a Machine Learning Model
- Testing a Machine Learning Model
- Evaluating a Machine Learning Model (i.e. examining the predicted and actual values)
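The steps above can be sketched as a single MLlib pipeline. This is only a preview under assumptions: it uses the column names defined later in this project, assumes the `AbaloneageDF` DataFrame (built below) is in scope, and the 70/30 split ratio and seed are arbitrary choices:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.evaluation.RegressionEvaluator

// 1. Encode the nominal Sex column, 2. assemble the numeric features into
// a single vector, 3. fit a Linear Regression model against the Age label.
val sexIndexer = new StringIndexer().setInputCol("Sex").setOutputCol("SexIndex")
val assembler = new VectorAssembler()
  .setInputCols(Array("SexIndex", "Length_in_mm", "Diameter_in_mm", "Height_in_mm",
    "Whole_in_gm", "Shucked_weight_in_gm", "Viscera_weight_in_gm", "Shell_weight_in_gm"))
  .setOutputCol("features")
val lr = new LinearRegression().setLabelCol("Age").setFeaturesCol("features")
val pipeline = new Pipeline().setStages(Array(sexIndexer, assembler, lr))

// Train on 70% of the rows, then evaluate RMSE on the held-out 30%.
val Array(train, test) = AbaloneageDF.randomSplit(Array(0.7, 0.3), seed = 42)
val model = pipeline.fit(train)
val predictions = model.transform(test)
val rmse = new RegressionEvaluator()
  .setLabelCol("Age").setPredictionCol("prediction").setMetricName("rmse")
  .evaluate(predictions)
println(s"RMSE on test data: $rmse")
```

Each of these stages is developed step by step in the sections that follow.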
The goal is to provide you with practical tools that will be beneficial for you in the future. While doing that, you’ll develop a model with a real-world use case.
I am really excited you are here, and I hope you follow along all the way to the end of the project. It is fairly straightforward and easy to follow: throughout the article we will show you each line of code step by step, and we will explain what it does and why we are doing it.
Free Account creation in Databricks
Creating a Spark Cluster
Basics about Databricks notebook
Loading Data into Databricks Environment
Download Data
Defining a User-Defined Schema to Load Data into a DataFrame
%scala

import org.apache.spark.sql.Encoders

case class Abalone(
  Sex: String,
  Length_in_mm: Double,
  Diameter_in_mm: Double,
  Height_in_mm: Double,
  Whole_in_gm: Double,
  Shucked_weight_in_gm: Double,
  Viscera_weight_in_gm: Double,
  Shell_weight_in_gm: Double,
  Rings: Double
)

val AbaloneSchema = Encoders.product[Abalone].schema

val AbaloneDF = spark.read
  .schema(AbaloneSchema)
  .option("header", "false")
  .csv("/FileStore/tables/abalone.data")

AbaloneDF.show()
Adding a New Column Age to the DataFrame (Age = Rings + 1.5)
%scala

val AbaloneageDF = AbaloneDF.withColumn("Age", AbaloneDF("Rings") + 1.5)
AbaloneageDF.show()
Counting the Number of Records in the DataFrame
%scala

AbaloneageDF.count()
Printing the Schema of the DataFrame
%scala

AbaloneageDF.printSchema()

Output:

root
 |-- Sex: string (nullable = true)
 |-- Length_in_mm: double (nullable = true)
 |-- Diameter_in_mm: double (nullable = true)
 |-- Height_in_mm: double (nullable = true)
 |-- Whole_in_gm: double (nullable = true)
 |-- Shucked_weight_in_gm: double (nullable = true)
 |-- Viscera_weight_in_gm: double (nullable = true)
 |-- Shell_weight_in_gm: double (nullable = true)
 |-- Rings: double (nullable = true)
 |-- Age: double (nullable = true)
Getting Statistics of the Data
%scala

AbaloneageDF.select("Sex", "Length_in_mm", "Diameter_in_mm", "Height_in_mm",
  "Whole_in_gm", "Shucked_weight_in_gm", "Viscera_weight_in_gm",
  "Shell_weight_in_gm", "Rings", "Age").describe().show()

Output:

+-------+----+-------------------+-------------------+-------------------+-------------------+--------------------+--------------------+-------------------+------------------+------------------+
|summary| Sex| Length_in_mm| Diameter_in_mm| Height_in_mm| Whole_in_gm|Shucked_weight_in_gm|Viscera_weight_in_gm| Shell_weight_in_gm| Rings| Age|
+-------+----+-------------------+-------------------+-------------------+-------------------+--------------------+--------------------+-------------------+------------------+------------------+
| count|4177| 4177| 4177| 4177| 4177| 4177| 4177| 4177| 4177| 4177|
| mean|null| 0.5239920995930099| 0.407881254488869| 0.1395163993296614| 0.82874215944458| 0.35936748862820106| 0.18059360785252604|0.23883085946851795| 9.933684462532918|11.433684462532918|
| stddev|null|0.12009291256479936|0.09923986613365941|0.04182705660725731|0.49038901823099795| 0.22196294903322014| 0.10961425025968445|0.13920266952238622|3.2241690320681315|3.2241690320681315|
| min| F| 0.075| 0.055| 0.0| 0.002| 0.001| 5.0E-4| 0.0015| 1.0| 2.5|
| max| M| 0.815| 0.65| 1.13| 2.8255| 1.488| 0.76| 1.005| 29.0| 30.5|
+-------+----+-------------------+-------------------+-------------------+-------------------+--------------------+--------------------+-------------------+------------------+------------------+
Creating a Temporary View in Spark so that we can use Spark SQL to Analyze the Data
%scala

AbaloneageDF.createOrReplaceTempView("AbaloneData")
Exploratory Data Analysis or EDA
One Visualization to Rule Them All
%sql

select Sex, Length_in_mm, Diameter_in_mm, Height_in_mm, Whole_in_gm,
       Shucked_weight_in_gm, Viscera_weight_in_gm, Shell_weight_in_gm,
       Rings, Age
from AbaloneData;
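With the temporary view in place, any Spark SQL query can be run against it from Scala as well. A hypothetical follow-up aggregation (not part of the original analysis), grouping the abalone by Sex and comparing group sizes and average age:

```scala
// Illustrative aggregation over the AbaloneData temporary view created above;
// assumes the view has been registered in the current Spark session.
val bySexDF = spark.sql("""
  SELECT Sex,
         COUNT(*)           AS abalone_count,
         ROUND(AVG(Age), 2) AS avg_age
  FROM AbaloneData
  GROUP BY Sex
  ORDER BY Sex
""")
bySexDF.show()
```

Summaries like this make it easy to spot, for example, whether infants (I) are systematically younger than the M and F groups before any model is fit.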