Updated On: Jul-25-2022 | Time Investment: ~25 mins

Keras: Guide to Create Simple Neural Networks in Python

Traditional ML models (linear regression, logistic regression, decision trees, random forests, gradient boosting machines, etc.) are quite good at solving tasks involving structured (tabular) datasets. Over time, these models were also tried on unstructured datasets (text, image, audio, etc.), but the results were not that impressive.

Research has shown that deep neural networks consisting of many layers of different types (dense, convolution, recurrent, etc.) are quite good at handling unstructured datasets.

This gave rise to the whole new field of deep learning, which primarily concentrates on designing deep neural networks to solve such tasks.

Over the years, many Python libraries (Keras, PyTorch, Tensorflow, Theano, MXNet, JAX, Haiku, Flax, etc.) have been developed for building neural networks. All of these libraries offer simple APIs to create and train neural networks faster.

What can you learn from this article?

As a part of this tutorial, we have explained how to create neural networks using the Python library Keras, which is now available through the Python library Tensorflow. The tutorial explains how we can create simple neural networks using the Sequential and Functional APIs of Keras. We have created neural networks to solve classification and regression tasks on toy datasets available from scikit-learn.

The whole flow of an ML task is covered: downloading the dataset, creating the model, training the model, and evaluating performance (by calculating ML metrics).

This tutorial is a very good starting point for someone who is new to the Keras library. It'll make you aware of the basic API within an hour.

Below, we have listed important sections of the Tutorial to give an overview of the material covered.

Important Sections Of Tutorial

  1. Regression
    • 1.1 Load Dataset
    • 1.2 Normalize Data
    • 1.3 Create Neural Network Regressor using "Sequential()" or "Model()"
      • Sequential API
      • Functional API
    • 1.4 Compile Neural Network using "compile()"
    • 1.5 Train Neural Network using "fit()"
    • 1.6 Make Predictions using "predict()"
    • 1.7 Evaluate Performance using "evaluate()"
  2. Classification
    • Same sub-sections as regression section

Below, we have imported the necessary Python libraries that we have used in our tutorial and printed the versions as well.

import tensorflow as tf
from tensorflow import keras

print("Tensorflow Version : {}".format(tf.__version__))
print("Keras Version : {}".format(keras.__version__))
Tensorflow Version : 2.7.0
Keras Version : 2.7.0

1. Regression

In this section, we have explained how we can create simple neural networks using Keras to solve regression tasks. We have used the Boston housing dataset available from scikit-learn for this purpose. We'll use a neural network to predict house prices from the other features of the houses.

1.1 Load Dataset

Here, we have loaded the Boston housing dataset available from scikit-learn. The dataset has 13 features of houses like number of bedrooms, crime rate in the area, etc. The target is the median house value in $1000s. The dataset is loaded using the load_boston() method of the datasets module. (Note: load_boston() was deprecated in scikit-learn 1.0 and removed in 1.2, so an older scikit-learn version is needed to run this cell as-is.)

After loading the dataset, it is divided into train (80%) and test (20%) sets using the train_test_split() function of scikit-learn.

from sklearn import datasets
from sklearn.model_selection import train_test_split

X, Y = datasets.load_boston(return_X_y=True)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.8, random_state=123)

samples, features = X_train.shape

X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
((404, 13), (102, 13), (404,), (102,))
samples, features
(404, 13)

1.2 Normalize Data

Here, we have normalized our train and test datasets to bring all columns of data into almost the same range. When we normalize data, there is less variance in column values. This helps the optimization algorithm converge faster, as a large difference in the scales of columns can give the optimization algorithm a hard time and can sometimes even prevent it from converging.

To normalize the data, we first calculated the mean and standard deviation of each column of the train data. Then, we subtracted the mean from both datasets and divided the result by the standard deviation. This will help us get better results faster.

mean = X_train.mean(axis=0)
std = X_train.std(axis=0)

X_train = (X_train - mean)/ std
X_test = (X_test - mean)/ std

1.3 Create Neural Network Regressor using "Sequential()" or "Model()"

Here, we have explained different ways of creating a neural network using Keras. Currently, Keras provides two different APIs for creating neural networks.

  1. Sequential API - Here, we first create an instance of the Sequential() object and add layers (dense, convolution, max pooling, etc.) to it one by one. The network will process data through the layers in the order they were added. It won't let us create a complicated structure where we want something other than simple sequential execution (e.g., taking the outputs of a few previous layers and adding them).
  2. Functional API - Here, we work with layers as if they were functions. We initialize layers and then process data by calling the layer instance on it. This way of creating networks gives us flexibility, as it opens up a host of different architectures that we can't build with the Sequential API. Keras provides the Model() class for the Functional API.

In the majority of cases, the Sequential API will be able to do the task, and you'll rarely need the Functional API. Very complex architectures (like ResNet, MobileNet, etc.) working with unstructured data are generally designed using the Functional API, as shown in the sketch below.
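For instance, a skip connection, where the outputs of two earlier layers are added together, is easy to express with the Functional API but impossible with the Sequential API. Below is a minimal hypothetical sketch; the input shape and layer sizes are arbitrary, chosen only for illustration.

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(13,))
x = layers.Dense(16, activation="relu")(inputs)
y = layers.Dense(16, activation="relu")(x)
added = layers.Add()([x, y]) ## add outputs of two earlier layers
outputs = layers.Dense(1)(added)

model = keras.Model(inputs=inputs, outputs=outputs)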

1.3.1 Sequential API

In this section, we have created our first neural network using Sequential API of Keras.

The network consists of 4 dense layers with 5, 10, 15, and 1 output units respectively. The first layer's input_shape parameter is given a tuple specifying the shape of the input data. The later layers will figure out their input shapes by themselves. The first three layers apply relu (Rectified Linear Unit) activation to the output of the layer. The relu function (relu(x) = max(0, x)) simply replaces negative values with 0s in the processed data.

To create the network, we have initialized an instance of Sequential from the 'models' sub-module of Keras. Various layers are available from the 'layers' sub-module. We have used the Dense layer for our task.

When initializing the Sequential object, we have given it a list of layers. The layers are created using the Dense() constructor. Its first argument is the number of output units of that layer. The constructor has other parameters like activation, use_bias, kernel_initializer, activity_regularizer, etc.
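As an illustration, here is a minimal sketch of a Dense() call with a few of these arguments spelled out; the use_bias and kernel_initializer values shown are the Keras defaults (activation itself defaults to None).

from tensorflow.keras import layers

layer = layers.Dense(
    10,                                   ## number of output units
    activation="relu",                    ## activation applied to the output (defaults to None)
    use_bias=True,                        ## whether to add a bias term (default)
    kernel_initializer="glorot_uniform",  ## how to initialize weights (default)
)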

The first layer will take input data with shape (batch_size, no_of_features) and transform it to shape (batch_size, 5) after processing. The second layer will transform the output of the first layer to shape (batch_size, 10). The third layer will transform the output of the second layer to shape (batch_size, 15). The fourth and last layer will transform the data to shape (batch_size, 1), which is the output of our network. The output of the fourth layer is the prediction of our network.

After initializing the network, we have printed a summary of the output shapes and parameter counts of individual layers. Then, in the next cell, we have also visualized the model using the Keras visualization util.

from tensorflow.keras import models
from tensorflow.keras import layers


regressor = models.Sequential(
                                [
                                    layers.Dense(5, input_shape=(features,), activation="relu"), ## First Hidden Layer
                                    layers.Dense(10, activation="relu"), ## Second Hidden Layer
                                    layers.Dense(15, activation="relu"), ## Third Hidden Layer
                                    layers.Dense(1),
                                ]
                            )

regressor.summary()
Model: "sequential_17"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense_77 (Dense)            (None, 5)                 70

 dense_78 (Dense)            (None, 10)                60

 dense_79 (Dense)            (None, 15)                165

 dense_80 (Dense)            (None, 1)                 16

=================================================================
Total params: 311
Trainable params: 311
Non-trainable params: 0
_________________________________________________________________
keras.utils.plot_model(regressor, to_file="regressor.png",
                       show_shapes=True,
                       show_dtype=True,
                       show_layer_activations=True,
                       show_layer_names=True)

[plot_model() visualization of the regressor network]

1.3.2 Sequential API (add() Method)

Here, we have explained one more way of creating a neural network using the Sequential API. The network is almost the same as earlier, the only difference being that we have initialized the Sequential instance first and then added layers to it one by one using the add() method. This creates the same network as giving the layers as a list to the constructor.

from tensorflow.keras import models
from tensorflow.keras import layers


regressor2 = models.Sequential()

regressor2.add(layers.Dense(5, input_shape=(features,), activation="relu"))
regressor2.add(layers.Dense(10, activation="relu"))
regressor2.add(layers.Dense(15, activation="relu"))
regressor2.add(layers.Dense(1))

regressor2.summary()
Model: "sequential_18"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense_81 (Dense)            (None, 5)                 70

 dense_82 (Dense)            (None, 10)                60

 dense_83 (Dense)            (None, 15)                165

 dense_84 (Dense)            (None, 1)                 16

=================================================================
Total params: 311
Trainable params: 311
Non-trainable params: 0
_________________________________________________________________

1.3.3 Functional API

Here, we have created the same neural network as in previous sections, this time using the Functional API.

First, we have created instances of the dense layers, which we can call later. The layers are defined exactly like in previous sections but are now stored in independent variables.

Then, we created an input layer using the Input() constructor. This layer is just a placeholder declaring the shape of the input data. After defining the input layer, we have called the first layer with the input layer. The output is stored in a variable, which is given to the second layer for processing, and so on. The output of the last layer is the prediction of the network. Because we call each layer like a function to process data, this API is referred to as the Functional API.

To create a model instance, we need to initialize a Model object with inputs and outputs. In our case, the input is the input layer, and the output is the output of the last layer. When creating a network like this, we can also check the shapes of intermediate outputs for verification, to better understand how the layers process data (see the shape comments in the code below).

After defining the network, we have also printed a summary of shapes and parameter counts of layers using the summary() function.

from tensorflow.keras import models
from tensorflow.keras import layers

layer1 = layers.Dense(5, activation="relu") ## First Hidden Layer
layer2 = layers.Dense(10, activation="relu") ## Second Hidden Layer
layer3 = layers.Dense(15, activation="relu") ## Third Hidden Layer
final_layer = layers.Dense(1) ## Final Layer

inputs = keras.Input(shape=(features, )) ## Input layer with Input data shape

x = layer1(inputs)       ## shape: (None, 5)
x = layer2(x)            ## shape: (None, 10)
x = layer3(x)            ## shape: (None, 15)
outputs = final_layer(x) ## shape: (None, 1)

regressor3 = models.Model(inputs=inputs, outputs=outputs)

regressor3.summary()
Model: "model_4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_4 (InputLayer)        [(None, 13)]              0

 dense_85 (Dense)            (None, 5)                 70

 dense_86 (Dense)            (None, 10)                60

 dense_87 (Dense)            (None, 15)                165

 dense_88 (Dense)            (None, 1)                 16

=================================================================
Total params: 311
Trainable params: 311
Non-trainable params: 0
_________________________________________________________________

1.4 Compile Neural Network using "compile()"

After defining the Keras network, the next step is to compile it. The compilation step simply sets information like which optimizer, loss function, evaluation metrics, etc. to use.

We can call the compile() method on the model object to compile the model with the necessary information. The optimizer, loss, and metrics arguments accept a string as well as a callable as input. We can give the names of the optimizer, loss function, and metrics to use for the model; this will initialize them with default parameters. Keras also lets us create instances of the optimizer (from 'keras.optimizers'), loss function (from 'keras.losses'), and metrics (from 'keras.metrics') ourselves.

In the majority of situations, providing string values (which initialize these objects with default parameters) will work well. You'll need to create optimizer, loss function, and metrics objects only when you want to try values different from the defaults to improve results.
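As an illustration, compiling with explicit objects instead of strings might look like the sketch below; the learning rate is an arbitrary illustrative value, not one we tuned.

from tensorflow import keras

regressor.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.001), ## explicit optimizer object
    loss=keras.losses.MeanAbsoluteError(),               ## explicit loss object
    metrics=[keras.metrics.MeanSquaredError()],          ## explicit metric object
)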

Below, we have compiled the network to use the 'sgd' (stochastic gradient descent) optimizer, 'mae' (mean absolute error) loss function, and 'mse' (mean squared error) metric. MAE and MSE are commonly used metrics for regression tasks.

regressor.compile(optimizer="sgd", loss="mae", metrics=["mse"])

1.5 Train Neural Network using "fit()"

Keras provides us with a method named fit() to train our network after it is compiled. The method must be called after compilation; otherwise it raises an error.

The fit() method lets us provide the information below through its parameters.

  • x - Train data features. This can be a numpy array, tensorflow tensor, or keras generator object.
  • y - Train target labels. This can be a numpy array or tensorflow tensor.
  • batch_size - Batch size.
  • epochs - Number of epochs.
  • validation_data - Validation data. This accepts tuple of numpy arrays (x_val, y_val) or tensorflow tensors (x_val, y_val) specifying validation data. We can also give keras generator here.
  • validation_split - Validation data percent from train data. If we don't want to provide validation data explicitly but want to use a fraction of train data for validation then we can provide a float value in the range 0-1 to this parameter. It'll take that much of train data as validation data. E.g., 0.2 will take 20% train data as validation data.
  • shuffle - It accepts boolean values specifying whether to shuffle train data or not.
  • callbacks - It accepts various callback functions that can be executed during various steps of training like before epoch, after completion of an epoch, etc. We have a detailed tutorial on Keras callbacks which we would recommend that readers go through in their free time.

After completion, the fit() method returns a History object which has information about the training process, like values of train/validation loss and train/validation metrics after each epoch. The information is available through the history attribute of the History object as a dictionary.
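As an illustration of some of these parameters, a hypothetical call using validation_split and a callback might look like the sketch below; the epochs and patience values are arbitrary, and we did not use these options in this tutorial.

from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)

history = regressor.fit(x=X_train, y=Y_train,
                        batch_size=32, epochs=50,
                        validation_split=0.2,   ## hold out 20% of train data for validation
                        callbacks=[early_stop]) ## stop early if val loss stops improving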

Below, we have called the fit() method twice (first for 10 epochs and then for another 15 epochs) to train our network. We can notice that it prints loss and metric values for the train dataset after each epoch.

history1 = regressor.fit(x=X_train, y=Y_train, batch_size=32, epochs=10, verbose=1)
Epoch 1/10
13/13 [==============================] - 0s 905us/step - loss: 22.0373 - mse: 572.0268
Epoch 2/10
13/13 [==============================] - 0s 2ms/step - loss: 21.4763 - mse: 547.8861
Epoch 3/10
13/13 [==============================] - 0s 7ms/step - loss: 20.6195 - mse: 512.7371
Epoch 4/10
13/13 [==============================] - 0s 1ms/step - loss: 18.8489 - mse: 445.8504
Epoch 5/10
13/13 [==============================] - 0s 2ms/step - loss: 14.8295 - mse: 297.5774
Epoch 6/10
13/13 [==============================] - 0s 3ms/step - loss: 6.6707 - mse: 79.1585
Epoch 7/10
13/13 [==============================] - 0s 3ms/step - loss: 4.8571 - mse: 46.9216
Epoch 8/10
13/13 [==============================] - 0s 1ms/step - loss: 4.3183 - mse: 38.4516
Epoch 9/10
13/13 [==============================] - 0s 2ms/step - loss: 4.0402 - mse: 34.1163
Epoch 10/10
13/13 [==============================] - 0s 2ms/step - loss: 3.7723 - mse: 30.8966
history1.history
{'loss': [22.037322998046875,
  21.47634506225586,
  20.619464874267578,
  18.848920822143555,
  14.82948112487793,
  6.670732021331787,
  4.857050895690918,
  4.318281173706055,
  4.040211200714111,
  3.7723026275634766],
 'mse': [572.0267944335938,
  547.8861083984375,
  512.7371215820312,
  445.8503723144531,
  297.577392578125,
  79.15851593017578,
  46.92158126831055,
  38.451576232910156,
  34.11625289916992,
  30.896575927734375]}
history2 = regressor.fit(x=X_train, y=Y_train, batch_size=32, epochs=15, verbose=2)
Epoch 1/15
13/13 - 0s - loss: 3.6436 - mse: 28.8273 - 12ms/epoch - 899us/step
Epoch 2/15
13/13 - 0s - loss: 3.4942 - mse: 26.5449 - 10ms/epoch - 792us/step
Epoch 3/15
13/13 - 0s - loss: 3.4167 - mse: 25.2244 - 40ms/epoch - 3ms/step
Epoch 4/15
13/13 - 0s - loss: 3.3429 - mse: 24.4244 - 26ms/epoch - 2ms/step
Epoch 5/15
13/13 - 0s - loss: 3.1979 - mse: 22.3889 - 38ms/epoch - 3ms/step
Epoch 6/15
13/13 - 0s - loss: 3.1630 - mse: 22.1232 - 34ms/epoch - 3ms/step
Epoch 7/15
13/13 - 0s - loss: 2.9991 - mse: 20.1942 - 30ms/epoch - 2ms/step
Epoch 8/15
13/13 - 0s - loss: 3.0413 - mse: 20.0644 - 15ms/epoch - 1ms/step
Epoch 9/15
13/13 - 0s - loss: 2.9247 - mse: 18.9105 - 9ms/epoch - 727us/step
Epoch 10/15
13/13 - 0s - loss: 2.9486 - mse: 19.2644 - 36ms/epoch - 3ms/step
Epoch 11/15
13/13 - 0s - loss: 2.9347 - mse: 19.0133 - 39ms/epoch - 3ms/step
Epoch 12/15
13/13 - 0s - loss: 2.9305 - mse: 18.7458 - 14ms/epoch - 1ms/step
Epoch 13/15
13/13 - 0s - loss: 2.7300 - mse: 17.0204 - 9ms/epoch - 730us/step
Epoch 14/15
13/13 - 0s - loss: 2.7032 - mse: 16.7007 - 22ms/epoch - 2ms/step
Epoch 15/15
13/13 - 0s - loss: 2.8685 - mse: 17.7916 - 13ms/epoch - 964us/step

1.6 Make Predictions using "predict()"

To make predictions, the network provides us with a predict() method. This method accepts data features as a numpy array, tensorflow tensor, or keras generator object.

Below, we have made predictions on train and test datasets.

train_preds = regressor.predict(X_train)

train_preds[:5]
array([[45.351154],
       [16.935385],
       [20.20816 ],
       [31.050314],
       [17.825169]], dtype=float32)
test_preds = regressor.predict(X_test)

test_preds[:5]
array([[13.06893 ],
       [30.386465],
       [45.47716 ],
       [14.395776],
       [32.69201 ]], dtype=float32)

1.7 Evaluate Performance using "evaluate()"

We can evaluate the performance of the network using the evaluate() method. It calculates the loss and metric values that were set when compiling the network. We need to give it data features (x) and target values (y) to evaluate the network's performance on them.

In our case, it returns MAE and MSE, as set by us earlier during the compilation step. We have evaluated network performance on both train and test datasets below.

We can also calculate metrics by calling the metric functions available from the 'keras.metrics' module. We need to provide the function with actual target values and predicted values. Below, we have calculated MSE using the mse() function this way.

Apart from these, we can also use the various metric functions available from scikit-learn. We have calculated the R^2 score for train and test predictions. It is a commonly used metric for evaluating the performance of regression models; its value is typically in the range 0-1 (it can even be negative for very poor models), and values near 1 are a sign of a good, generalized model.


train_mae, train_mse = regressor.evaluate(x=X_train, y=Y_train, verbose=0)

print("Train MAE : {:.2f}".format(train_mae))
print("Train MSE : {:.2f}".format(train_mse))
Train MAE : 2.91
Train MSE : 17.01
test_mae, test_mse = regressor.evaluate(x=X_test, y=Y_test, verbose=0)

print("Train MAE : {:.2f}".format(test_mae))
print("Train MSE : {:.2f}".format(test_mse))
Train MAE : 3.38
Train MSE : 29.78
print("Train MSE : {:.2f}".format(keras.metrics.mse(Y_train, train_preds.squeeze())))
print("Test  MSE : {:.2f}".format(keras.metrics.mse(Y_test, test_preds.squeeze())))
Train MSE : 17.01
Test  MSE : 29.78
from sklearn.metrics import r2_score

print("Train R^2 Score : {:.2f}".format(r2_score(Y_train, train_preds.squeeze())))
print("Test  R^2 Score : {:.2f}".format(r2_score(Y_test, test_preds.squeeze())))
Train R^2 Score : 0.80
Test  R^2 Score : 0.64

2. Classification

In this section, we have explained how to create a simple network using Keras to solve a classification task. We have used a toy dataset available from scikit-learn for this purpose.

2.1 Load Dataset

Below, we have loaded the Breast cancer dataset available from scikit-learn. The dataset has 30 features (independent variables); they are various measurements of a tumor. The target variable is binary, telling us whether a tumor is malignant (0) or benign (1).

We have loaded the dataset directly as numpy arrays by setting the return_X_y parameter of the load_breast_cancer() method to True.

After loading the dataset, we have divided it into train (80%) and test (20%) sets. We have printed the shapes of the datasets for reference. We have also stored the number of features and the class labels in separate variables, as we'll need them later.

from sklearn import datasets
from sklearn.model_selection import train_test_split
import numpy as np

X, Y = datasets.load_breast_cancer(return_X_y=True)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.8, stratify=Y, random_state=123)

samples, features = X_train.shape
classes = np.unique(Y_test)

X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
((455, 30), (114, 30), (455,), (114,))
samples, features, classes
(455, 30, array([0, 1]))

2.2 Normalize Data

Here, we have normalized our data as in the regression section. As explained earlier, this helps our optimization algorithm (SGD) converge faster. We have used the train data mean and standard deviation to normalize both train and test datasets.

mean = X_train.mean(axis=0)
std = X_train.std(axis=0)

X_train = (X_train - mean)/ std
X_test = (X_test - mean)/ std

2.3 Create Neural Network Classifier

Here, we have created the network that we'll use for our classification task. Like the regression section, the network consists of 4 dense layers. We have created the network using the Sequential API of Keras.

The dense layers have 5, 10, 15, and 1 output units respectively. The first three layers have the relu activation function, whereas the last layer has the sigmoid activation function. The sigmoid function (sigmoid(x) = 1 / (1 + e^-x)) takes any input and transforms it into a float in the range 0-1. The output of the last layer will be the prediction of our network, which is the output of the sigmoid function in this case.

After defining the network, we have printed a summary of shapes and parameter counts of layers. We have also plotted the network using the visualization util of Keras.

from tensorflow.keras import models
from tensorflow.keras import layers


classifier = models.Sequential(
                                [
                                    layers.Dense(5, input_shape=(features,), activation="relu"),
                                    layers.Dense(10, activation="relu"),
                                    layers.Dense(15, activation="relu"),
                                    layers.Dense(1, activation="sigmoid"),
                                ]
                            )

classifier.summary()
Model: "sequential_19"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense_89 (Dense)            (None, 5)                 155

 dense_90 (Dense)            (None, 10)                60

 dense_91 (Dense)            (None, 15)                165

 dense_92 (Dense)            (None, 1)                 16

=================================================================
Total params: 396
Trainable params: 396
Non-trainable params: 0
_________________________________________________________________
keras.utils.plot_model(classifier, to_file="classifier.png",
                       show_shapes=True,
                       show_dtype=True,
                       show_layer_activations=True,
                       show_layer_names=True)

[plot_model() visualization of the classifier network]

2.4 Compile Neural Network

Below, we have compiled our classification network to use the SGD optimizer, binary cross entropy loss, and accuracy metric. As we have a binary task of classifying tumors as malignant or benign, we have used binary cross entropy loss. The accuracy metric simply measures the percentage of target labels that were correctly predicted by the model.

classifier.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
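For intuition about the loss, binary cross entropy for a single example with true label y and predicted probability p is -(y*log(p) + (1-y)*log(1-p)), averaged over the batch. Below is a minimal numpy sketch with made-up values, included only to illustrate the formula.

import numpy as np

y_true = np.array([1, 0, 1])       ## hypothetical true labels
p_pred = np.array([0.9, 0.2, 0.6]) ## hypothetical predicted probabilities

bce = -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))
print(bce) ## ~0.28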

2.5 Train Neural Network

In this section, we have trained our network by calling the fit() method on the model. We have trained the network for 10 epochs and then for another 15 epochs. The log messages show loss and accuracy metric values after each epoch. We can notice from these values that our network improved after each epoch.

history1 = classifier.fit(x=X_train, y=Y_train, batch_size=32, epochs=10, verbose=1)
Epoch 1/10
15/15 [==============================] - 0s 2ms/step - loss: 0.7469 - accuracy: 0.4330
Epoch 2/10
15/15 [==============================] - 0s 3ms/step - loss: 0.7171 - accuracy: 0.4967
Epoch 3/10
15/15 [==============================] - 0s 3ms/step - loss: 0.6939 - accuracy: 0.5648
Epoch 4/10
15/15 [==============================] - 0s 3ms/step - loss: 0.6770 - accuracy: 0.6418
Epoch 5/10
15/15 [==============================] - 0s 3ms/step - loss: 0.6625 - accuracy: 0.7275
Epoch 6/10
15/15 [==============================] - 0s 2ms/step - loss: 0.6500 - accuracy: 0.7890
Epoch 7/10
15/15 [==============================] - 0s 2ms/step - loss: 0.6380 - accuracy: 0.8088
Epoch 8/10
15/15 [==============================] - 0s 1ms/step - loss: 0.6258 - accuracy: 0.8132
Epoch 9/10
15/15 [==============================] - 0s 729us/step - loss: 0.6134 - accuracy: 0.8264
Epoch 10/10
15/15 [==============================] - 0s 898us/step - loss: 0.6005 - accuracy: 0.8330
history1.history
{'loss': [0.7468659281730652,
  0.7170534729957581,
  0.6938840746879578,
  0.6769975423812866,
  0.6625075936317444,
  0.6499618291854858,
  0.6379912495613098,
  0.6257580518722534,
  0.6133816838264465,
  0.6005017161369324],
 'accuracy': [0.43296703696250916,
  0.49670329689979553,
  0.5648351907730103,
  0.6417582631111145,
  0.7274725437164307,
  0.7890110015869141,
  0.8087912201881409,
  0.8131868243217468,
  0.8263736367225647,
  0.8329670429229736]}
history2 = classifier.fit(x=X_train, y=Y_train, batch_size=32, epochs=15, verbose=1)
Epoch 1/15
15/15 [==============================] - 0s 2ms/step - loss: 0.5872 - accuracy: 0.8484
Epoch 2/15
15/15 [==============================] - 0s 2ms/step - loss: 0.5723 - accuracy: 0.8571
Epoch 3/15
15/15 [==============================] - 0s 3ms/step - loss: 0.5568 - accuracy: 0.8637
Epoch 4/15
15/15 [==============================] - 0s 1ms/step - loss: 0.5412 - accuracy: 0.8703
Epoch 5/15
15/15 [==============================] - 0s 853us/step - loss: 0.5242 - accuracy: 0.8747
Epoch 6/15
15/15 [==============================] - 0s 894us/step - loss: 0.5074 - accuracy: 0.8813
Epoch 7/15
15/15 [==============================] - 0s 864us/step - loss: 0.4892 - accuracy: 0.8835
Epoch 8/15
15/15 [==============================] - 0s 928us/step - loss: 0.4706 - accuracy: 0.8813
Epoch 9/15
15/15 [==============================] - 0s 888us/step - loss: 0.4517 - accuracy: 0.8857
Epoch 10/15
15/15 [==============================] - 0s 940us/step - loss: 0.4331 - accuracy: 0.8879
Epoch 11/15
15/15 [==============================] - 0s 918us/step - loss: 0.4148 - accuracy: 0.8967
Epoch 12/15
15/15 [==============================] - 0s 912us/step - loss: 0.3965 - accuracy: 0.8989
Epoch 13/15
15/15 [==============================] - 0s 1ms/step - loss: 0.3792 - accuracy: 0.9033
Epoch 14/15
15/15 [==============================] - 0s 5ms/step - loss: 0.3629 - accuracy: 0.9077
Epoch 15/15
15/15 [==============================] - 0s 2ms/step - loss: 0.3461 - accuracy: 0.9099

2.6 Make Predictions

Below, we have made predictions using our trained network by calling the predict() method on both train and test datasets. As explained earlier, the last layer of the network applies the sigmoid function, hence the output of the network will be in the range 0-1.

Our actual target labels are binary (0 or 1). We can convert these probabilities to binary labels by setting a threshold at 0.5: predict 1 if a value is greater than 0.5, else 0.

train_preds = classifier.predict(X_train)

train_preds[:5]
array([[0.7017692 ],
       [0.7622429 ],
       [0.73560596],
       [0.70808345],
       [0.74465746]], dtype=float32)
test_preds = classifier.predict(X_test)

test_preds[:5]
array([[0.5614008 ],
       [0.06049469],
       [0.78290427],
       [0.69001055],
       [0.6275265 ]], dtype=float32)
train_preds_classes = (train_preds > 0.5).astype(np.float32)
test_preds_classes = (test_preds > 0.5).astype(np.float32)

train_preds_classes[:5], test_preds_classes[:5]
(array([[1.],
        [1.],
        [1.],
        [1.],
        [1.]], dtype=float32),
 array([[1.],
        [0.],
        [1.],
        [1.],
        [1.]], dtype=float32))

2.7 Evaluate Performance

In this section, we have evaluated the performance of our classification network.

We have calculated loss and accuracy on both train and test datasets using the evaluate() method. We can notice from the results that our model is doing a good job at the task.

We have also calculated loss and accuracy values separately using methods available from Keras and sklearn, for verification purposes.

Apart from accuracy, we have also calculated the classification report, which has precision, recall, and f1-score per target class. It helps us better understand for which classes our model is doing a good job and for which it is not.

train_loss, train_accuracy = classifier.evaluate(x=X_train, y=Y_train, verbose=0)

print("Train Binary CrossEntropy : {:.2f}".format(train_loss))
print("Train Accuracy : {:.2f}".format(train_accuracy))
Train Binary CrossEntropy : 0.34
Train Accuracy : 0.92
test_loss, test_accuracy = classifier.evaluate(x=X_test, y=Y_test, verbose=0)

print("Test Binary CrossEntropy : {:.2f}".format(test_loss))
print("Test Accuracy : {:.2f}".format(test_accuracy))
Test Binary CrossEntropy : 0.32
Test Accuracy : 0.94
from tensorflow.keras.metrics import binary_crossentropy

print("Train Binary CrossEntropy : {:.2f}".format(binary_crossentropy(Y_train, train_preds.squeeze())))
print("Test  Binary CrossEntropy : {:.2f}".format(binary_crossentropy(Y_test, test_preds.squeeze())))
Train Binary CrossEntropy : 0.34
Test  Binary CrossEntropy : 0.32
from sklearn.metrics import accuracy_score

print("Train Accuracy : {:.2f}".format(accuracy_score(Y_train, train_preds_classes.squeeze())))
print("Test  Accuracy : {:.2f}".format(accuracy_score(Y_test, test_preds_classes.squeeze())))
Train Accuracy : 0.92
Test  Accuracy : 0.94
from sklearn.metrics import classification_report

print("Test Data Classification Report : ")
print(classification_report(Y_test, test_preds_classes.squeeze()))
Test Data Classification Report :
              precision    recall  f1-score   support

           0       0.95      0.88      0.91        42
           1       0.93      0.97      0.95        72

    accuracy                           0.94       114
   macro avg       0.94      0.93      0.93       114
weighted avg       0.94      0.94      0.94       114
