In this notebook we provide an example of image segmentation in practice, using the OpenCV, Keras and TensorFlow libraries. If you run the notebook, you will see a video recorded from the windscreen of a car, superimposed with a segmentation layer that highlights the current lane.
This is the output that you should see:
This particular demonstration requires an advanced set of libraries that can be quite tricky to install. We tested this notebook on both Windows and macOS, and it should function correctly on either.
We need a particular version of the OpenCV library for it to work correctly on both platforms, so we first make sure that any other versions you might have installed are removed (using the pip uninstall command).
!pip uninstall opencv-python opencv-contrib-python -y
!pip install scipy keras tensorflow opencv-python-headless h5py==2.10.0
%matplotlib auto
import os
import cv2
import numpy as np
from keras.models import model_from_json
For the purposes of this notebook, we will use a pre-trained model that can identify the segment of the road belonging to the current lane.
The following function resizes the frames of the video to a size that is compatible with our model.
def prepare_image(image):
    # cv2.resize expects (width, height), so this yields an 80x160x3 image
    resized_image = cv2.resize(image, (160, 80))
    # Add a leading batch dimension for the model
    img = np.array(resized_image)[None, :, :, :]
    return img
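As a quick sanity check, here is a minimal sketch of the shape transformation this function performs, using a synthetic black array standing in for the resized frame (note that `cv2.resize` takes `(width, height)`, so the result has 80 rows and 160 columns):

```python
import numpy as np

# Stand-in for the output of cv2.resize(image, (160, 80)): 80 rows, 160 cols, 3 channels
resized = np.zeros((80, 160, 3), dtype=np.uint8)

# The [None, :, :, :] indexing prepends a batch dimension of size 1,
# which is the input shape Keras models expect for a single image
batch = resized[None, :, :, :]
print(batch.shape)  # (1, 80, 160, 3)
```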
The following function predicts the presence of a lane segment in a particular frame.
def lane_prediction(img):
    global lane_fits
    # Predict lane positions with the pre-trained neural network
    prediction = model.predict(img)[0] * 255
    lane_fits.append(prediction)
    # Keep only the last five predictions to average over
    if len(lane_fits) > 5:
        lane_fits = lane_fits[1:]
    average_fit = np.mean(np.array(lane_fits), axis=0)
    # Draw the lane in green (the G channel of a BGR image)
    blanks = np.zeros_like(average_fit).astype(np.uint8)
    lane_drawn = np.dstack((blanks, average_fit, blanks))
    # Resize the lane drawing to match the original image
    lane_image = cv2.resize(lane_drawn, (1280, 720))
    lane_image_norm = cv2.normalize(lane_image, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
    return lane_image_norm
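The rolling five-frame average above is what smooths out flicker between consecutive predictions. The smoothing logic can be isolated in a small self-contained sketch (with a hypothetical `smooth` helper and constant-valued arrays standing in for model outputs):

```python
import numpy as np

lane_fits = []

def smooth(prediction, window=5):
    # Append the newest prediction and drop the oldest once the window is full
    global lane_fits
    lane_fits.append(prediction)
    if len(lane_fits) > window:
        lane_fits = lane_fits[1:]
    # Element-wise mean over the retained predictions
    return np.mean(np.array(lane_fits), axis=0)

# Feed eight "frames" whose predictions are the constants 0..7
for i in range(8):
    avg = smooth(np.full((80, 160), float(i)))

# Only the last five predictions (3..7) remain; their mean is 5.0
print(len(lane_fits), avg[0, 0])  # 5 5.0
```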
Load the video file and the pre-trained model.
# path to video file
video_file = 'data/example.mp4'
# load video file
cap = cv2.VideoCapture(video_file)
# store lane prediction results
lane_fits = []
# load the pre-trained model architecture
with open('model/model.json', 'r') as json_file:
    json_model = json_file.read()
model = model_from_json(json_model)
# load the weights of the model
model.load_weights('model/model.h5')
Read the video frames in a loop and send each one to the model for lane detection.
Press "Q" to stop the stream.
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    original_image = cv2.resize(frame, (1280, 720))
    img = prepare_image(frame)
    lanes = lane_prediction(img)
    # merge the detection results into the original image
    merged_image = cv2.addWeighted(original_image, 1, lanes, 1, 0)
    cv2.imshow('Lane detection', merged_image)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
cv2.waitKey(1)