Real-time Shaker Video Analysis for Shell Project

Date: May 25, 2018
Last Updated: May 27, 2018
Categories:
Projects python tensorflow
Tags:
python python-c-api deep-learning image-processing tensorflow video-processing

Introduction

In this project, we aim to process a video stream using deep learning tools (TensorFlow). The data come from a camera deployed beside a real shaker on a well-logging site. When the shaker is idle, its surface is clean. However, during the well-logging process, the cuttings flow arrives and is captured by the camera.

To build a complete system, we divide our work into two phases.

Training

We use several videos recorded from the camera (at high or low resolution) to train the deep network. Every 5 video frames are grouped together and extracted with FFmpeg; we then crop the frames to a fixed region for each video so that we can concentrate on the information on the surface of the shaker. After that, we feed the frame groups to the deep network for supervised learning. The frame groups are labeled with 3 classes (None, Light and Heavy). After sufficient training steps, the network is well trained.
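
The sketch below illustrates this data-preparation pipeline, assuming a hypothetical crop region, file layout, and a small tf.keras classifier; the actual network architecture and parameters used in the project are not shown here.

```python
# A minimal sketch of the training-data pipeline described above.
# File names, the crop region, and frame sizes are placeholders,
# not the project's actual values.
import glob
import subprocess

import numpy as np
import tensorflow as tf
from PIL import Image

FRAMES_PER_GROUP = 5
CROP = "crop=640:360:100:200"   # hypothetical fixed region (w:h:x:y)
CLASSES = ["None", "Light", "Heavy"]

def extract_frames(video_path, out_dir):
    """Decode frames with FFmpeg and crop the fixed region."""
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", CROP,
         f"{out_dir}/frame_%05d.png"],
        check=True)

def load_groups(frame_dir):
    """Group every 5 consecutive cropped frames into one sample."""
    paths = sorted(glob.glob(f"{frame_dir}/frame_*.png"))
    groups = []
    for i in range(0, len(paths) - FRAMES_PER_GROUP + 1, FRAMES_PER_GROUP):
        frames = [np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
                  for p in paths[i:i + FRAMES_PER_GROUP]]
        groups.append(np.stack(frames, axis=-1))   # shape (H, W, 5)
    return np.stack(groups)

def build_model(input_shape):
    """A small CNN that maps a 5-frame stack to the 3 classes."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
    ])

# model = build_model((360, 640, FRAMES_PER_GROUP))
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10)
```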

Testing

In the real application, we use a timer coupled with a dual-thread video-stream client. The dual-thread client is also built on FFmpeg. One thread runs continuously: it receives the remote stream, extracts frames without interruption, and saves them into a circular buffer. The other thread is driven by the timer, which fires every few milliseconds. Each time the timer fires, it wakes the second thread, which reads 5 frames from the circular buffer. These frames are fed into the deep network, which produces the prediction, i.e. the result.
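
The sketch below illustrates the idea with Python threads, a deque as the circular buffer, and cv2.VideoCapture standing in for the FFmpeg-based client (which in the project is exposed through the Python C API); the stream URL, timer period, and model path are placeholders.

```python
# A minimal sketch of the dual-thread test-time client described above.
# STREAM_URL, INTERVAL_SEC, and the model path are assumptions.
import collections
import threading
import time

import cv2
import numpy as np
import tensorflow as tf

STREAM_URL = "rtsp://camera.example/stream"   # hypothetical stream address
FRAMES_PER_GROUP = 5
INTERVAL_SEC = 0.5                            # timer period (assumed)
CLASSES = ["None", "Light", "Heavy"]

buffer = collections.deque(maxlen=100)        # circular buffer of frames
buffer_lock = threading.Lock()
wake = threading.Event()                      # timer -> predictor signal

def receiver():
    """Thread 1: receive the remote stream and fill the circular buffer."""
    cap = cv2.VideoCapture(STREAM_URL)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        with buffer_lock:
            buffer.append(gray)

def predictor(model):
    """Thread 2: woken by the timer, reads 5 frames and runs the network."""
    while True:
        wake.wait()
        wake.clear()
        with buffer_lock:
            if len(buffer) < FRAMES_PER_GROUP:
                continue
            frames = list(buffer)[-FRAMES_PER_GROUP:]
        x = np.stack(frames, axis=-1)[np.newaxis, ...]   # (1, H, W, 5)
        probs = model.predict(x)[0]
        print("prediction:", CLASSES[int(np.argmax(probs))])

def timer_loop():
    """Periodically wake the predictor thread."""
    while True:
        time.sleep(INTERVAL_SEC)
        wake.set()

# model = tf.keras.models.load_model("shaker_model.h5")   # hypothetical path
# for fn, args in [(receiver, ()), (predictor, (model,)), (timer_loop, ())]:
#     threading.Thread(target=fn, args=args, daemon=True).start()
```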

Project Page

See here to view the main page of this project.

Git Page

  • Note that because we have not published the project yet, the main page only contains some logs.
  • I maintain only part of the project, so the version on my site is not complete. You may need to contact the first author, mikedukiddie, to access the full version.