
Intelligent Baby Behavior Monitoring using Embedded Vision in IoT for Smart Healthcare Centers

Paper (open access)

Notice

Before using this code for industrial purposes or redistributing it, please read the licensing information given at the end of this README.

There are three sections in this repository:

  • I) Development environment and pre-requisites
  • II) Source code
  • III) Details about implemented methods and working mechanism

I) Development environment and pre-requisites

The baby monitoring system is developed in Python (v3.5) using the Spyder Integrated Development Environment (IDE). Details of the libraries used in the implementation are given in Table 1.

Table 1: Description of libraries used in baby monitoring system


II) Source code

The package contains a total of 28 files covering the seven implemented methods. For each method, four different types of experiments are performed in separate scripts. The file names are described in Table 2.

Table 2: Description of file names for each method


The regions of motion are enclosed by one or more circles, depending on the areas of motion in the video frames. The concept of circles (no circle, one circle, multiple circles, and multiple circles with minimal overlap) is illustrated in Figure 1, where two consecutive frames are processed, converted into binary form, and the individual outputs are shown.

Figure 1: Demonstration of concept of circles

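For illustration, here is a minimal sketch of how motion regions could be encircled, assuming OpenCV (`cv2`); the helper name `draw_motion_circles`, the `min_area` parameter, and the input `binary` mask are illustrative assumptions rather than names from the repository scripts:

```python
import cv2

def draw_motion_circles(frame, binary, min_area=50.0):
    # Hypothetical helper: `binary` is a single-channel binary motion image
    # and `frame` is the color frame to draw on.
    # [-2] keeps compatibility with both OpenCV 3.x and 4.x return values.
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # skip tiny regions that are likely noise
        (x, y), radius = cv2.minEnclosingCircle(contour)
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    return frame
```

Depending on how many motion regions survive the area filter, this draws no circle, one circle, or multiple circles, matching the cases in Figure 1.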

a) Implementation details

The working of each step, along with the functions used in the program and the corresponding outputs and variable names, is given in Table 3; these steps are common to all methods. Table 4 lists the steps that are distinctive to each method.

Table 3: Generic steps for breathing detection of baby for all seven methods


Table 4: Distinctive steps for each method


b) How to run the program?

Each script file contains a main function that holds all the programming logic of the baby monitoring system. Run the script and choose an input video from the file dialog box. The input parameters and variables are described in Table 5.

Table 5: Description of variables of Python script

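As a minimal sketch of this entry point, assuming OpenCV and Tkinter's file dialog (the `process_frames` placeholder stands in for the per-method logic described below and is not a name from the repository):

```python
import cv2
from tkinter import Tk, filedialog

def main():
    # Hide the root Tk window and ask the user for an input video.
    root = Tk()
    root.withdraw()
    video_path = filedialog.askopenfilename(title="Select input video")
    root.destroy()

    cap = cv2.VideoCapture(video_path)
    ok, prev_frame = cap.read()
    while ok:
        ok, curr_frame = cap.read()
        if not ok:
            break
        # result = process_frames(prev_frame, curr_frame)  # hypothetical
        prev_frame = curr_frame
    cap.release()

if __name__ == "__main__":
    main()
```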

III) Details about implemented methods and working mechanism

We have implemented seven methods for the baby monitoring system: frame difference, optical flow, background subtraction, and various combinations of these. A detailed description of each method is given in the subsequent sections.

1. Frame difference (method 1)

This method comprises the following steps:
• Calculate the pixel-wise difference between two successive frames.
• Convert the difference image into binary form.
• Analyze the binary image to detect motion.
The working mechanism of method 1 is visualized in Figure 2.

Figure 2: Overall framework of method 1 (frame difference)

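A minimal sketch of these steps, assuming OpenCV; the function name and the threshold value of 25 are illustrative assumptions:

```python
import cv2

def frame_difference_mask(prev_frame, curr_frame, thresh=25):
    # Pixel-wise absolute difference of the two grayscale frames.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    # Binarize: pixels that changed by more than `thresh` count as motion.
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return binary
```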

2. Optical flow (method 2)

The optical flow technique estimates the motion between two consecutive frames of the video sequence, as shown in Figure 3. It uses the following steps:
• Consecutive frames at times (t, t + 1) are converted into the HSI color model.
• These frames are input to the optical flow algorithm, which computes the flow.
• The flow is converted into binary form, and the maximum-flow regions inside the binary image are selected as motion areas.

Figure 3: Overall framework of method 2 (optical flow)

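A minimal sketch, assuming OpenCV; the repository scripts reportedly convert the frames to the HSI color model, while this sketch uses the common grayscale Farnebäck dense optical flow and an illustrative magnitude threshold instead:

```python
import cv2
import numpy as np

def optical_flow_mask(prev_frame, curr_frame, mag_thresh=1.0):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between the two frames (Farnebäck's method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Flow magnitude per pixel; the largest-flow regions become motion areas.
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    binary = (magnitude > mag_thresh).astype(np.uint8) * 255
    return binary
```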

3. Background subtraction (method 3)

The background subtraction method relies on generating a foreground mask of the object. The foreground mask is a binary image containing the pixels that belong to the moving object or to motion present in the scene. The background subtraction algorithm calculates the foreground mask by subtracting a background model from the current frame, as shown in Figure 4.

Figure 4: Overall framework of method 3 (background subtraction)

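A minimal sketch, assuming OpenCV's MOG2 background model (the repository's exact choice of model is listed in its tables, so this is an assumption):

```python
import cv2

# Background model, updated with every frame passed to `apply`.
subtractor = cv2.createBackgroundSubtractorMOG2()

def background_subtraction_mask(frame, thresh=127):
    # Foreground mask: current frame compared against the background model.
    fg_mask = subtractor.apply(frame)
    # MOG2 marks shadows with gray values; thresholding makes the mask binary.
    _, binary = cv2.threshold(fg_mask, thresh, 255, cv2.THRESH_BINARY)
    return binary
```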

4. Frame difference + optical flow (method 4)

Method 4 is the combination of frame difference and optical flow (methods 1 and 2). In this method, the final binary image is obtained through a bitwise AND of the binary images produced by the two methods, as shown in Figure 5.

Figure 5: Overall framework of method 4 (frame difference + optical flow)

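A sketch of the combination step, reusing the hypothetical mask functions from above; methods 5 and 6 below follow the same pattern with a different pair of masks:

```python
import cv2

def combined_mask_method4(prev_frame, curr_frame):
    fd = frame_difference_mask(prev_frame, curr_frame)
    of = optical_flow_mask(prev_frame, curr_frame)
    # Keep only the pixels where both methods agree there is motion.
    return cv2.bitwise_and(fd, of)
```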

5. Frame difference + background subtraction (method 5)

Method 5 combines frame difference and background subtraction: it intersects the binary images obtained from the two methods and then estimates the motion from the intersected binary image, as visualized in Figure 6.

Figure 6: Overall framework of method 5 (frame difference + background subtraction)


6. Optical flow + background subtraction (method 6)

In method 6, optical flow is combined with background subtraction, and motion is estimated from the intersection of the binary images obtained by each method individually. The process flow is visualized in Figure 7.

Figure 7: Overall framework of method 6 (optical flow + background subtraction)


7. Hybrid method (method 7)

The hybrid method (method 7) is the combination of all three methods, i.e., frame difference, optical flow, and background subtraction, and the decision of “breathing” or “not breathing” is taken on the basis of all three. Each method processes the video independently until a binary image of the estimated motion is obtained, and the three final binary images are combined by a logical AND. The system works as follows: it takes a video as input, reads two successive frames, converts them to grayscale, and applies a Gaussian blur to remove noise; the noise is removed to avoid wrong breathing decisions, since the system may otherwise detect noise as motion. Each method then computes its binary motion image separately, and the images are combined by intersection. On the basis of the motion present inside the combined image, the breathing status of the baby is decided and displayed on the corresponding video, as shown in Figure 8.

Figure 8: Overall framework of method 7 (hybrid method)

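Putting the pieces together, a minimal end-to-end sketch of the hybrid decision, reusing the hypothetical mask functions from above; the blur kernel size and the motion-pixel threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def breathing_status(prev_frame, curr_frame, motion_thresh=100):
    # Gaussian blur suppresses noise that could be mistaken for motion.
    prev_s = cv2.GaussianBlur(prev_frame, (5, 5), 0)
    curr_s = cv2.GaussianBlur(curr_frame, (5, 5), 0)

    fd = frame_difference_mask(prev_s, curr_s)
    of = optical_flow_mask(prev_s, curr_s)
    bg = background_subtraction_mask(curr_s)

    # Logical AND: keep only motion confirmed by all three methods.
    combined = cv2.bitwise_and(cv2.bitwise_and(fd, of), bg)

    # If enough motion pixels survive the intersection, the baby is breathing.
    if np.count_nonzero(combined) > motion_thresh:
        return "breathing"
    return "not breathing"
```

The status string can then be overlaid on the output video, e.g. with `cv2.putText`.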

Contact Me

If you face any difficulty or find errors, please contact tanveerkhattak37973[at][gmail].

Outsourcing and Licensing

Copyright (c) 2019 "copyright notice checker" Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • None of the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Citation


Hussain, Tanveer, Khan Muhammad, Salman Khan, Amin Ullah, Mi Young Lee, and Sung Wook Baik. "Intelligent Baby Behavior Monitoring using Embedded Vision in IoT for Smart Healthcare Centers." Journal of Artificial Intelligence and Systems 1, no. 15 (2019).


If you are interested in similar works related to Computer Vision, you may find some of my other recent papers worth reading:


Hussain, T., Muhammad, K., Del Ser, J., Baik, S. W., & de Albuquerque, V. H. C. (2019). Intelligent Embedded Vision for Summarization of Multi-View Videos in IIoT. IEEE Transactions on Industrial Informatics.

K. Muhammad, T. Hussain, and S. W. Baik, "Efficient CNN based summarization of surveillance videos for resource-constrained devices," Pattern Recognition Letters, 2018.

Hussain, Tanveer, Khan Muhammad, Amin Ullah, Zehong Cao, Sung Wook Baik, and Victor Hugo C. de Albuquerque. "Cloud-Assisted Multi-View Video Summarization using CNN and Bi-Directional LSTM." IEEE Transactions on Industrial Informatics (2019).

K. Muhammad, T. Hussain, M. Tanveer, G. Sannino and V. H. C. de Albuquerque, "Cost-Effective Video Summarization using Deep CNN with Hierarchical Weighted Fusion for IoT Surveillance Networks," in IEEE Internet of Things Journal.
doi: 10.1109/JIOT.2019.2950469

K. Muhammad, T. Hussain, J. Del Ser, V. Palade and V. H. C. De Albuquerque, "DeepReS: A Deep Learning-based Video Summarization Strategy for Resource-Constrained Industrial Surveillance Scenarios," in IEEE Transactions on Industrial Informatics.
doi: 10.1109/TII.2019.2960536
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8936419&isnumber=4389054