OpenCV is an open-source library for machine learning, computer vision, and image processing. It ships with hundreds of pre-built algorithms and techniques, which makes it a popular choice for working with images and videos. It supports multiple languages, including Python, C++, and Java, and platforms such as Windows, iOS, and Android. The library can power numerous applications, including face recognition, object detection, and pedestrian counting. In this article, we will build one such application using this wonderful library.
Before getting started with the techie part, let's first understand the problem.
Zone intrusion detection is a technique to protect private buildings or property from invasion by unwanted people. When an intruder enters a protected zone during off-hours without the owner's knowledge, an alarm system is triggered to make the owner aware of the intrusion.
A zone intrusion detector can consist of several devices, including motion detectors, contact switches, and so on. In this project, however, we will rely entirely on a video camera and use OpenCV, along with some other helpful libraries, to build our own zone intrusion detector.
For this project, we will be using the Python programming language along with two libraries: OpenCV and NumPy.
We will start by importing the libraries. If you haven't already installed them, you can do so using the pip command. Just open your command prompt, paste the command below, hit Enter, and sit back while the libraries are installed.
pip install opencv-python numpy
If the above command completes without errors, the libraries were installed successfully.
You can either use your laptop's camera or some video file for this project. The first step is to open the video file, or start the camera stream if you are using your laptop's camera. Then we have to define the region we want to protect. For this, we will create a click_event function that allows you to select a rectangular region in the frame: left-click at the starting corner, drag the pointer to the opposite corner of the region, and release the button. By default, we take the whole frame, so you can skip this step by simply pressing any key to continue.
From the above snippet, you can observe that we used OpenCV mouse events to create the region of interest. A flag variable, "draw", tracks the state of the mouse pointer: while it is 1, the user is drawing the region of interest, and once they release the button it returns to 0, allowing the region to be redrawn. We store the coordinates of the region of interest in global variables so that later sections of the code can use them.
We will start reading the frames of the video, as shown in the code snippet below. We take two consecutive frames of the video and focus on the portion of the frame, i.e., the region of interest, that we defined in step 1.
Using an RGB image for this task would not help much and would slow down processing, so we convert these frames to grayscale. After that, we check whether anything has changed between the two consecutive frames. We can compute this change using OpenCV's absdiff() function, which returns the per-pixel difference between the two frames. The next step is to turn this difference image into a binary mask containing only white and black pixels; for that, we use image thresholding. White pixels mark where the frames differ, whereas black pixels mark where the two frames are the same. These changes can be very sharp and may contain several breaks, which makes them hard to detect in some cases, so we will use some image-processing techniques to rectify the problem.
We will use the Gaussian blur technique to smooth the image and dilation to fill the gaps between the white regions of the mask. After performing this image processing, our masked image looks as shown below.
Then we will calculate the area of each individual white segment in the image. For this, we use OpenCV's findContours() method, which extracts the boundary points of each segment. You can visualize these contours using OpenCV's drawContours() function. Feeding each contour to the contourArea() function gives us its area. We can ignore minute changes, as they might just be noise; for that, we define a threshold area. If a contour's area is less than this threshold (900 in our case), we ignore that contour; otherwise, we treat it as an intrusion. To show that we have detected the intrusion, we surround the contour with a green bounding box.
We can even use a buzzer to alert the owner. Consider implementing the buzzer alarm in this project as an assignment. There are several libraries you can use for that, such as winsound and beepy. Remember, you can always take help from Google!
OpenCV is a great library for computer vision and is helpful in many applications. In this article, we explored one such application of OpenCV: zone intrusion detection. In brief, the project goes as follows:

1. Capture the video (or open the camera) and let the user select the region of interest with a mouse-driven click_event function; by default, the whole frame is used.
2. Read two consecutive frames, crop them to the region of interest, and convert them to grayscale.
3. Compute the absolute difference between the frames and threshold it into a binary mask.
4. Smooth the mask with Gaussian blur and dilate it to fill the gaps between white regions.
5. Find contours in the mask, discard those smaller than the area threshold, and draw a green bounding box around the rest.
6. Optionally, sound a buzzer to alert the owner.