Testing of the trained YOLOv5 neural network model.
This script guides you through the steps to test and evaluate the accuracy of the trained model. More specifically, it prepares the padded PNG image for manual labelling, segments the labelled image, runs the detection on the same non-labelled image and backward-annotates it, compares your manual annotation to the detected annotation, and renders a confusion matrix to evaluate the model's performance.
1. Path definitions and set-up
Show the code
import os
import shutil
import yaml
import argparse
import os.path as path
import scipy.cluster
import scipy.spatial
import json
import sys
import subprocess
import numpy as np
import pandas as pd
import scipy
import random
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
from tqdm import tqdm
import torch
import stat
import shutil
from datetime import datetime
from counting_wh.wh_utils.config import cfg
from counting_wh.wh_utils import image_cutting_support as ics
from counting_wh.wh_utils import heatmap as hm
import counting_wh.wh_utils.classifier

# Add the project root to sys.path (adjust as needed)
sys.path.append(os.path.abspath("Waterholes_project/WaterholeDetection_UN-Handbook"))
c:\Users\fossatia\AppData\Local\miniconda3\envs\Boats\lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
YOLOv5 v7.0-394-g86fd1ab2 Python-3.10.16 torch-1.12.1+cu113 CUDA:0 (GeForce GTX 1080, 8192MiB)
Fusing layers...
Model summary: 157 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...
2. Preparation of the padded PNG image
Directly from the TIF file, this step prepares the padded PNG image for the subsequent steps.
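The repository provides the code for this conversion; purely as a minimal sketch of what the padding step does (the tile size of 416 px and the file names are assumptions, not the project's actual values), the image can be padded to a multiple of the tile size like this:

# Minimal sketch only: pad a TIF to a multiple of the tile size and save it as a PNG.
# The tile size (416 px) and the file names are assumptions, not the project's values.
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow very large satellite images
TILE = 416  # assumed YOLOv5 tile size

img = np.asarray(Image.open("20240204_mimal_test.tif"))
pad_h = (-img.shape[0]) % TILE  # rows to add so the height divides evenly
pad_w = (-img.shape[1]) % TILE  # columns to add so the width divides evenly

# Pad the bottom and right edges with zeros, then write out the PNG
padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")
Image.fromarray(padded).save("20240204_mimal_test.png")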
3. Manual annotation of the PNG image
Manually annotate the created PNG with labelme, as in the training step. Refer to the previous tab for detailed instructions on how to use labelme. Make sure to use the same categories as in the training step, so that your manual annotation can be compared with the detections of the trained model, i.e. so the model can be tested.
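Before moving on, it can help to double-check that your annotations only use the training categories. A small sketch, assuming the labelme JSON files are stored next to the PNGs (the folder path is a placeholder):

# Hypothetical check: list every category used in the labelme JSON annotations.
# The folder path is a placeholder; point it at wherever your .json files live.
import glob
import json

labels = set()
for json_file in glob.glob("testing/pngs/*.json"):
    with open(json_file) as f:
        annotation = json.load(f)
    # labelme stores one entry per drawn shape, each with its class label
    labels.update(shape["label"] for shape in annotation["shapes"])

print("Categories found in the manual annotations:", sorted(labels))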
4. Segmentation of the annotated image
Once the manual annotation is done, we can apply the segmentation. Note that this segmentation does not set aside 20% of the images for model validation, meaning all segmented images will be used for testing.
Show the code
# Run segmentation without splitting off part of the images for validation
import counting_wh.wh_utils.testing

counting_wh.wh_utils.testing.segment(
    r"Waterholes_project/WaterholeDetection_UN-Handbook/testing",
    "config_test_Drive_UN.yaml",
)
Cropping Image: D:/Waterholes_project/counting_waterholes/testing\./pngs\20240204_mimal_test.png
[12064 14976 3]
We will have: 15933 images maximum
0% of images without labels will be removed
Skipped 728 images
Empty 0 images
Cropping Image: D:/Waterholes_project/counting_waterholes/testing\./pngs\20240324_mimal_test.png
[12064 14976 3]
We will have: 15933 images maximum
0% of images without labels will be removed
Skipped 1676 images
Empty 0 images
Cropping Image: D:/Waterholes_project/counting_waterholes/testing\./pngs\20240415_mimal_test.png
[12064 14976 3]
We will have: 15933 images maximum
0% of images without labels will be removed
Skipped 790 images
Empty 0 images
Segregating by day...
01_01_2024
04_02_2024
24_03_2024
15_04_2024
Segregating by image...
Segregating by image...
Segregating by image...
Segregating by image...
Segregating by day...
01_01_2024
04_02_2024
24_03_2024
15_04_2024
Segregating by image...
Segregating by image...
Segregating by image...
Segregating by image...
5. Waterhole detection with the trained model
Using these segmented, labelled images, we can run waterhole detection with the trained model and compare your manual labels with the model's detections.
Debugging section to make sure the GPU is recognised and working correctly:
Note to user: if you are running on a GPU, you need a torchvision build that matches your CUDA (GPU) version. The 'nvidia-smi' command reports your CUDA version (in my case: 11), so I need torch and torchvision builds compiled for CUDA 11.x.
Run 'pip uninstall torch torchvision', then install the matching versions, in my case: 'pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113'
Other versions can be found on this website: https://pytorch.org/get-started/previous-versions/
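To confirm which builds ended up installed, you can print the torch version and the CUDA runtime it was compiled against directly from Python (a quick environment check, not part of the pipeline):

# Quick environment check: report the installed torch build and its CUDA version
import torch

print("torch:", torch.__version__)        # e.g. 1.12.1+cu113
print("CUDA build:", torch.version.cuda)  # e.g. 11.3
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))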
Note that testing.segment and run_detection expect the segmented image folders to be grouped per date. This is left as it is for now, but it is something to bear in mind!
Show the code
# Check whether the GPU is found or not
torch.cuda.is_available()
True
The GPU is found, so we can proceed with the detection:
Show the code
# Run detection on the testing image set
import counting_wh.wh_utils.testing

counting_wh.wh_utils.testing.run_detection(
    r"C:/Users/adria/OneDrive - AdrianoFossati/Documents/MASTER Australia/RA/Waterholes_project/WaterholeDetection_UN-Handbook/testing",
    "config_test_Drive_UN.yaml",
)
Below is a copy of the command that run_detection automatically passes to the YOLOv5 CLI. It is shown only so you can check it and have the information needed to debug or adapt it to your computational power. Do not run it.
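The machine-specific command itself is not reproduced in this rendering. Purely as a generic illustration, a YOLOv5 detection run launched through subprocess typically looks like the sketch below; the weights path, source folder, image size, and confidence threshold are placeholder values, not the ones run_detection uses.

# Generic illustration only: a typical YOLOv5 detect.py call via subprocess.
# All paths and parameter values below are placeholders, not those used by run_detection.
import subprocess

subprocess.run([
    "python", "yolov5/detect.py",
    "--weights", "runs/train/exp/weights/best.pt",  # trained model weights (placeholder path)
    "--source", "testing/images/04_02_2024",        # folder of segmented tiles (placeholder path)
    "--imgsz", "416",                               # tile size used at segmentation (assumed)
    "--conf-thres", "0.25",                         # detection confidence threshold (placeholder)
    "--save-txt", "--save-conf",                    # save YOLO-format labels with confidences
], check=True)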
Label directory D:/Waterholes_project/counting_waterholes/testing_v3\./labels\05_02_2025\050225_NT does not exist, skipping image...
raw_images folder D:/Waterholes_project/counting_waterholes/testing_v3\./raw_images
Could not parse date from 050225_NT.csv
7. Confusion matrix
Create the confusion matrix, which summarises the performance of the model.
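The comparison code in the repository produces this matrix; as a minimal sketch of the idea, assuming you already have matched lists of manual and detected classes per segmented tile (the label lists and class names below are placeholders), scikit-learn's ConfusionMatrixDisplay can render it:

# Minimal sketch: build and plot a confusion matrix from matched label lists.
# `manual_labels` and `detected_labels` are placeholders for the per-tile classes
# produced by the comparison step (with a "background" class for misses and false alarms).
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

classes = ["waterhole", "background"]  # assumed category names
manual_labels = ["waterhole", "waterhole", "background", "waterhole"]
detected_labels = ["waterhole", "background", "background", "waterhole"]

cm = confusion_matrix(manual_labels, detected_labels, labels=classes)
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=classes).plot(cmap="Blues")
plt.title("Manual annotation vs model detection")
plt.show()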