Researchers from North China University of Technology developed a damaged-building assessment method using the single-shot multibox detector (SSD) algorithm
It is crucial to rapidly identify the location of damaged buildings during search and rescue after a disaster occurs. The use of drones to capture stable, clear aerial images is gaining major traction with the rapid development of unmanned aerial vehicle (UAV) and remote sensing technologies. Aerial images offer a wider field of view than ground search and rescue and eliminate the safety risks ground operations involve. However, manually evaluating images of a damaged area is prone to large numbers of false and missed detections owing to subjective human factors. Processing aerial images to automatically recognize and evaluate the degree of damage in an area therefore remains a challenging task.
Now, a team of researchers from North China University of Technology has developed a data-expansion SSD algorithm for a small dataset of Hurricane Sandy imagery. The team used VGG16 (also called OxfordNet), a convolutional neural network architecture named after the Visual Geometry Group. A VGG16 convolutional autoencoder was trained on the hurricane scenario, and the weights of its encoder were then used to replace the weights of the VGG16 backbone in the SSD model. The team found that this pre-training method enhanced various indicators of the model by around 10%. Moreover, detection accuracy was effectively increased through data expansion: the F1 score (the weighted harmonic mean of precision and recall) and the average precision increased by around 20% and 72%, respectively, and the rate of false detections was also reduced.
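The F1 score mentioned above is a standard detection metric. As a minimal illustration (the counts below are hypothetical, not figures from the paper), it can be computed from the numbers of correct, false, and missed detections:

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 80 buildings correctly detected,
# 20 false alarms, 10 damaged buildings missed.
print(round(f1_score(80, 20, 10), 3))  # → 0.842
```

Because the harmonic mean penalizes imbalance, a model cannot score well on F1 by trading many false alarms for a few extra correct detections, which is why it is a common summary metric for detectors alongside average precision.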
Furthermore, the team applied Gaussian noise and Gaussian blur to the training images, which enhanced the adaptability of the model to complex scenes. To verify the method, a dataset from Hurricane Irma was also used. The research was based on post-disaster building damage detection for Hurricane Sandy; in further research, the team expects to collect data from other post-disaster scenarios for training. Moreover, the algorithm could also be deployed on the cameras of UAVs for real-time detection, which in turn could help rescue staff reduce casualties during search and rescue. The research was published in the journal MDPI Applied Sciences on March 18, 2019.
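Augmentations of this kind are straightforward to reproduce. The following is a minimal sketch, using only NumPy on a 2D grayscale image; the function names and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def add_gaussian_noise(image: np.ndarray, sigma: float = 10.0, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise and clip back to the valid 8-bit range."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    return np.clip(np.rint(noisy), 0, 255).astype(np.uint8)

def gaussian_blur(image: np.ndarray, sigma: float = 1.0, radius: int = 2) -> np.ndarray:
    """Separable Gaussian blur: 1D kernel applied along rows, then columns."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()  # normalize so brightness is preserved
    padded = np.pad(image.astype(np.float64), radius, mode="edge")
    # Exploit separability of the Gaussian: convolve each row, then each column.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return np.clip(np.rint(blurred), 0, 255).astype(np.uint8)
```

Training on noisy and blurred copies of the same scenes exposes the detector to degraded imagery of the kind a UAV camera produces in poor conditions, which is the adaptability gain the article describes.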