Automatic Evaluation of Bushfire Risk via Acoustic Scene Analysis

Supervisors:

Primary supervisor: Dr Saeed Afshar

Description:

The bushfires of 2019-2020, known as the Black Summer, were among the most destructive fires ever experienced in Australia. They burnt more than 18 million hectares, killed 34 people and an estimated one billion animals, and cost the Australian economy billions of dollars.

Effective bushfire pre-emption and management in remote environments requires on-the-ground assessment of the landscape and ecology by highly trained experts. Fire risk varies greatly between regions and ecosystems. Different strategies are required to manage fire in the Central Australian deserts than in the tall, wet eucalypt forests of Mt Zero-Taravale, or in the tropical savannah woodlands of Cape York and the Kimberley. Traditional fire management by Indigenous communities has always relied on community elders to determine where and when controlled burns are ignited. Only those with a lifetime of knowledge and experience in the physical ecology of their homeland were ever trusted with the task of fire management.

The skills of a veteran fire management expert, whether grounded in the knowledge and traditions of First Nations peoples or in modern ecology and fire science, are in severely short supply, dwarfed by the continent-wide demand that the Black Summer fires demonstrated.

There may, however, be a way to use technology as a force multiplier in our fight against the destructive power of bushfires. Each landscape and ecosystem is, by definition, made up of characteristic distributions of different plants and animals. These interconnected distributions, changing with the seasons and in response to local conditions such as temperature, precipitation, and wind, carry visual and auditory signatures, which a fire management expert can associate, through decades of training and experience, with fire risk and management strategy.

While human experts are primarily visual animals and use sight to categorize the risk of a surveyed landscape, the auditory signatures emitted by the distributions of animals in a local environment are likely just as unique and informative as the visual signatures that inform the human expert.

However, unlike visual processing, which is extremely difficult to automate in natural, cluttered environments, the acoustic signatures of animals are comparatively easy for machines to learn and recognize. What's more, these signatures are largely unaffected by visual clutter. Compared to vision, auditory scene analysis requires very little power, enabling remote ecological monitoring systems that can operate for long periods on small batteries, or indefinitely using a small solar panel.

In this project, we propose to train our auditory scene analysis systems to find correlations between the soundscape and the fire risk measures provided by human experts. In a pilot trial, the system will be trained on ground-truth fire risk labels provided by human experts over a narrow range of environments and conditions, together with auditory scene information collected at the same sites. We will then attempt to build systems with the same discriminatory abilities as these experts, using auditory signals alone, and thereby functionally replicate their rare and hard-earned knowledge in our machines in a simple yet effective way.
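
As a rough illustration of how such a pilot could be implemented, the sketch below summarizes each field recording with log-mel band statistics and fits a classifier to expert-assigned risk labels. The library choices (librosa, scikit-learn), the feature set, and the risk categories are illustrative assumptions, not a prescribed toolchain.

```python
# Minimal sketch: learn a mapping from acoustic features to expert fire-risk labels.
# Library choices (librosa, scikit-learn) and the file/label layout are assumptions
# for illustration only.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def acoustic_features(wav_path, sr=22050):
    """Summarize a field recording as the mean and std of its log-mel band energies."""
    audio, sr = librosa.load(wav_path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

def train_risk_model(recordings):
    """recordings: list of (wav_path, expert_risk_label) pairs, e.g. ("site_03.wav", "high")."""
    X = np.stack([acoustic_features(path) for path, _ in recordings])
    y = [label for _, label in recordings]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)  # rough estimate of agreement with experts
    model.fit(X, y)
    return model, scores
```

Summary statistics over long recordings keep the compute footprint small, which matters for battery- or solar-powered field hardware; richer features or sequence models could replace them once the pilot data are in hand.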

Outcomes:

The goal of this project is to develop an automated auditory scene analysis system that assesses bushfire risk by learning from the acoustic signatures of ecosystems and from expert assessments. Such a system would facilitate continuous remote monitoring, enabling faster responses and more effective bushfire prevention, and could greatly enhance our ability to predict and prevent destructive bushfires, protecting both human and animal lives and conserving our precious natural landscapes. The project will include the following tasks:

  • Literature Review: A comprehensive study on fire management, ecology of Australian landscapes, and existing research on auditory scene analysis.
  • Data Collection: Recording and documenting acoustic data from various landscapes along with corresponding fire risk assessments from human experts.
  • System Design and Development: Design and develop an auditory scene analysis system that learns from the collected data.
  • System Training: Train the system to recognize acoustic signatures and correlate them to fire risk levels.
  • System Testing and Validation: Validate the system in real-world scenarios to ensure its reliability and accuracy in assessing fire risk (see the validation sketch after this list).
  • System Optimization: Fine-tune the system for increased accuracy, efficiency, and adaptability.
  • Field Deployment: Deploy the system in selected landscapes and monitor its performance over an extended period.
  • Evaluation and Analysis: Evaluate the system’s performance and analyze the impact of the information provided by it on fire management decisions.
  • Communication and Publication: Document the findings in a format appropriate for scientific publication and present at relevant conferences and workshops.
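
For the testing and validation task, one way to check that the learned model generalizes beyond the sites it was trained on is to hold out entire sites during cross-validation. The sketch below assumes the acoustic_features() helper from the earlier example; the site identifiers and risk labels are placeholders.

```python
# Site-held-out validation sketch: does the model generalize to landscapes it has never heard?
# Assumes the acoustic_features() helper from the earlier training sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

def validate_across_sites(recordings):
    """recordings: list of (wav_path, site_id, expert_risk_label) tuples."""
    X = np.stack([acoustic_features(path) for path, _, _ in recordings])
    groups = [site for _, site, _ in recordings]
    y = [label for _, _, label in recordings]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    # Each fold holds out whole sites, mimicking deployment to an unsurveyed landscape.
    scores = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=5))
    return scores.mean(), scores.std()
```

Agreement with expert labels on held-out sites, rather than on held-out clips from familiar sites, is a better proxy for how the system would perform when deployed somewhere new.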

Eligibility criteria:

In-depth knowledge of Python or C++ is required for system design, development, and testing. Knowledge of machine learning, particularly auditory scene analysis, and an understanding of ecology and fire management would be beneficial. Proficiency in data collection and analysis is also required.