Underwater Acoustic Communication Mimicking Sea Noise: Classification by Multi-Scale Deep Feature Aggregation, Low Complexity, and Data Augmentation.


Acoustic-Communication-Mimicking-Sea

Get Started with Relevant Project Implementation

  • If you're looking for assistance with a project implementation that aligns with your needs, feel free to get in touch with us on LinkedIn.
  • To get in touch with us and discuss your project implementation needs, please send an email to 4444stark@gmail.com.
  • Thank you for considering our services. We look forward to working with you!

Abstract

This research study explores the novel use of machine learning (ML) in the field of underwater acoustic communication (UAC), with an emphasis on the categorization of various underwater noises. The major goal is to design a communication system that can successfully send information while evading detection, thereby increasing the security and effectiveness of underwater communication. The study examines the difficulties posed by the complex undersea environment and describes how ML may be used to overcome these constraints. It gives a thorough analysis of several ML methods, with a focus on Deep Learning (DL), and of how well they model the underwater audio channel. The results show how ML may improve UAC system performance, enabling safer and more effective underwater communication. To categorize the different forms of natural noise present in the depths of aquatic ecosystems, a range of methods, including a Convolutional Neural Network (CNN) and Multi-Scale Deep Feature Aggregation (MSDFA), are used in this study. The significance of these findings is discussed in the paper's conclusion, along with suggestions for further study in this fascinating area.

Classification Accuracies of Different Methods

Illustration 1: Graphical representation of the data visualization associated with the various machine learning models.
Illustration 2: Comparison of classification accuracies across the different machine learning models.

Confusion Matrices of the Methods

Heat maps of the confusion matrices for the MSDFA, DenseNet, CNN, and residual network methods.
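
For reference, the heat maps above could be reproduced with a short plotting routine. The following is a minimal sketch, assuming scikit-learn and seaborn are available; the class names and the randomly generated labels are purely illustrative placeholders, not the repository's actual evaluation code or results.

```python
# Minimal sketch: render a model's confusion matrix as a heat map.
# Assumes scikit-learn + seaborn; labels and data below are illustrative only.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

CLASS_NAMES = ["St. David's", "Argus Island", "Castle Harbour", "Tortola"]  # assumed region classes


def plot_confusion_heatmap(y_true, y_pred, title):
    """Compute the confusion matrix and draw it as an annotated heat map."""
    cm = confusion_matrix(y_true, y_pred)
    sns.heatmap(cm, annot=True, fmt="d", cmap="Blues",
                xticklabels=CLASS_NAMES, yticklabels=CLASS_NAMES)
    plt.title(title)
    plt.xlabel("Predicted class")
    plt.ylabel("True class")
    plt.tight_layout()
    plt.show()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 4, size=200)                       # placeholder labels
    y_pred = np.where(rng.random(200) < 0.8, y_true,
                      rng.integers(0, 4, size=200))             # mostly-correct predictions
    plot_confusion_heatmap(y_true, y_pred, "MSDFA (illustrative data)")
```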

Motivation

The motivation for this work is to conceal underwater acoustic communication by mimicking the sea's natural environmental noise, using a multi-scale deep feature aggregation convolutional neural network. Multi-scale deep features are expected to be effective for representing characteristic differences among the different kinds of mimicked sea noise, and thus to obtain better results than single-scale deep features or hand-crafted features. The Mel-Frequency Cepstral Coefficient (MFCC) is adopted as the input of the CNNs for extracting multi-scale deep features because of its excellent performance in most audio signal processing tasks. A CNN is used to produce the multi-scale deep features because of its powerful ability to learn discriminative information from its input data through effective operations such as convolution and pooling.
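
To make this pipeline concrete, here is a minimal sketch of MFCC extraction followed by multi-scale deep feature aggregation. It assumes librosa for the audio features and PyTorch for the CNN; the file name, kernel scales, layer widths, and class count are illustrative assumptions, not the repository's exact architecture.

```python
# Minimal sketch: MFCC input + multi-scale deep feature aggregation CNN.
# Assumes librosa and PyTorch; all sizes and names are illustrative.
import librosa
import torch
import torch.nn as nn


def extract_mfcc(path, sr=22050, n_mfcc=40):
    """Load an audio clip and return its MFCC matrix (n_mfcc x frames)."""
    signal, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)


class MSDFA(nn.Module):
    """Multi-scale deep feature aggregation over an MFCC 'image'."""

    def __init__(self, num_classes=4):
        super().__init__()
        # Parallel convolutional branches with different kernel sizes
        # capture time-frequency patterns at different scales.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, k, padding=k // 2),
                          nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for k in (3, 5, 7)
        ])
        self.classifier = nn.Linear(16 * 3, num_classes)

    def forward(self, x):                     # x: (batch, 1, n_mfcc, frames)
        feats = [b(x).flatten(1) for b in self.branches]
        return self.classifier(torch.cat(feats, dim=1))   # aggregate the scales


if __name__ == "__main__":
    mfcc = extract_mfcc("example_whale_clip.wav")            # hypothetical file
    x = torch.tensor(mfcc, dtype=torch.float32)[None, None]  # add batch/channel dims
    logits = MSDFA(num_classes=4)(x)
    print(logits.shape)                                      # torch.Size([1, 4])
```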

Dataset

We used a particular dataset for our study: the Megaptera novaeangliae (Humpback Whale) dataset. This dataset is especially noteworthy for its extensive collection of acoustic recordings of the Humpback Whale, which offers a rich supply of data for our machine learning model. It contains a range of distinct classes and traits that were essential to the experimental procedure described in this study; these classes correspond to the geographic regions where the audio data were gathered.
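
Since the classes correspond to recording regions, one simple way to index the clips is by region sub-folder. The following is a minimal sketch under that assumption; the dataset root and folder names are hypothetical, not the repository's actual layout.

```python
# Minimal sketch: map recordings to class labels, one class per region.
# The dataset root and folder names are hypothetical placeholders.
from pathlib import Path

DATASET_ROOT = Path("data/humpback_whale")   # hypothetical location
REGIONS = [
    "st_davids_island",
    "argus_island",
    "castle_harbour",
    "north_of_tortola",
]


def build_file_index(root: Path = DATASET_ROOT):
    """Return (wav_path, class_index) pairs, one class index per region folder."""
    index = []
    for label, region in enumerate(REGIONS):
        for wav in sorted((root / region).glob("*.wav")):
            index.append((wav, label))
    return index


if __name__ == "__main__":
    for path, label in build_file_index()[:5]:
        print(label, path.name)
```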

Representation of Humpback whale sounds in St. David's Island, Bermuda region

Particularly, these regions include the waters 1-1.5 miles north of Tortola, as well as Argus Island, Castle Harbour, and St. David's Island. Each of these places has distinctive acoustic features that add to the dataset's variety and comprehensiveness. This variety is essential for building a strong machine learning model, since it enables the model to generalize from a wide range of auditory data.

Representation of Humpback whale sounds in Argus Island, Bermuda region

By using this dataset to train our machine learning model to categorize various underwater noises, we were able to advance the field of Underwater Acoustic Communication (UAC). This report describes the outcomes of these experiments in detail, as well as the implications of our conclusions.

Representation of Humpback whale sounds in Castle Harbour, Bermuda region

Representation of Humpback whale sounds in the 1-1.5 miles north of Tortola, British Virgin Islands region