Search results

Results: 1

Abstract

Due to its relevant real-life applications, the recognition of emotions from speech signals constitutes a popular research topic. In traditional methods for speech emotion recognition, audio features are typically aggregated over a fixed-duration time window, potentially discarding information conveyed by speech at other signal durations. By contrast, in the proposed method, audio features are aggregated simultaneously over time windows of different lengths (a multi-time-scale approach), hence potentially making better use of the information carried at the phonemic, syllabic, and prosodic levels than the traditional approach. A genetic algorithm is employed to optimize the feature extraction procedure. The features aggregated over the different time windows are subsequently classified by an ensemble of support vector machine (SVM) classifiers. To improve the generalization capability of the method, a data augmentation technique based on pitch shifting and time stretching is applied. According to the obtained results, the developed method outperforms the traditional one on the selected datasets, demonstrating the benefits of the multi-time-scale approach to feature aggregation.
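The abstract describes the pipeline only at a high level, so the sketch below shows one plausible reading of it in Python: frame-level MFCCs are aggregated (mean and standard deviation) over windows of several assumed lengths, one SVM is trained per time scale with pitch-shift/time-stretch augmentation, and per-scale class probabilities are averaged at prediction time. All function names, window lengths, feature choices, and the fusion rule are illustrative assumptions rather than the authors' actual configuration; in particular, the genetic-algorithm optimization of feature extraction mentioned in the abstract is omitted here.

import numpy as np
import librosa
from sklearn.svm import SVC

# Aggregation window lengths in seconds -- assumed values meant to roughly
# cover phonemic, syllabic, and prosodic time scales (not the paper's values).
WINDOW_LENGTHS_S = [0.1, 0.5, 2.0]

def aggregate_features(y, sr, agg_s, hop_s=0.010):
    """Frame-level MFCCs aggregated (mean + std) over segments of agg_s seconds.

    Segment statistics are averaged afterwards, so every clip yields one
    fixed-length vector per time scale regardless of its duration.
    """
    hop = int(hop_s * sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
    frames_per_seg = max(1, int(agg_s / hop_s))
    segments = [mfcc[:, i:i + frames_per_seg]
                for i in range(0, mfcc.shape[1], frames_per_seg)]
    stats = [np.concatenate([seg.mean(axis=1), seg.std(axis=1)])
             for seg in segments]
    return np.mean(stats, axis=0)

def augment(y, sr):
    """Simple pitch-shift / time-stretch augmentation, as mentioned in the abstract."""
    yield y
    yield librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)
    yield librosa.effects.time_stretch(y, rate=1.1)

def train_ensemble(clips, labels, sr=16000):
    """Train one SVM per time scale on features aggregated at that scale only."""
    ensemble = []
    for win_s in WINDOW_LENGTHS_S:
        X, t = [], []
        for y, label in zip(clips, labels):
            for y_aug in augment(y, sr):
                X.append(aggregate_features(y_aug, sr, win_s))
                t.append(label)
        clf = SVC(kernel="rbf", probability=True).fit(np.array(X), t)
        ensemble.append((win_s, clf))
    return ensemble

def predict(ensemble, y, sr=16000):
    """Fuse the ensemble by averaging per-scale class probabilities."""
    probs = [clf.predict_proba(aggregate_features(y, sr, win_s).reshape(1, -1))[0]
             for win_s, clf in ensemble]
    return ensemble[0][1].classes_[int(np.argmax(np.mean(probs, axis=0)))]

Probability averaging is just one possible fusion rule; majority voting over the per-scale SVM decisions would be an equally simple alternative under the same assumptions.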
Go to article

Authors and Affiliations

Antonina Stefanowska 1
Sławomir K. Zielinski 1

  1. Faculty of Computer Science, Białystok University of Technology, Białystok, Poland
