Prof. Wenwu Wang

Biography: Wenwu Wang is a Professor in Signal Processing and Machine Learning, and a Co-Director of the Machine Audition Lab within the Centre for Vision, Speech and Signal Processing, University of Surrey, UK. He is also a Guest Professor at Qingdao University of Science and Technology, China.


He received the B.Sc. degree in 1997, the M.E. degree in 2000, and the Ph.D. degree in 2002, all from the College of Automation, Harbin Engineering University, China. He worked at King’s College London (2002-2003), Cardiff University (2004-2005), Tao Group Ltd. (now Antix Labs Ltd.) (2005-2006), Creative Labs (2006-2007), and the University of Surrey (since May 2007). He was a Visiting Scholar at Ohio State University, USA, in 2008. His current research interests include blind signal processing, sparse signal processing, audio-visual signal processing, machine learning and perception, artificial intelligence, machine audition (listening), and statistical anomaly detection. He has (co-)authored over 250 publications in these areas.


He and his team have won the Reproducible System Award at DCASE 2019, the Best Student Paper Award at LVA/ICA 2018, the Best Oral Presentation Award at FSDM 2016, and the Top-Quality Paper Award at IEEE ICME 2015, and were Best Student Paper Award finalists at ICASSP 2019 and LVA/ICA 2010. He and his team achieved 1st place (among 35 submitted systems) in the 2017 DCASE Challenge on "Large-scale weakly supervised sound event detection for smart cars", 3rd place (among 558 submissions) in the 2018 Kaggle Challenge on "Freesound General-Purpose Audio Tagging", the TVB Europe Award for Best Achievement in Sound in 2016, a finalist place for the Google Play Best VR Experience in 2017, and the Best Solution Award in the 2012 Dstl Challenge on "Under-sampled Signal Recognition".


He has been a Senior Area Editor (2019-) and Associate Editor (2014-2018) for IEEE Transactions on Signal Processing. He is an Associate Editor (2019-) for the EURASIP Journal on Audio, Speech, and Music Processing. He was a Publication Co-Chair for ICASSP 2019, Brighton, UK, and will serve as Tutorial Chair for ICASSP 2024, Seoul, South Korea. He also serves as a Member (2019-) of the International Steering Committee of Latent Variable Analysis and Signal Separation.

More information is available on his personal page:
http://personal.ee.surrey.ac.uk/Personal/W.Wang/


Title: Deep Learning for Audio Classification

Abstract: Audio classification (e.g. audio scene analysis, audio event detection, and audio tagging) has a variety of potential applications in security surveillance, intelligent sensing for smart homes and cities, multimedia search and retrieval, and healthcare. This research area has developed rapidly in recent years, attracting increasing interest from both academia and industry. In this talk, we will present recent and new developments addressing several challenges related to this topic, including data challenges (e.g. the DCASE challenges), acoustic modelling, feature learning, dealing with weakly labelled data, and learning with noisy labels. We will show the latest results of our proposed algorithms, such as the attention neural network algorithms for learning with weakly labelled data, and their results on AudioSet, a large-scale dataset provided by Google, compared with several baseline methods. We will also use sound demos to illustrate the potential of our proposed algorithms.
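To make the idea of attention-based learning from weakly labelled data concrete, the sketch below shows a minimal attention-pooling layer of the general kind mentioned in the abstract: frame-level class scores are aggregated into a single clip-level prediction via learned attention weights, so training needs only clip-level (weak) labels. This is an illustrative assumption-laden example, not the speaker's implementation; the feature dimension, layer choices, and the 527-class AudioSet output size are assumed for the sketch.

import torch
import torch.nn as nn


class AttentionPooling(nn.Module):
    """Aggregate frame-level class probabilities into clip-level predictions.

    Each frame produces a classification score and an attention weight per
    class; the clip-level output is the attention-weighted average of the
    frame scores, so only weak (clip-level) labels are needed for training.
    """

    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.cla = nn.Linear(feature_dim, num_classes)  # frame-level classifier
        self.att = nn.Linear(feature_dim, num_classes)  # frame-level attention

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, time, feature_dim)
        scores = torch.sigmoid(self.cla(frame_features))          # (B, T, C)
        weights = torch.softmax(self.att(frame_features), dim=1)  # normalise over time
        clip_probs = (weights * scores).sum(dim=1)                # (B, C)
        return clip_probs.clamp(1e-7, 1.0 - 1e-7)


if __name__ == "__main__":
    # Toy usage: 4 clips, 100 frames of 128-dim embeddings, 527 AudioSet classes.
    pool = AttentionPooling(feature_dim=128, num_classes=527)
    features = torch.randn(4, 100, 128)   # stand-in for CNN frame embeddings
    clip_probs = pool(features)
    weak_labels = torch.randint(0, 2, (4, 527)).float()
    loss = nn.functional.binary_cross_entropy(clip_probs, weak_labels)
    print(clip_probs.shape, loss.item())

In practice the frame embeddings would come from a convolutional feature extractor over log-mel spectrograms, and the binary cross-entropy loss on clip-level probabilities is what allows training without frame-level annotations.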


Important dates

Paper Submission Deadline: 30 April 2019 (extended to 31 May 2019)
Paper Acceptance Notification: 15 July 2019
Camera-ready Paper Submission: 15 August 2019
Registration Open Date: 1 July 2019
Conference Dates: 11-13 December 2019
