Short Bio
Karn Watcharasupat (she/her) is a PhD student in Music Technology at the Music Informatics Group, Georgia Institute of Technology. Her research interests are broadly in machine learning and signal processing for audio and music applications, with a current emphasis on music source separation. Karn received her Master of Science in Electrical and Computer Engineering from Georgia Tech in 2024, and her Bachelor of Engineering (Highest Honours) in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, in 2021. She is currently a Google PhD Fellow in Machine Perception (2024) and an IEEE Signal Processing Society Scholar (2023, 2024), and was previously an AAUW International Doctoral Degree Fellow (2023-2024). Karn is a co-inventor of a granted patent on asynchronous array source separation and has published more than 20 peer-reviewed works in international venues, including IEEE Open Journal of Signal Processing, IEEE Signal Processing Letters, and the IEEE International Conference on Acoustics, Speech, and Signal Processing.
Full Bio
Karn Watcharasupat (she/her) is a PhD student in Music Technology at the Music Informatics Group, Georgia Institute of Technology, USA. She is currently a recipient of the Google PhD Fellowship in Machine Perception (2024) and the IEEE Signal Processing Society Scholarship (2023 & 2024). She previously held the 2023-2024 AAUW International Doctoral Degree Fellowship. Her research interests are broadly in machine learning and signal processing for audio and music applications, currently focusing on audio source separation.
Karn received her Master of Science in Electrical and Computer Engineering from Georgia Tech in 2024, and her Bachelor of Engineering (Highest Honours) in Electrical and Electronic Engineering from Nanyang Technological University (NTU), Singapore, in 2021. During her undergraduate studies, she was a recipient of the Nanyang Scholarship under the CN Yang Scholars Programme. She was awarded the Lee Kuan Yew Gold Medal and the Association of Consulting Engineers Singapore Gold Medal for graduating at the top of her class.
In 2025, she was an Inclusive Audio Technology Intern with Apple, working cross-functionally with the Acoustics Machine Learning and Acoustics User Studies teams. Previously, she worked with Netflix’s Audio Algorithms team in 2023 and 2024, leading the early development of their cinematic audio source separation system. Prior to her doctoral studies, she worked in the NTU Digital Signal Processing and Smart Nation Translational Laboratories (2020-2022), the NTU Media Technology Laboratory (2018-2021), and Aevice Health (2020), in addition to a visiting collaboration with the Music Informatics Group (2020-2022).
Karn has authored and co-authored a granted patent on distributed array source separation, a patent application on soundscape augmentation, and more than 25 peer-reviewed publications in venues including the International Society for Music Information Retrieval (ISMIR) Conference, the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE Signal Processing Letters, IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), IEEE Open Journal of Signal Processing, and IEEE Transactions on Affective Computing. Her work has applied signal processing, machine learning, and deep learning to tasks such as audio source separation, music information retrieval, sound event localization and detection, speech enhancement, representation learning, and audio content analysis.