
Trinity College Dublin (Ireland) 2021 Postdoctoral Position in Multimodal Interaction

Source: unknown | Author: admin | Date: 2021-03-17 10:50



Research Fellow in Multimodal Interaction

Trinity College Dublin

Description

Post Summary

The Science Foundation Ireland ADAPT Research Centre (adaptcentre.ie) seeks to appoint a Research Fellow in Multimodal Interaction. The successful candidate will support research on online interaction in teaching scenarios, in the context of the recently funded SFI COVID-19 Rapid Response project RoomReader, led by Prof. Naomi Harte in TCD and Prof. Ben Cowan in UCD. The candidate will work with a team to drive research into multimodal cues of engagement in online teaching scenarios. The work involves collaboration with Microsoft Research Cambridge and Microsoft Ireland.

The candidate should have extensive experience in speech-based interaction and in modelling approaches using deep learning with multimodal signals, e.g. linguistic, audio, and visual cues. The candidate will also be responsible for supporting research in a number of areas, including:

· Identifying and understanding multimodal cues of engagement in speech-based interaction

· Deep learning architectures for multimodal modelling of engagement in speech interactions

· Application and evaluation of modelling approaches to the specific case of online teaching scenarios
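As a purely illustrative sketch of the multimodal modelling idea above (not the project's actual model), engagement scoring can be framed as fusing per-modality features into a single prediction. All feature names, values and weights below are hypothetical:

```python
import math

# Toy sketch: fuse audio, visual and linguistic cues into one engagement score.
# In practice each modality would feed a learned deep network; here a single
# linear layer with made-up weights stands in for the fusion step.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(audio, visual, linguistic, weights):
    # Concatenate per-modality features into one vector (early fusion),
    # then map to an engagement probability in [0, 1].
    features = audio + visual + linguistic
    score = sum(w * f for w, f in zip(weights, features))
    return sigmoid(score)

# Hypothetical normalised cues for one student in an online class.
audio = [0.8, 0.3]        # e.g. pitch variance, speech rate
visual = [0.9]            # e.g. fraction of time gaze is on screen
linguistic = [0.6, 0.7]   # e.g. lexical richness, turn-taking rate
weights = [0.5, 0.2, 1.0, 0.4, 0.3]   # illustrative only

engagement = fuse(audio, visual, linguistic, weights)
print(round(engagement, 3))
```

A real system would replace the fixed weights with trained per-modality encoders and a learned fusion layer, but the interface (modality features in, engagement estimate out) stays the same.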

Thus, the ideal candidate will have specific expertise in speech interaction, signal processing and deep learning. Reporting to a Principal Investigator, the successful candidate will work within a larger group of Postdoctoral Researchers, PhD students and Software Developers. They will have exposure to all aspects of the project lifecycle, from requirements analysis to design, coding, testing and face-to-face demonstrations, including with our industry partners Microsoft Research and Microsoft Ireland.

The successful candidate will work alongside the best and brightest talent in speech and language technologies and video processing in the Sigmedia Research Group on a day-to-day basis. The wider ADAPT Research Centre will give exposure to a broad range of technologies, including data analytics, adaptivity, personalisation, interoperability, translation, localisation and information retrieval. As a university-based research centre, ADAPT also strongly supports continuous professional development and education. In this role you will develop as a researcher, both technically and scientifically. In addition, ADAPT will support candidates in enhancing their confidence, leadership skills and communication abilities.

Standard Duties and Responsibilities of the Post

· Identify and analyse research papers in online human interaction scenarios, specifically those relevant to online teaching

· Identify existing datasets suitable for baseline analysis of multimodal interaction

· Support the design and capture of a new multimodal data corpus (the actual capture task is conducted by a Research Assistant on the project)

· Develop and adapt deep learning architectures to multimodal interaction scenarios, subsequently adapting the approaches to the specifics of online teaching interactions

· Liaise with engineering and HCI experts to refine and influence approaches to the project at all levels

· Report regularly to the PI of the project, and interact regularly with other team members to maintain momentum in the project

· Record datasets, and subsequently edit and label them for project deployment

· Publish and present results from the project in leading journals and at conferences

Funding Information

The position is funded through the SFI COVID-19 Research Call 2020.

Person Specification

The successful candidate will have broad experience in deep learning architectures applied to speech-based interaction. The successful candidate is expected to:

· Have a thorough understanding of speech-based interaction, including linguistic, verbal, non-verbal and visual cues

· Be expert in deep learning applied to speech processing

· Be skilled at taking disparate research ideas and drawing innovative conclusions or seeing new solutions

· Have excellent interpersonal skills

· Be highly organised in their work, with an ability to work remotely if necessary

Qualifications

Candidates appointed to this role must have a PhD in Engineering or Computer Science, or a closely related field.

Knowledge & Experience

Essential

· Understanding of multimodal cues in speech-based interaction

· Experience in developing deep learning architectures for speech processing

· Familiarity with running large-scale experiments, e.g. on a high-performance compute farm

· Publication track record, commensurate with career stage, in high-quality conferences or journals
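Large-scale experiment runs of the kind listed above are typically scheduled as batch arrays on a compute farm. A minimal, hypothetical SLURM sketch (the script name, config paths and resource limits are placeholders, not project specifics):

```shell
#!/bin/bash
# Hypothetical SLURM array job: one task per hyper-parameter configuration.
#SBATCH --job-name=engagement-sweep
#SBATCH --array=0-9
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

# Each array task trains one model variant; SLURM_ARRAY_TASK_ID selects it.
python train.py --config "configs/run_${SLURM_ARRAY_TASK_ID}.yaml"
```

The same pattern scales from a handful of runs to hundreds by widening the `--array` range.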

Desirable

· Familiarity with the MS Teams environment

· Experience with post-production tools for video editing

· Experience mentoring junior team members

· Record of publishing code open source

Skills & Competencies

· Excellent written and oral proficiency in English (essential)

· Good communication and interpersonal skills, both written and verbal

· Proven ability to prioritise workload and work to exacting deadlines

· Flexible and adaptable in responding to stakeholder needs

· Enthusiastic and structured approach to research and development

· Excellent problem-solving abilities

· Desire to learn about new products and technologies, and to keep abreast of new technical and research developments

Sigmedia Research Group

The Signal Processing and Media Applications (aka Sigmedia) Group was founded in 1998 in Trinity College Dublin. Originally focused on video and image processing, the group today spans research across all aspects of media: video, images, speech and audio. Prof. Naomi Harte leads the Sigmedia research endeavours in human speech communication. The group has active research in audio-visual speech recognition, evaluation of speech synthesis, multimodal cues in human conversation, and birdsong analysis. The group is interested in all aspects of human interaction, centred on speech. Much of our work is underpinned by signal processing and machine learning, but we also have researchers grounded in the linguistics and psychology of speech.
