Please use this identifier to cite or link to this item: http://theses.ncl.ac.uk/jspui/handle/10443/3864
Title: Shifting embodied participation in multiparty university student meetings
Authors: Chen, Qi.
Issue Date: 2017
Publisher: Newcastle University
Abstract: Student group work has been used in higher education as an effective means to cultivate students’ work-related skills and cooperative learning. These small-group encounters are the sites where, through talk and other resources, university students get their educational tasks done as well as acquire essential workplace skills such as problem-solving, team working, decision-making and leadership. However, settings of educational talk-as-work, such as student group meetings, remain under-researched (Stokoe, Benwell, & Attenborough, 2013). The present study therefore attempts to bridge this gap by investigating the professional and academic abilities of university students to participate in multiparty group meetings, drawing upon a dataset of video- and audio-recorded meetings from the Newcastle University Corpus of Academic English (NUCASE). The dataset consists of ten hours of meetings in which a group of naval architecture undergraduate students work cooperatively on their final year project – to design and build a wind turbine. The study applies the methodological approach of conversation analysis (CA) with a multimodal perspective. It presents a fine-grained, sequential multimodal analysis of a collection of cases of speaker transitions, and reveals how meeting participants display speakership and recipiency through their verbal/vocal and bodily-visual coordination. In this respect, the present study is the first to offer a systematic collection, as well as a thorough investigation, of speaker transition and turn-taking practices from a multimodal perspective, especially with a scope of analysis extending beyond pre-turn and turn-beginning positions. It shows how speaker transitions through ‘current speaker selects next’ and ‘next speaker self-selects’ are joint undertakings not only between the self-selecting/current speaker and the target recipient/addressed next speaker, but also among other co-present participants.
In particular, by mobilising the whole set of multimodal resources, participants are able to display their multiple orientations toward their co-participants; project, pursue and accomplish multiple courses of action concurrently; and intricately coordinate their mutual orientation toward the shifting and emerging participation framework during the transition, establishment and maintenance of speakership and recipiency. Through the data and analysis presented, this study extends the boundaries of existing understanding of the temporality, sequentiality and systematicity of multimodal resources in talk-and-bodies-in-interaction. The thesis also contributes to interaction research in the particular context of student group work in higher education, by providing a ‘screenshot’ of students’ academic lives as they unfold ‘in flight’. Specifically, it reveals how students competently participate in multiparty group meetings (e.g., taking and allocating turns), co-construct the unfolding meeting procedures (e.g., roundtable update discussion), and jointly achieve local interactional goals (e.g., sharing work progress, reaching an agreement). Acquiring such skills is, as argued above, not only crucial for accomplishing educational tasks, but also necessary for preparing university students to fulfil their future workplace expectations. The study therefore further informs the practices of university students and professional practitioners in multiparty meetings, and also draws out methodological implications for multimodal CA research.
Description: PhD Thesis
URI: http://hdl.handle.net/10443/3864
Appears in Collections:School of Education, Communication and Language Sciences

Files in This Item:
File               Description  Size      Format
Chen, Q 2017.pdf   Thesis       49.63 MB  Adobe PDF  View/Open
dspacelicence.pdf  Licence      43.82 kB  Adobe PDF  View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.