
LE05

17:10 - 18:50 Thursday, 20th June, 2019

PFC/02/025

Track Legislative Studies and Party Politics

Presentation type Panel


LE05 Non-verbal data in legislative studies

Chair

Rebecca Glazier
University of Arkansas at Little Rock, USA

Discussant

Nolan McCarty
Princeton University, USA

459 If you’re happy and you know it, clap your hands: measuring the coalition mood using nonverbal communication in the legislature

Michael Imre1, Alejandro Ecker2, Thomas Meyer3, Wolfgang C. Müller3
1University of Mannheim, Germany. 2MZES, University of Mannheim, Germany. 3University of Vienna, Austria

Abstract

The mood among coalition partners in multiparty governments may vary over time. The enthusiasm and unity that follow a successful government formation may turn into frustration and anger when collaboration in government falters. Yet such systematic, time-varying differences between multiparty governments have rarely been taken into account when assessing their success and stability. In this paper, we analyse how the coalition mood, defined as the atmosphere between government parties, affects government stability. We use applause for coalition partners in plenary debates as a time-varying, party-specific measure of the coalition mood, based on nonverbal behaviour in the legislature. Drawing on an automated analysis of over 200,000 plenary debates in Germany (1991-2013) and Austria (2003-2018), we show (1) that the initial level of support for a new coalition government varies significantly across cabinets and (2) that a declining coalition mood indicates a higher risk of early government termination.


1231 Facing the Electorate: Computational Approaches to the Study of Nonverbal Communication and Voter Impression Formation

Constantine Boussalis1, Travis Coan2
1Trinity College Dublin, Ireland. 2University of Exeter, United Kingdom

Abstract

Politicians have strong incentives to use their communication to positively impress and persuade voters (McGraw, 2003; Landtsheer et al., 2008). One persistent question in political science, communication, and psychology, however, is whether appearance or substance matters more during political campaigns. To a large extent, this appearance-versus-substance question remains open, and, crucially, the notion that appearance can in fact effectively sway voter perceptions has consequences for the health of democracy. This study leverages state-of-the-art advances in machine learning, computer vision, and computational linguistics to expand our knowledge of how visual, tonal, and nonverbal elements of political speech-making influence how voters construct and maintain impressions of political actors. We rely on video and transcripts from the 4th Republican Party presidential debate, held on 10 November 2015, as well as continuous-response approval data from a live focus group (n=311; 62,229 reactions), to determine how political candidates' displays of emotion and stress influence voter impression formation.


202 Beyond text as data: Computer vision in political research

Dominic Nyhuis1, Lion Behrens2, Thomas Gschwend2, Tobias Ringwald3, Saquib Sarfraz3, Rainer Stiefelhagen3
1Leibniz University Hannover, Germany. 2University of Mannheim, Germany. 3Karlsruhe Institute of Technology, Germany

Abstract

Automated text analysis has become a cornerstone of political research. Following early interest in the ideological leaning and substantive focus of political texts, scholars have made commendable efforts to tease out ever more subtle information from texts, such as patterns of political conflict. Despite their undeniable value, these analyses often engage with data that are unnecessarily remote from the concept of interest. One example is the study of legislative speech, where crucial information is lost in transcription. We argue that automated video analysis constitutes an important way forward, as video data can pick up more relevant cues. To substantiate this claim, we analyze video footage from the US Senate in two case studies. The studies address dramatization in legislative speech, specifically dramatization in high- versus low-stakes debates and individual differences in oratory quality. We leverage state-of-the-art machine learning approaches that incorporate a variety of cues, such as body pose, facial expression, and vocal pitch. We evaluate the proposed architecture against manually annotated video footage and assess how textual measures compare to our estimates.