The mood among coalition partners in multiparty governments may vary over time. The enthusiasm and unity following government formation may turn into frustration and anger when collaboration in government is not working well. Yet such systematic, time-varying differences between multiparty governments have hardly been taken into account when assessing their success and stability. In this paper, we analyse how the coalition mood, defined as the atmosphere between government parties, affects government stability. We use applause for coalition partners in plenary debates as a time-varying, party-specific measure of the coalition mood grounded in nonverbal behaviour in the legislature. Based on an automated analysis of over 200,000 plenary debates in Germany (1991-2013) and Austria (2003-2018), we show 1) that the initial level of support for a new coalition government varies significantly across cabinets and 2) that a declining coalition mood indicates a higher risk of early government termination.
Politicians have strong incentives to use their communication to positively impress and persuade voters (McGraw, 2003; Landtsheer et al., 2008). One important question that persists within the fields of political science, communication, and psychology, however, is whether appearance or substance matters more during political campaigns. To a large extent, this appearance-versus-substance question remains open, and, crucially, the notion that appearance can in fact effectively sway voter perceptions is consequential for the health of democracy. This study leverages state-of-the-art advances in machine learning, computer vision, and computational linguistics to expand our knowledge of how visual, tonal, and nonverbal elements of political speech-making influence how voters construct and maintain impressions of political actors. We rely on video and transcripts from the 4th Republican Party presidential debate, held on 10 November 2015, as well as continuous response approval data from a live focus group (n=311; 62,229 reactions), to determine how candidates' emotional and stress displays influence voter impression formation.
Automated text analyses have become a cornerstone of political research. Following the interest in the ideological leaning and substantive focus of political texts, scholars have made commendable efforts to tease out ever more subtle information from texts, such as patterns of political conflict. Despite their undeniable value, these analyses often engage with data that are unnecessarily remote from the concept of interest. One example is the study of legislative speech, where crucial information is lost in transcription. We argue that automated video analyses constitute an important way forward, as video data can pick up more of the relevant cues. To substantiate this claim, we analyze video footage from the US Senate in two case studies. The studies take on the issue of dramatization in legislative speech, specifically dramatization in high- versus low-stakes debates and interpersonal differences in oratory quality. We leverage state-of-the-art machine learning approaches that incorporate a variety of cues, such as body pose, facial expression, and vocal pitch. We evaluate the proposed architecture against manually annotated video footage and assess how textual measures compare to our estimates.