for Information Systems
and Digital Innovation
Place and Time: Moorweidenstr. 18, Room 0005.1, Tuesday, 25 June 2024, 10 am to 12 pm
Talk 1: Title: Does AI Monitoring Kill Innovation? The Impact of AI Autonomy on Innovative Behavior at Work.
Abstract: Artificial intelligence (AI) monitoring systems are the next frontier of monitoring technology, with innovative applications in management that can also induce stress through the system’s ability to make decisions autonomously. Prior literature on technology-induced stress, or technostress, has operated under the assumption that the subject of a system is also an active user. For AI monitoring systems this is different. Rather, AI’s autonomy gives rise to a new set of stressors rooted in AI’s unique ability to make decisions and act upon them. In a series of experiments, we explore how AI autonomy affects innovative work outcomes and how this relationship is affected by AI stressors. First, we test and establish our research model consisting of four AI stressors (Study 1). Second, we test the model using a different experimental scenario to increase realism and explore the model’s sensitivity (Study 2). Third, we confirm the overall effect on behavioral outcomes (Study 3). Overall, we find two competing effects, driven by AI delegation stressors and AI deployment stressors. We contribute to the literature on technostress and algorithmic management.
Bio: Karl Werder is an Associate Professor at the IT University of Copenhagen. His research interests include the development of digital technology, designing artificial intelligence for innovation, and organizing for digital innovation. His work has been published or is forthcoming in leading journals in information systems (e.g., Journal of Management Information Systems, Journal of the Association for Information Systems, Journal of Strategic Information Systems), software engineering (e.g., IEEE Transactions on Software Engineering, Information & Software Technology), and management (e.g., California Management Review), among others.
Talk 2: Title: The Adverse Generativity Arms Race: A Case Study of an E-Sports Platform Sustaining Legitimacy in the Face of Cheating
Abstract: Platform ventures continually innovate, and this often results in unintended, adverse behavior on the platform – what we refer to as “adverse generativity.” Platform ventures must continually contend with this adverse generativity to build and maintain legitimacy. To explore how platform ventures sustain pragmatic legitimacy in the face of adverse generativity, we report on a case study of a leading esports platform over an 18-year period and how it dealt with increasingly inventive forms of cheating on the platform. Our study suggests that adverse generativity evolves in an arms race, of sorts, and addressing it requires a cumulative effort of sociotechnical actions. We develop a process model of how platform ventures sustain pragmatic legitimacy in the face of adverse generativity by continuously iterating through containment, deterrence, and engagement, and we discuss implications for the literature on digital platforms and organizational legitimacy.
Bio: Julian Lehmann is an Assistant Professor of Information Systems at Arizona State University. He studies the strategic value of digital technology in startup and incumbent firms.