Overview

As digital transformation continues, everyday technologies will fundamentally change: they will become proactive, autonomous, and increasingly opaque to humans. Taking production management as an example, this research project examines how cooperation between humans and algorithmic agents can and ought to be designed. Potential designs will be examined with regard to three potentially competing objectives: performance, satisfaction, and accountability. In general, we will create examples of different types of human-algorithm cooperation and explore their impact on the efficiency and effectiveness of the results, on the work satisfaction and wellbeing of the humans involved, and on societal and regulatory implications. While production management serves as an example, the project addresses the broader question of how to design human-algorithm cooperation that balances the priorities of industry, workers, and society at large.

Objective

This project is divided into three main research fields. These fields correspond to three models we want to develop for the design of cooperation between human and non-human agents. In parallel with developing the individual models, we will reveal their interdependencies and address cross-cutting questions. The aim is to evaluate all three models in a common setting (explorative study). The main concern is to treat each model not in isolation, but as an integral part of the others.

Performance model

Production management includes dispositive production factors such as planning, monitoring, and control. The higher the decision level (from operational to tactical to strategic), the less algorithmic decision support is available to a human production manager. This is mainly due to decreasing predictability and increasing risk at higher decision levels (Dhar, 2016). In this research field, we want to identify, in particular, the areas where learning algorithms outperform experienced production managers and where cooperation is most productive. Given that intelligent systems perform well within a specific domain but are of little or no use outside it, we expect machines to be unsuited to certain decision tasks. The critical question is the role of the human in this performance-oriented cooperation: Will humans be reduced to the physical execution of algorithmic decisions (back to Taylorism), or will they perform meta-tasks such as parameter monitoring (forward to New Work)? Our goal is to find the technical-organizational optimum for the division of tasks between human and algorithmic agents.
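
To make the intended division of tasks more concrete, the following minimal sketch (Python, purely illustrative) shows one possible allocation rule based on the predictability and risk of a decision. The Decision class, its attributes, and the thresholds are hypothetical assumptions introduced only for illustration; they are not part of the project's actual models or results.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical production-management decision (illustrative only)."""
    name: str
    level: str             # "operational", "tactical", or "strategic"
    predictability: float  # 0.0 (unpredictable) .. 1.0 (fully predictable)
    risk: float            # 0.0 (negligible) .. 1.0 (critical)

def allocate(decision: Decision,
             min_predictability: float = 0.7,
             max_risk: float = 0.3) -> str:
    """Toy allocation rule: delegate to the algorithmic agent only when
    predictability is high and risk is low; otherwise keep the human
    production manager in the loop (assumed thresholds, not project findings)."""
    if decision.predictability >= min_predictability and decision.risk <= max_risk:
        return "algorithmic agent"
    return "human production manager"

# Example: operational decisions tend to be delegable, strategic ones less so.
print(allocate(Decision("lot sizing", "operational", 0.9, 0.1)))        # algorithmic agent
print(allocate(Decision("capacity expansion", "strategic", 0.3, 0.8)))  # human production manager
```

In the project itself, such an allocation would of course not rest on fixed thresholds but would be derived empirically from where learning algorithms actually outperform experienced production managers; the sketch only illustrates the underlying idea of predictability- and risk-dependent task division.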

Satisfaction model

Human-algorithm cooperation must be optimized not only with regard to performance but also in terms of providing meaningful work to all humans involved. To date, automation has fundamentally changed work settings, leading to profound changes in the emotional and cognitive responses to work. For example, classic automation forced people into supervisory control positions, where work satisfaction is no longer created by participating in production itself, but by solving complex problems arising from failures of the automation. Instead of following automation's notion of completely substituting for the human in the process (which mostly fails), the leading model must be to determine meaningful forms of human-algorithm cooperation. The overarching question is: How should human-algorithm cooperation be designed to create fulfilling and meaningful work settings?

The main goal of this research field is to create a better understanding of the experiential costs of, and potential strategies for, modeling interaction with algorithmic agents along the lines of human-algorithm cooperation. Since algorithms may become an inevitable part of the work environment, the impact of their particular embedding into work must be scrutinized not only in terms of functional or efficiency gains, but also in terms of its positive or negative effects on job satisfaction and wellbeing.

Accountability model

The development and deployment of AI-based decision-making and optimization systems raise a number of epistemological, ethical, and social issues. Critical research in algorithm studies and science and technology studies has voiced concerns regarding potential biases and discrimination in algorithmic systems in general and AI in particular. While such systems are increasingly built into the fabric of everyday life, there are no agreed-upon methods to assess their sustained effects on human populations. The development of autonomous algorithmic decision systems for production planning and management provides a research context in which theoretical as well as practical approaches can be explored for giving human actors the means to contest and (partially) control algorithmic decisions by rendering them interpretable, explainable, accountable, and transparent. It is thus necessary to design algorithmic agents that can be critically scrutinized in practice. The goal is to gain a better understanding of the situated requirements for the interpretability, explainability, and accountability of autonomous algorithm-based decisions in production management, and to explore strategies for providing human users with means to monitor, contest, and intervene in algorithmic decision-making procedures.