Panel #5 - DISCUSSING PERFORMANCE, SATISFACTION AND ACCOUNTABILITY – AN INTERACTIVE WORKSHOP

Four fruitful panels already took place in 2020 and 2021. Panel #1 discussed the concept of ‘performance’ from a transdisciplinary perspective by identifying the different understandings of performance across disciplines. Panel #2 focused on the question of how the collaboration between humans and technology should be designed to maintain or increase human satisfaction. Panel #3 examined ways and methods to make sensor technologies interpretable and explainable. The topic of Panel #4 was the anticipation of future sensor technologies and autonomous systems as well as the futures anticipated by sensor media. These panels served as the basis for Panel #5, which was conducted as an interactive workshop.

Panel #5 took place in March 2022 and was moderated by Prof. Dr. Giuseppe Strina from the Chair of Service Development in SMEs and Crafts; his team planned and organised the event. The main goal was to explore the links between the different objectives in greater depth and to identify possible research collaborations within the University of Siegen. First, Giuseppe Strina introduced the participants to the main objectives and the structure of the workshop. Afterwards, a brief summary of the findings of the previous workshops was given. To let everyone participate actively, a Mural board was combined with Zoom breakout sessions, which stimulated a lively 30-minute discussion between discussants from various disciplines. The discussion was based on prepared intersecting theses, but participants could also formulate their own. The goal of the discussion was to identify potential research questions and paper topics. This was further supported by a radar chart serving as a relevance triangle with the three dimensions “performance”, “satisfaction”, and “accountability”, which the groups used to rank the importance of these dimensions for the paper topics they created. After the group discussions, the results were presented and discussed in the plenum.
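Purely as an illustration (and not part of the workshop materials), a minimal Python/matplotlib sketch of how such a three-dimensional relevance triangle could be drawn as a radar chart might look as follows; the dimension ratings shown are hypothetical placeholders.

```python
# Minimal sketch of a "relevance triangle" as a radar chart.
# The ratings are purely illustrative placeholders (1 = low, 5 = high relevance).
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["performance", "satisfaction", "accountability"]
ratings = [4, 2, 3]  # hypothetical scores for one paper topic

# Compute one angle per dimension and close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]
values = ratings + ratings[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 5)
ax.set_title("Relevance triangle for a paper topic (illustrative)")
plt.show()
```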

Two focus groups were formed for the discussion.

The first focus group, moderated by Tobias Schmallenbach, began by selecting five of the prepared theses, which touched on all three objectives “performance”, “satisfaction” and “accountability”. The focus quickly shifted to the addition of context orientation and learning effects. The group discussed the high importance of analysing the negotiation phase in order to adapt the design and usability of a technology satisfactorily and successfully. It was stated that “accountability” cannot always be fully clarified in advance, as it evolves in the process. For this reason, it might be relevant to integrate the question of “accountability” and contextual factors already in the formation process of the technology. A distinction was made between aspects that can possibly be predefined and aspects that should be taken into the daily, situated negotiation space of the involved parties. Based on this discussion, the first group formulated the research question “How can AI be concretely designed to enable negotiation situations?”. From this question the following possible paper topic was derived: “Conceptual framework of learning technologies in terms of accountability”. In the radar chart, the focus here was placed more on “performance” than on the other two dimensions. For the further research questions “How can learning technologies be made accountable?” and “How can the use of AI be designed so that decisions appear transparent and comprehensible to the user and learning success is evident for the user?”, three paper topics were defined: “Social forms of scrutiny or questioning of technologies”, “Participatory Negotiations of Contradicting Values in Design” and “Designing for Contestability: How to account for unseen problems”. For these, the radar chart placed the focus on “satisfaction” and “accountability”.

The second focus group was moderated by Julian Ruf. While the given statements were a good entry into the discussion, the participants quickly decided to add their own views on the topic. One perspective that was highlighted is “the necessity to highlight the past and take history into consideration”: what changed, for example, during the industrial revolution, and what can we learn from the past? Furthermore, it was clarified that one of the first steps should be identifying the different actors, parties and, more generally, stakeholders, which, at least in Germany, is also a highly juridical question: “Will AIs be their own corporate body? Which role will the German government play? Will Germany be suspended by other countries due to its social compatibility?”. Based on these questions, two research questions were formulated. First: “When talking about data and algorithms, how do you deal with the expectation that actors do NOT want to be held accountable (Corona as paradigm / climate protection?).” Second: “Is there a conflict of goals between social compatibility (duties of care) and technological development in the world (Germany)? A question of values? (Performance)”

Both the discussion and the questions led to the following paper topics: 1. “AI Usage in Organizations – The Phenomenon of Accountability Rejection” and 2. “Average People don’t exist: But Averages are based on common goals – the role of data and algorithms”. The first paper topic focuses more on performance and accountability, while the second aims at a balance of all three aspects: performance, accountability, and satisfaction.

We want to thank all participants for their collaboration and the fruitful discussions.

Panel #5 further clarified that there are several interdependencies between the different dimensions as well as further factors to consider. The discussion focused on the importance of the negotiation and design process of technologies. Furthermore, it was discussed that the question of accountability affects several areas when talking about data and algorithms. The subsequent panel is also planned as an interactive workshop to identify concrete transdisciplinary collaborations within the University of Siegen. In future panels, we aim to explore questions such as:

- How can learning algorithms be made accountable?

- How to deal with the expectation of involved parties that they do not want to be held accountable when operating with learning algorithms?

- How can transdisciplinary research realize the full potential between research and practice experts?

- How can fruitful, concrete collaborations be built within the University of Siegen which combine common research foci for common ground as well as contrasting views for valuable research findings?


Group 1 participants:

  • Tobias Schmallenbach (Fak. III)

  • Giuseppe Strina (Fak. III)

  • Markus Burkhardt (Fak. I)

  • Claudia Müller (Fak. III)

  • Carolin Gerlitz (Fak. I)

  • Tim Weiler (Fak. I)

  • Marc Hassenzahl (Fak. III)

  • Daria Huge Siwe Huwe (Fak. III)

  • Christophe Said (Fak. III)

Group 2 participants:

  • Philipp Julian Ruf (Fak. III)

  • Kevin Krause (Fak. III)

  • Sven Wolff (Fak. III)

  • Beatrice Ernst (Fak. III)

  • Marc Goerigk (Fak. III)

  • Andreas Kolb (Fak. IV)

  • Erhard Schüttpelz (Fak. I)

  • Matthias Vogel (Fak. III)

  • Daniela Mysliwietz-Fleiß (Fak. I)


Group 1 – Research Questions

• How can AI be concretely designed to enable negotiation situations?

• How can learning technologies be made accountable?

• How can the use of AI be designed so that decisions appear transparent and comprehensible to the user and learning success is evident for the user?

Group 1 – Paper Topics

• Conceptual framework of learning technologies in terms of accountability

• Social forms of scrutiny or questioning of technologies

• Participatory Negotiations of Contradicting Values in Design

• Designing for Contestability: How to account for unseen problems


Group 2 – Research Questions

• When talking about data and algorithms, how do you deal with the expectation that actors do NOT want to be held accountable (Corona as paradigm / climate protection?).

• Is there a conflict of goals between social compatibility (duties of care) and technological development in the world (Germany)? A question of values? (Performance)

Group 2 – Paper Topics

• AI Usage in Organizations – The Phenomenon of Accountability Rejection

• Average People don’t exist: But Averages are based on common goals – the role of data and algorithms.

Report written by G. Strina, P. J. Ruf, and D. Huge Siwe Huwe.
