For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI

Saved in:
Bibliographic Details
Published in: xAI (1st : 2023 : Lisbon) Explainable Artificial Intelligence ; Part 3
Main Author: Kuhl, Ulrike (Author)
Other Authors: Artelt, André (Author), Hammer, Barbara (Author)
Pages: 3
Format: Unknown
Language: English
Published: 2023
Subjects:
Title Year Author
Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification 2023 Tapia, Carlos Gómez
State Graph Based Explanation Approach for Black-Box Time Series Model 2023 Huang, Yiran
Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods 2023 Vilone, Giulia
The Importance of Distrust in AI 2023 Peters, Tobias M.
LUCID-GAN: Conditional Generative Models to Locate Unfairness 2023 Algaba, Andres
Weighted Mutual Information for Out-Of-Distribution Detection 2023 Bernardi, Giacomo De
Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks 2023 Ballout, Mohamad
From Black Boxes to Conversations: Incorporating XAI in a Conversational Agent 2023 Nguyen, Van Bach
Toward Inclusive Online Environments: Counterfactual-Inspired XAI for Detecting and Interpreting Hateful and Offensive Tweets 2023 Qureshi, Muhammad Deedahwar Mazhar
Causal-Based Spatio-Temporal Graph Neural Networks for Industrial Internet of Things Multivariate Time Series Forecasting 2023 Miraki, Amir
Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI 2023 Donoso-Guzmán, Ivania
Concept Distillation in Graph Neural Networks 2023 Magister, Lucie Charlotte
Adding Why to What? Analyses of an Everyday Explanation 2023 Terfloth, Lutz
Leveraging Group Contrastive Explanations for Handling Fairness 2023 Castelnovo, Alessandro
Explainable Machine Learning via Argumentation 2023 Prentzas, Nicoletta
Improving Local Fidelity of LIME by CVAE 2023 Yasui, Daisuke
Integrating GPT-Technologies with Decision Models for Explainability 2023 Goossens, Alexandre
Evaluating Self-attention Interpretability Through Human-Grounded Experimental Protocol 2023 Bhan, Milan
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI 2023 Schlegel, Udo
For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI 2023 Kuhl, Ulrike