Extracting Self-Consistent Causal Insights from Users Feedback with LLMs and In-context Learning

Microsoft Windows Feedback Hub is designed to receive customer feedback on a wide variety of subjects, including critical topics such as power and battery. Feedback is one of the most effective ways to gain insight into users' experience with Windows and its ecosystem. However, the sheer volume of feedback received by Feedback Hub makes it immensely challenging to diagnose the actual cause of reported issues. To better understand and triage issues, we leverage Double Machine Learning (DML) to associate users' feedback with telemetry signals. One of the main challenges we face in the DML pipeline is the need for domain knowledge in model design (e.g., the causal graph), which is sometimes unavailable or hard to obtain. In this work, we take advantage of the reasoning capabilities of Large Language Models (LLMs) to generate a prior model that, to some extent, compensates for the lack of domain knowledge and can be used as a heuristic for measuring feedback informativeness. Our LLM-based approach is ...
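The abstract's core estimation step is Double Machine Learning. A minimal sketch of the DML partialling-out idea on synthetic data follows; the variable names, data-generating process, and model choices are illustrative assumptions, not the authors' actual pipeline:

```python
# Sketch of DML partialling-out: estimate the effect of a "treatment"
# telemetry signal T on a feedback outcome Y, controlling for
# confounders X. Synthetic data with a known true effect of 2.0.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                  # confounders (illustrative)
T = X[:, 0] + rng.normal(size=n)             # treatment depends on X
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)   # outcome, true effect = 2.0

# Cross-fitted residualization: partial X out of both T and Y,
# predicting each test fold with models fit on the other fold.
rT = np.zeros(n)
rY = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    mT = GradientBoostingRegressor().fit(X[train], T[train])
    mY = GradientBoostingRegressor().fit(X[train], Y[train])
    rT[test] = T[test] - mT.predict(X[test])
    rY[test] = Y[test] - mY.predict(X[test])

# Final stage: residual-on-residual regression gives the effect estimate.
theta = (rT @ rY) / (rT @ rT)
print(theta)  # should be close to the true effect of 2.0
```

The paper's contribution, per the abstract, is supplying the causal-graph prior for this kind of pipeline from an LLM when domain knowledge is missing; the sketch above only shows the downstream DML estimation step.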


Bibliographic Details
Main Authors: Abdali, Sara, Parikh, Anjali, Lim, Steve, Kiciman, Emre
Format: Article in Journal/Newspaper
Language: unknown
Published: arXiv 2023
Subjects:
DML
Online Access: https://dx.doi.org/10.48550/arxiv.2312.06820
https://arxiv.org/abs/2312.06820
Institution: Open Polar
Collection: DataCite Metadata Store (German National Library of Science and Technology)
Topics: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG); Methodology (stat.ME); FOS: Computer and information sciences
Rights: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode