ToM QA Dataset: Evaluating Theory of Mind in Question Answering

The ToM QA Dataset is designed to evaluate question-answering models' ability to reason about beliefs. It includes 3 task types and 4 question types, yielding 12 task-question combinations. The dataset is inspired by theory-of-mind experiments in developmental psychology and tests whether models can track beliefs, including false beliefs that are inconsistent with the true state of the world.

Detailed Introduction

The ToM QA Dataset, introduced in the EMNLP 2018 paper 'Evaluating Theory of Mind in Question Answering', provides a comprehensive set of scenarios to test question-answering models. The dataset includes first-order and second-order belief questions, as well as memory and reality questions, to ensure models have a correct understanding of the state of the world and others' beliefs. It is available in four versions: easy with noise, easy without noise, hard with noise, and hard without noise.
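
To make this structure concrete, the sketch below builds a Sally-Anne-style false-belief story and pairs it with the four question types described above. The story wording, class names, helper function, and answers are illustrative assumptions for this page, not the official data format of the EMNLP 2018 release.

```python
# Illustrative sketch of a ToM QA-style item (hypothetical wording and helpers,
# not the official data format from the EMNLP 2018 release).
from dataclasses import dataclass


@dataclass
class ToMExample:
    story: list[str]                         # ordered story sentences
    questions: dict[str, tuple[str, str]]    # question_type -> (question, answer)


def make_false_belief_example() -> ToMExample:
    """Build a Sally-Anne-style false-belief scenario.

    Sally places an object and leaves; Anne moves it while Sally is away,
    so Sally's belief about the object's location is now false.
    """
    story = [
        "Sally entered the kitchen.",
        "Anne entered the kitchen.",
        "Sally put the apple in the basket.",
        "Sally exited the kitchen.",          # Sally no longer observes the room
        "Anne moved the apple to the box.",   # creates the false belief
    ]
    questions = {
        # Memory: where was the object at the start of the story?
        "memory": ("Where was the apple at the beginning?", "basket"),
        # Reality: where is the object now?
        "reality": ("Where is the apple really?", "box"),
        # First-order belief: where does Sally think the object is?
        "first_order": ("Where will Sally look for the apple?", "basket"),
        # Second-order belief: where does Anne think Sally will look?
        "second_order": ("Where does Anne think that Sally will look for the apple?", "basket"),
    }
    return ToMExample(story=story, questions=questions)


if __name__ == "__main__":
    example = make_false_belief_example()
    print("\n".join(example.story))
    for qtype, (question, answer) in example.questions.items():
        print(f"[{qtype}] {question} -> {answer}")
```

In a false-belief task like this one, a model that simply tracks the object's true location will get the reality question right but fail the belief questions, which is exactly the gap the dataset is meant to expose.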

More Datasets

SmartFlowAI/EmoLLM

Psychology LLM, Mental Health Large Model, Finetune, InternLM2, InternLM2.5, Qwen, ChatGLM, Baichuan, DeepSeek, Mixtral, LLama3, GLM4, Qwen2

Tobii Pro Lab: Eye Tracking Software for Behavior Research

Tobii Pro Lab is comprehensive eye-tracking software for behavioral research, supporting the full workflow from test design to data analysis.

HuggingFaceFW/fineweb-2

FineWeb-2 is the second iteration of the popular 🍷 FineWeb dataset (over 15 trillion tokens of cleaned and deduplicated English web data from CommonCrawl), bringing high-quality pretraining data to over 1,000 🗣️ languages. The 🥂 FineWeb2 dataset is fully reproducible, available under the permissive ODC-By 1.0 license, and extensively validated through hundreds of ablation experiments. In particular, on the set of 9 diverse languages used to guide its processing decisions, 🥂 FineWeb2 outperforms other popular multilingual pretraining datasets (such as CC-100, mC4, CulturaX, or HPLT) while being substantially larger, and in some cases even performs better than datasets specifically curated for a single one of these languages, on a diverse set of carefully selected evaluation tasks: FineTasks.
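
As a rough usage sketch (not taken from the dataset card), the snippet below streams one language subset with the Hugging Face datasets library. The config name fra_Latn and the text field name are assumptions based on FineWeb-2's language-script naming and should be checked against the card.

```python
# Minimal sketch: stream one FineWeb-2 language subset with the `datasets` library.
# The config name "fra_Latn" (French, Latin script) is an assumed example;
# consult the dataset card for the exact list of available configurations.
from datasets import load_dataset

fw2 = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="fra_Latn",     # one language/script subset (assumed name)
    split="train",
    streaming=True,      # avoid downloading the full subset up front
)

# Inspect a few documents without materializing the whole split.
for i, doc in enumerate(fw2):
    print(doc["text"][:200])  # "text" field assumed to hold the raw document text
    if i >= 2:
        break
```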

Keywords

ToM QA Dataset, Theory of Mind, Question Answering, Belief Reasoning, Developmental Psychology, EMNLP 2018
