tartuNLP/reddit-anhedonia: Reddit Anhedonia Dataset (hf-mirror)

Detailed Introduction

Focusing on the PRIMATE dataset, our study reveals concerns about annotation validity, particularly for the 'lack of interest or pleasure' symptom. Through re-annotation by a mental health professional, we introduce finer-grained labels and textual spans as evidence, identifying a notable number of false positives. Our refined annotations provide a higher-quality test set for anhedonia detection. This study underscores the necessity of addressing annotation quality issues in mental health datasets, advocating improved methodologies to enhance the reliability of NLP models in mental health assessments.

A mental health professional (MHP) read all the posts in the subset and labelled them for the presence of loss of interest or pleasure (anhedonia). The MHP assigned three labels to each post: a) 'mentioned' if the symptom is talked about in the text but its duration or intensity cannot be inferred; b) 'answerable' if there is clear evidence of anhedonia; c) 'writer's symptoms', which indicates whether the author of the post discusses themselves or a third person. Additionally, the MHP selected the part of the text that supports a positive label.
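The three-label annotation scheme described above can be modeled as a small record type. A minimal sketch in Python, assuming hypothetical field names (`post_id`, `mentioned`, `answerable`, `writers_symptoms`, `evidence_span`) that may not match the actual column names in the released dataset:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring the MHP annotation scheme described above;
# the real dataset's column names may differ.
@dataclass
class AnhedoniaAnnotation:
    post_id: str
    mentioned: bool         # symptom is talked about, duration/intensity unclear
    answerable: bool        # clear evidence of anhedonia
    writers_symptoms: bool  # True if the author describes their own symptoms
    evidence_span: Optional[str] = None  # text supporting a positive label

def is_positive_example(a: AnhedoniaAnnotation) -> bool:
    """Count an example as a positive anhedonia case only when there is
    clear evidence ('answerable') and the writer describes themselves."""
    return a.answerable and a.writers_symptoms

# Usage: filter a batch of annotations down to positive cases.
batch = [
    AnhedoniaAnnotation("p1", mentioned=True, answerable=True,
                        writers_symptoms=True, evidence_span="I feel nothing"),
    AnhedoniaAnnotation("p2", mentioned=True, answerable=False,
                        writers_symptoms=True),
]
positives = [a for a in batch if is_positive_example(a)]
```

The separation of 'mentioned' from 'answerable' lets evaluation code distinguish posts that merely touch on the symptom from those providing clear clinical evidence, which is the distinction the re-annotation effort targets.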

Datasets

t4d: Conversion Algorithm from ToMi to T4D Dataset

This project implements the conversion algorithm from the ToMi dataset to the T4D (Thinking is for Doing) dataset, as introduced in the paper https://arxiv.org/abs/2310.03051. It filters examples with Theory of Mind (ToM) questions and adapts the algorithm to account for second-order false beliefs.

HeartLink - Empathetic Psychological Model

HeartLink is an empathetic psychological support model: a large language model fine-tuned on a large empathetic Q&A dataset. It perceives users' emotions and experiences during conversations and responds empathetically, drawing on rich psychological knowledge to understand, comfort, and support users. Responses include emoji expressions to reduce distance from users, offering psychological support and help during consultations.

HuggingFaceFW/fineweb-2

FineWeb-2 is the second iteration of the popular 🍷 FineWeb dataset (over 15 trillion tokens of cleaned and deduplicated web data from CommonCrawl), extending its high-quality pretraining data to over 1000 🗣️ languages. The 🥂 FineWeb2 dataset is fully reproducible, available under the permissive ODC-By 1.0 license, and extensively validated through hundreds of ablation experiments. In particular, on the set of 9 diverse languages used to guide the processing decisions, 🥂 FineWeb2 outperforms other popular multilingual pretraining datasets (such as CC-100, mC4, CulturaX, or HPLT) while being substantially larger, and in some cases it even performs better than datasets specifically curated for a single one of these languages, on a diverse set of carefully selected evaluation tasks: FineTasks.

Categories

Keywords

Reddit Anhedonia, Dataset, Hugging Face, PRIMATE, tartuNLP
