Psychology Wiki Dataset — The psychology_wiki dataset is built from English Wikipedia content in the field of psychology. Through systematic collection and curation it ensures both broad coverage and depth. Every article in the dataset is carefully filtered and annotated across several dimensions, including title, body text, relevance, popularity, and ranking, providing a rich text resource for psychology research.
The dataset contains five features: title (title), text (text), relevance (relevans), popularity (popularity), and ranking (ranking), with string and floating-point data types. It consists of a single training split of 989 samples, 12,359,374 bytes in total; the download size is 6,790,523 bytes.
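As a minimal sketch of the record schema described above (the sample values are illustrative, not drawn from the real data; the `relevans` spelling follows the dataset's feature name):

```python
# Minimal sketch of the psychology_wiki record schema described above.
# Field names follow the dataset card; sample values are illustrative.

SCHEMA = {
    "title": str,       # article title
    "text": str,        # article body
    "relevans": float,  # relevance score (spelling as in the dataset)
    "popularity": float,
    "ranking": float,
}

def validate_record(record: dict) -> bool:
    """Check that a record has exactly the five expected fields with the right types."""
    if set(record) != set(SCHEMA):
        return False
    return all(isinstance(record[k], t) for k, t in SCHEMA.items())

sample = {
    "title": "Cognitive dissonance",
    "text": "Cognitive dissonance is the mental discomfort ...",
    "relevans": 0.92,
    "popularity": 0.75,
    "ranking": 12.0,
}

print(validate_record(sample))  # True
```

A validator like this is a convenient sanity check when iterating over the 989 training samples.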
This dataset contains 20,000 labelled English tweets of depressed and non-depressed users. The data is collected using the Twitter API and includes feature extraction techniques such as topic modelling and emoji sentiment analysis. It is designed for mental health classification at the tweet level.
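To illustrate the kind of emoji sentiment features mentioned above, here is a hedged sketch at the tweet level; the emoji lists and the polarity formula are illustrative assumptions, not the dataset's actual extraction method:

```python
# Illustrative tweet-level emoji sentiment features, in the spirit of the
# feature extraction described for the depression tweet dataset.
# The emoji sets and scoring are assumptions for this example.

POSITIVE_EMOJIS = {"😀", "😄", "😊", "❤️", "👍"}
NEGATIVE_EMOJIS = {"😢", "😭", "💔", "😞", "😔"}

def emoji_sentiment_features(tweet: str) -> dict:
    """Count positive/negative emojis and derive a simple polarity score in [-1, 1]."""
    pos = sum(tweet.count(e) for e in POSITIVE_EMOJIS)
    neg = sum(tweet.count(e) for e in NEGATIVE_EMOJIS)
    total = pos + neg
    score = (pos - neg) / total if total else 0.0
    return {"pos_emojis": pos, "neg_emojis": neg, "emoji_polarity": score}

feats = emoji_sentiment_features("feeling so low today 😢😭")
print(feats)  # {'pos_emojis': 0, 'neg_emojis': 2, 'emoji_polarity': -1.0}
```

In practice such counts would be combined with topic-model features before training a tweet-level classifier.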
The DAIC-WOZ dataset contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post-traumatic stress disorder. This repository provides code for extracting question-level features from the DAIC-WOZ dataset, which can be used for multimodal analysis of depression levels.
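The idea of question-level features can be sketched as follows; the transcript structure (speaker/text turns, the interviewer name "Ellie") and the features computed are assumptions for this example, not the repository's actual code:

```python
# Hedged sketch of question-level feature extraction from an interview
# transcript, in the spirit of the DAIC-WOZ pipeline described above.
# Turn structure and feature choices are illustrative assumptions.

def question_level_features(turns: list[dict]) -> list[dict]:
    """Pair each interviewer question with the participant's response
    and compute simple lexical features per question."""
    features = []
    current_q = None
    for turn in turns:
        if turn["speaker"] == "Ellie":  # the virtual interviewer
            current_q = turn["text"]
        elif turn["speaker"] == "Participant" and current_q:
            words = turn["text"].split()
            features.append({
                "question": current_q,
                "response_len": len(words),
                "avg_word_len": sum(map(len, words)) / len(words) if words else 0.0,
            })
            current_q = None
    return features

transcript = [
    {"speaker": "Ellie", "text": "how have you been feeling lately"},
    {"speaker": "Participant", "text": "pretty tired and a bit down"},
]
print(question_level_features(transcript))
```

Grouping by question rather than by whole session is what allows the text features to be aligned with audio and video streams for multimodal analysis.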
HappyDB is a crowd-sourced collection of 100,000 happy moments designed to advance the understanding of happiness through text analysis. The database is publicly available and aims to support research in natural language processing (NLP) and positive psychology. It provides insights into the causes of happiness and suggests sustainable actions for improving well-being.
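A first step toward analyzing causes of happiness in HappyDB-style text is keyword-based tagging; the category names and keyword sets below are illustrative assumptions, not HappyDB's official category labels:

```python
# Minimal sketch of keyword-based cause-of-happiness tagging for
# HappyDB-style happy moments. Categories and keywords are illustrative.

CATEGORY_KEYWORDS = {
    "affection": {"family", "friend", "wife", "husband", "son", "daughter"},
    "achievement": {"promotion", "finished", "won", "passed", "completed"},
    "leisure": {"movie", "game", "hiking", "vacation", "reading"},
}

def tag_happy_moment(text: str) -> list[str]:
    """Return every category whose keywords appear in the moment."""
    tokens = set(text.lower().split())
    return sorted(cat for cat, kw in CATEGORY_KEYWORDS.items() if tokens & kw)

print(tag_happy_moment("I finished a long hiking trip with my son"))
# ['achievement', 'affection', 'leisure']
```

Aggregating such tags over the 100,000 moments gives a rough distribution of what people report as sources of happiness.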