Psychology Wiki Dataset
The psychology_wiki dataset is built from English Wikipedia content in the field of psychology. Through systematic data collection and curation, it provides both broad coverage and depth of information. Every article in the dataset has been carefully filtered and annotated, covering dimensions such as title, body text, relevance, popularity, and ranking, offering a rich textual resource for psychology research.
The dataset has five features: title (title), text (text), relevance (relevans), popularity (popularity), and ranking (ranking), with string and float data types respectively. It consists of a single training split of 989 examples, totaling 12,359,374 bytes; the download size is 6,790,523 bytes.
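The five features above can be written down as a simple record schema. Below is a minimal sketch in plain Python; the field names follow the card (including the `relevans` spelling), while the example row itself is hypothetical, invented purely for illustration:

```python
from typing import TypedDict


class PsychologyWikiRecord(TypedDict):
    """One row of the psychology_wiki train split (989 rows total)."""

    title: str        # article title
    text: str         # article body
    relevans: float   # relevance score (field name as spelled in the card)
    popularity: float # popularity score
    ranking: float    # rank of the article

# Hypothetical example row, for illustration only.
example: PsychologyWikiRecord = {
    "title": "Cognitive dissonance",
    "text": "Cognitive dissonance is the mental discomfort ...",
    "relevans": 0.92,
    "popularity": 0.75,
    "ranking": 12.0,
}

# The record exposes exactly the five documented fields.
assert set(example) == {"title", "text", "relevans", "popularity", "ranking"}
```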
This project implements the conversion algorithm from the ToMi dataset to the T4D (Thinking is for Doing) dataset, as introduced in the paper https://arxiv.org/abs/2310.03051. It filters examples with Theory of Mind (ToM) questions and adapts the algorithm to account for second-order false beliefs.
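The filtering step described above can be sketched as follows. Note that the ToMi field name `question_type` and the type labels used here are assumptions for illustration, not the project's actual schema; the real conversion also rewrites each example into the T4D action-choice format, which is omitted here:

```python
# Sketch of the ToMi -> T4D filtering step: keep only Theory-of-Mind
# (belief) questions, including second-order false-belief variants.
# Field names and type labels are hypothetical.

def is_tom_question(question_type: str) -> bool:
    """True for first- and second-order belief questions."""
    return question_type.startswith(("first_order", "second_order"))


def filter_tom_examples(examples: list[dict]) -> list[dict]:
    """Drop non-ToM examples (e.g. memory or reality questions)."""
    return [ex for ex in examples if is_tom_question(ex["question_type"])]


# Hypothetical ToMi-style examples.
examples = [
    {"question_type": "memory",
     "question": "Where was the apple at the beginning?"},
    {"question_type": "first_order_false_belief",
     "question": "Where will Sally look for the apple?"},
    {"question_type": "second_order_false_belief",
     "question": "Where does Anne think Sally will look for the apple?"},
]

kept = filter_tom_examples(examples)
assert len(kept) == 2  # the memory question is filtered out
```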
🥂 FineWeb-2 is the second iteration of the popular 🍷 FineWeb dataset (over 15 trillion tokens of cleaned and deduplicated English web data from CommonCrawl), bringing high-quality pretraining data to over 1000 🗣️ languages. The 🥂 FineWeb2 dataset is fully reproducible, available under the permissive ODC-By 1.0 license, and extensively validated through hundreds of ablation experiments. In particular, on the set of 9 diverse languages used to guide our processing decisions, 🥂 FineWeb2 outperforms other popular multilingual pretraining datasets (such as CC-100, mC4, CulturaX, or HPLT) while being substantially larger, and in some cases even performs better than datasets specifically curated for a single one of these languages, on our diverse set of carefully selected evaluation tasks: FineTasks.
Psychology LLM, LLM, The Big Model of Mental Health, Finetune, InternLM2, InternLM2.5, Qwen, ChatGLM, Baichuan, DeepSeek, Mixtral, LLama3, GLM4, Qwen2 - SmartFlowAI/EmoLLM