Every veteran knows and has had a 'Gunny': Semper Fidelis. This dataset is designed for conversational AI systems to assist veterans from various military branches, including the U.S. and U.K. armed forces. It uses nine personas, each from a different branch, dedicated to supporting veterans dealing with PTSD and transitioning to civilian life. The personas offer advice rooted in discipline, accountability, and mental resilience while maintaining the tone and ethos of their respective branches. Each persona emphasizes the importance of seeking professional help when necessary and is not a substitute for therapy, though no guarantee is made. All data was generated using Meta's Llama-3.2-3B-Instruct.
The Weibo User Depression Detection Dataset is a large-scale dataset for detecting depression in Weibo users. It includes user profiles, posts, and labels indicating whether each user is depressed. The dataset is useful for researchers working on mental health and social media analysis.
Psy-Insight is a bilingual, interpretable multi-turn dataset for mental health counseling dialogues. It includes 6,208 rounds of multi-turn counseling dialogues in English and 5,776 rounds in Chinese, annotated with step-by-step reasoning labels and multi-task labels. This dataset is designed to support the application of large language models in mental health and is suitable for tasks such as emotion classification and psychological treatment interpretation.
FineWeb-2 is the second iteration of the popular 🍷 FineWeb dataset (over 15 trillion tokens of cleaned and deduplicated English web data from CommonCrawl), extending high-quality pretraining data to over 1,000 🗣️ languages. The 🥂 FineWeb2 dataset is fully reproducible, available under the permissive ODC-By 1.0 license, and extensively validated through hundreds of ablation experiments. In particular, on the set of 9 diverse languages used to guide its processing decisions, 🥂 FineWeb2 outperforms other popular multilingual pretraining datasets (such as CC-100, mC4, CulturaX, or HPLT) while being substantially larger, and in some cases even outperforms datasets curated specifically for a single one of these languages, across a diverse set of carefully selected evaluation tasks: FineTasks.