
Co-Designing With Children
Project Partners/Funders
Facebook Content Policy Award, Phase 2
This project is carried out in partnership with the ADAPT SFI Research Centre.
Fund Award Amount
$99,403
Project Overview
This project solicits children’s feedback to co-design enforcement mechanisms for cyberbullying interventions on social media platforms.
Staff Involved: Dr Tijana Milosevic, Dr Brian Davis
Project Goals
The aim of this study is to solicit children’s feedback on the design of a set of enforcement mechanisms for proactive moderation of cyberbullying on social media platforms. Proactive moderation broadly refers to the use of Natural Language Processing (NLP), machine learning and other artificial intelligence-based tools to identify cyberbullying and other forms of abuse, such as harassment, before they are reported by users.
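As an illustration only, the sketch below shows what proactive triage of this kind can look like in practice: a pre-trained text classifier scores each incoming post and queues likely abuse for moderator review before any user report is filed. The Hugging Face pipeline API and the placeholder model name are assumptions for the sketch, not tooling used in this project.

```python
# Minimal sketch of proactive moderation: score each new post with a text
# classifier and flag likely cyberbullying for human review before any
# user report arrives. The model name below is a placeholder assumption.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")  # placeholder model

def triage(posts, threshold=0.8):
    """Return posts whose predicted abuse score exceeds the threshold."""
    flagged = []
    for post in posts:
        result = classifier(post, truncation=True)[0]  # e.g. {"label": ..., "score": ...}
        if result["score"] >= threshold:
            flagged.append((post, result["label"], result["score"]))
    return flagged

if __name__ == "__main__":
    sample = ["nobody likes you, just leave", "see you at practice tomorrow!"]
    for text, label, score in triage(sample):
        print(f"flagged for review ({label}, {score:.2f}): {text}")
```

In a real deployment the flagged queue would feed the enforcement mechanisms whose design this project discusses with children, rather than triggering automatic removal.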
The project solicits children’s feedback on the design of such tools through qualitative research (focus groups and in-depth interviews with children aged 12-17), examining the implications that this kind of policy enforcement has for their rights to protection, participation and privacy as enshrined in the UN Convention on the Rights of the Child. It also provides an overview of state-of-the-art research on AI-based detection of cyberbullying, together with a first-of-its-kind experiment that leverages Neural Machine Translation to automatically translate a unique Italian gold-standard dataset of pre-adolescent cyberbullying, with fine-grained annotations, into English for training and testing a native pre-adolescent cyberbullying classifier. Moreover, the project takes a first step towards building neural language models for cyberbullying classification that are as platform-agnostic as possible, so that they can be applied across social media platforms.
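The following sketch illustrates the translate-then-train idea under stated assumptions: an off-the-shelf Italian-to-English NMT model translates annotated examples while their labels are kept, yielding English training data for a classifier. The Marian checkpoint name is an assumption, and the toy examples stand in for the project’s actual gold-standard dataset and annotation scheme.

```python
# Sketch: machine-translate labelled Italian cyberbullying examples into English
# with an off-the-shelf NMT model, keeping the original labels, so the result
# can be used to train an English-language classifier.
from transformers import MarianMTModel, MarianTokenizer

MT_MODEL = "Helsinki-NLP/opus-mt-it-en"  # assumed Italian-to-English NMT checkpoint
tokenizer = MarianTokenizer.from_pretrained(MT_MODEL)
model = MarianMTModel.from_pretrained(MT_MODEL)

def translate_batch(italian_texts):
    """Translate a list of Italian sentences into English."""
    batch = tokenizer(italian_texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# Toy labelled examples standing in for the gold-standard dataset.
italian_dataset = [
    ("sei un perdente, nessuno ti vuole qui", "bullying"),
    ("ci vediamo domani a scuola", "not_bullying"),
]
texts, labels = zip(*italian_dataset)
english_texts = translate_batch(list(texts))

# The (english_text, label) pairs can now feed any standard
# sequence-classification fine-tuning recipe.
for text, label in zip(english_texts, labels):
    print(label, "->", text)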
Publications
Papers are under review; the qualitative research is still in progress.
Long paper on Neural Machine Translation and Natural Language Processing Techniques for Cyberbullying Detection, under review at the Journal of Natural Language Engineering (Special Issue on NLP Approaches to Offensive Content Online)
Short paper on Neural Benchmarking for Cyberbullying Detection, under review at the Journal of Natural Language Engineering (Special Issue on NLP Approaches to Offensive Content Online)