OSAIBI
Project Partners/Funders
DCU/ABC and ADAPT SFI via Elite-S Fellowship (Marie Skłodowska-Curie COFUND Action)
Fund Award Amount
€116,640
Project Overview
OSAIBI (Open Standards for AI-based Bullying Interventions) aims to create standards for proactive anti-bullying interventions on social media platforms by soliciting children’s feedback on the effectiveness of these interventions and on their impact on children’s rights to safety, privacy and freedom of expression.
Bullying and harassment on social media platforms are difficult to define and operationalise, and there is little agreement within the academic community regarding these phenomena. It is therefore particularly challenging for social media platforms to design effective reactive, let alone proactive, moderation of such behaviours. Proactive moderation, which applies various content analysis tools, is especially difficult when it relies on the automatic detection of bullying and harassment. These behaviours are not always overt and are often ambiguous, containing irony or sarcasm. Effective automatic detection of bullying and harassment requires processing not only subtle and often ambiguous linguistic context but also, perhaps most importantly, external pragmatics, including cultural context.

While there is a significant amount of research on algorithm optimisation both inside and outside the industry, it is not yet common practice to solicit and incorporate children’s feedback on how these models function. Ensuring that children’s views are heard on matters that concern them is a provision of the United Nations Convention on the Rights of the Child (UNCRC), and this principle applies directly to the co-design and co-creation of technologies that have implications for children’s wellbeing. Building on the applicant’s existing project with Facebook, OSAIBI will:
- Map social media companies’ proactive responses to bullying that rely on natural language processing (NLP), machine learning and artificial intelligence.
- Leverage qualitative research with children to examine how effective these proactive tools are from the perspective of children who have experienced bullying.
- Develop a typology as a precursor to ontology-based automatic classification of bullying cases that take place on popular social media platforms and that are particularly difficult for proactive tools to detect.
- Leverage a quantitative study with children to examine how they perceive their rights to privacy and freedom of expression in the context of proactive monitoring.

OSAIBI will therefore facilitate transparent social media design that supports children’s wellbeing and will embed children’s voices into this design to ensure that the proposed solutions are effective from children’s perspective.
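To make the detection challenge described above concrete, the minimal sketch below shows a purely lexical text classifier of the kind proactive moderation tools often start from. It is not OSAIBI’s pipeline: the training examples, labels and model choice (scikit-learn TF-IDF features with logistic regression) are hypothetical, and the point is only that word-level features alone give such a model no signal for irony, sarcasm or cultural context.

```python
# Illustrative sketch only: a purely lexical bullying classifier of the kind
# the overview describes as insufficient. All data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = bullying, 0 = not bullying.
texts = [
    "you are so stupid nobody likes you",      # overt insult
    "stop posting, everyone hates your face",  # overt insult
    "great goal today, well played",           # benign
    "thanks for helping me with homework",     # benign
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier: the model only ever sees
# surface word forms, not who is speaking, to whom, or in what cultural context.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# An ironic message with no insulting vocabulary: lexical features carry no
# useful signal here, which is why the overview stresses pragmatic and
# cultural context over word-level cues.
print(model.predict_proba(["wow, another 'brilliant' performance, as always"]))
```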
Project Goals
OSAIBI will:
- Map social media companies’ proactive responses to bullying that rely on natural language processing (NLP), machine learning and artificial intelligence.
- Leverage qualitative research with children to examine how effective these proactive tools are from the perspective of children who have experienced bullying.
- Develop a typology as a precursor to ontology-based automatic classification of bullying cases that take place on popular social media platforms and that are particularly difficult for proactive tools to detect.
- Leverage a quantitative study with children to examine how they perceive their rights to privacy and freedom of expression in the context of proactive monitoring.
Publications
Milosevic, T., Verma, K., Davis, B., Laffan, D., O’Higgins-Norman, J., Walshe, R. Developing AI-Based Cyberbullying Interventions on Online Platforms: Standardizing Children’s Rights. Paper presented at the 11th International Conference on Standardisation and Innovation in Information Technology (SIIT). Online.