Creating an AI-powered
comments moderation platform


Coeditor

We created an AI-powered comments moderation platform for some of the UK's leading magazines.

Solution:

Product Design, CSS

Role:

Product Designer

The Challenges

News and politics platforms often have comments sections filled with bigotry and behaviour that doesn't align with community guidelines. The challenge was to create a system that could exist as a plug‑in for different magazines, while still giving teams fine‑grained control over the parameters of AI‑moderated analysis and feedback.

The Solutions

We created the comments moderation platform in response to this. It was built to analyse comments against a set of parameters and give contextual feedback to the user on issues with their comment, relative to the publisher’s community guidelines. The goal was to keep conversations safe and constructive without feeling heavy‑handed or robotic.

The Process

We followed a classic UX process: mapping out user flows, running workshops and critique sessions, then turning insights into wireframes and high‑fidelity designs in Figma. I also contributed CSS snippets to give engineers clear direction on key animations and motion.

CSS Animations

We were moving quickly, and I wanted a loading state that felt unique but was still feasible to ship on deadline, with me as the only designer on the project. I started from an open‑source loader and built on top of it to create a small visual system for the loading states.

The Loader

I began by editing the styles, turning the orb from dark mode to light mode to match the visual direction of the MVP’s light‑mode widget. This gave us a distinct, friendly loading pattern without starting from scratch.
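To give a sense of the kind of change involved, a light-mode orb loader along these lines can be sketched in CSS. The class names and values here are illustrative assumptions, not the shipped code:

```css
/* Hypothetical sketch of a light-mode orb loader.
   Class names and colour values are illustrative, not production code. */
.orb-loader {
  width: 48px;
  height: 48px;
  border-radius: 50%;
  /* Soft light gradient in place of the original dark palette */
  background: conic-gradient(from 0deg, #f8f9fb, #e8ecf4, #f8f9fb);
  box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
  animation: orb-spin 1.2s linear infinite;
}

@keyframes orb-spin {
  to {
    transform: rotate(360deg);
  }
}
```

Because only the styles change, the underlying open-source markup and animation timing can stay intact while the palette shifts to light mode.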

The Loader States in Context

The idea for the loader was that it would move between states using colour as a visual signal: a classic set of colours for approvals and acceptances, and a pink‑and‑blue colourway for the primary loading state. This made it easy for users to read system status at a glance.
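One way to express that colour-as-status idea in CSS is with custom properties that switch per state. The names and hex values below are assumptions for illustration; the real tokens lived in the design system:

```css
/* Illustrative state colour tokens — actual values were defined elsewhere. */
.loader {
  /* Primary loading state: the pink-and-blue colourway */
  --loader-color-a: #ff7ab8;
  --loader-color-b: #5b8def;
  background: linear-gradient(90deg, var(--loader-color-a), var(--loader-color-b));
}

/* Approvals and acceptances switch to a classic confirmation palette */
.loader[data-state="approved"] {
  --loader-color-a: #2fbf71;
  --loader-color-b: #2fbf71;
}
```

Keeping the state logic in two variables means the gradient, and any animation built on it, updates automatically when the state attribute changes.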

Tradeoffs and Direction Change

The team was excited about the loader as a visual direction, but we realised it wouldn’t work in all the contexts it needed to. The plugin needed to sit comfortably alongside the brand colours of any publisher using it, and the colourful orb risked clashing with existing visual systems.

New Loader Animation

We shifted direction to something more neutral and flexible: a simple open‑source search icon, with colours edited to better adapt to different brand environments. The animation and design felt more context‑agnostic—something that could exist as a plugin multiple publishers could use without worrying whether it aligned perfectly with their visual language.
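A common way to make an icon like this adapt to any publisher's palette is to let it inherit the surrounding brand colour instead of hard-coding one. This is a sketch of that approach under assumed class names, not the project's actual code:

```css
/* The icon inherits whatever text colour the host publisher's theme sets,
   so the plugin blends into any brand environment. */
.search-loader svg {
  stroke: currentColor;
  fill: none;
}

.search-loader {
  color: inherit; /* picks up the publisher's theme colour */
  animation: pulse 1s ease-in-out infinite alternate;
}

@keyframes pulse {
  from { opacity: 0.4; }
  to   { opacity: 1; }
}
```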

Success State

The success state was intentionally simple: once a comment was approved, it was posted and the user received a clear message confirming that their comment had been successfully published. No confetti, just confidence.

The Most Challenging Piece of UX Writing

This was the most challenging piece of UX writing in my career. It raised a deceptively simple question:


How do we give AI‑based feedback that is both contextual and non‑deterministic, while still aligning with brand tone and community guidelines—without feeling philosophically imposing?


The result was a set of UX writing guidelines and example copy used to train the system’s output, creating a consistent, empathetic tone of voice that could flex across different publishers.

The Dashboard UI

I then designed the moderation dashboard, the platform view where moderators see comments that haven’t passed the AI check and need human review. From here, moderators can approve, reject, or take further action, with enough context to make decisions quickly and confidently.

The Modal

Designing the moderation modal was my favourite part of this project.

The component had to communicate a lot of information about each comment to support good decisions: the article, the comment itself, user details, and the comment’s history. The modal gives moderators a compact, high‑signal view of everything they need to decide whether to approve, escalate, or decline a comment—without drowning them in noise.

The Results

We delivered an MVP of the comments moderation platform that could generate on‑brand, non‑deterministic feedback that felt intelligent and contextual. The system was ready to plug and play for publishing teams, helping moderation teams move faster while giving them the context they need to keep comment sections safe and free of violence and harm.

Crafted with ❤️ by Austin Skhosana
