Written 21 December, 2020 //
I wanted to revolutionise moderation within Discord servers using the power of Machine Learning and Natural Language Processing - here’s where I failed, and what I learned.
To properly explain my ambitions with Brandonn, I should explain one of my largest complaints with modern Discord bots: their methods of interaction. Currently, there are only two options: type your exact command to the bot as a regular message in Discord, with a dedicated prefix at the beginning of the message, or go to a bot’s dedicated web client site, where you can interact with some of the bot’s functions through a graphical interface.
Neither of these solutions is particularly desirable or achievable for smaller bot developers like myself. Interacting in a way that closely resembles a command line is clunky and counter-intuitive, especially on mobile devices with autocorrect, and an entire web-based client is unachievable for a small team just starting out in development.
My ultimate solution would be an algorithm or tool that could dissect a sent message to pick out the needed commands and information, while letting users structure that message as if they were sending it to any other person in their server.
This solution would solve, or at the least lessen, a number of gaping issues in how bot interaction currently functions:
- As I mentioned above, autocorrect on mobile devices is notoriously bad at preserving the exact syntax a bot requires to function properly, essentially rendering bots unusable on Discord’s mobile client. Additionally, ‘prefix characters’ such as exclamation marks are more difficult to access on mobile keyboards.
- When enough bots are active within a server, their prefix characters can overlap, which can result in the wrong bot being summoned, especially with the classic ‘help’ command detailing a bot’s function.
These issues played an integral role in the development of the message processing system that drove the development of Brandonn initially, and I structured a significant amount of the design philosophy around essentially ‘humanising’ Discord bots for a better interaction model.
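To make the idea concrete, here’s a minimal, hypothetical sketch of prefix-free interaction. None of these names, patterns or intents come from Brandonn’s actual code, and a real system like Rosetta would lean on proper language models rather than regular expressions - but it shows the shape of the problem: pull an intent and its arguments out of an ordinary sentence instead of demanding `!mute @Alex 10`.

```python
import re

# Hypothetical intent table: map a command name to a pattern that might
# appear in a plain-English sentence. Purely illustrative, not Brandonn's.
INTENTS = {
    "mute": re.compile(r"\bmute\b", re.IGNORECASE),
    "kick": re.compile(r"\b(kick|remove)\b", re.IGNORECASE),
    "help": re.compile(r"\bhelp\b", re.IGNORECASE),
}

def parse_message(text):
    """Return (intent, mentioned users, numbers) found in a plain sentence."""
    intent = next(
        (name for name, pattern in INTENTS.items() if pattern.search(text)),
        None,
    )
    mentions = re.findall(r"@(\w+)", text)   # who the command targets
    numbers = [int(n) for n in re.findall(r"\b(\d+)\b", text)]  # e.g. durations
    return intent, mentions, numbers

print(parse_message("Hey Brandonn, could you mute @Alex for 10 minutes?"))
# → ('mute', ['Alex'], [10])
```

Even this toy version sidesteps both complaints above: there’s no prefix character for autocorrect to mangle, and no prefix to collide with another bot’s.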
I’d started developing a rough version of this system when I rewrote Morgann around April 2020, and in doing so discovered a Pythonic machine learning library specifically built for interpreting English. The Natural Language Toolkit (or NLTK) is a free, open-source library that contains a number of models for dissecting language, and quickly became an integral tool in the development of Morgann and Brandonn.
As I explored the library and its capabilities, I discovered its support for sentiment analysis, and to say it caught my eye would be an understatement. Sentiment analysers are machine learning models that parse a piece of text in an attempt to understand the author’s mood or emotion.
In practice, projects such as Sentdex have used this to index the sentiment of platforms like Twitter on polarising topics, and in theory the same analysis could be used to build a profile of any member of a Discord server to understand their mood, then automatically make moderation decisions based on that information. For example, a toxic user could be demoted, warned or even removed from the server for consistently toxic behaviour, while a positive member could be promoted and rewarded for their efforts.
Automatic moderation is nothing new in bots - Dyno (a popular moderation bot) can warn or remove members if they use specific trigger words or phrases, but such systems could never dissect a sentence and attempt to understand its deeper or more subtle meaning.
There were a number of other features I had planned to develop for Brandonn that got to varying stages of development before I discontinued him, but they are unimportant and thus I don’t think it’s worth sharing them.
Around September this year, I decided to essentially discontinue the development of Brandonn for a few reasons.
- Personal life: coming up to my final Year 12 exams meant that I had to devote a huge amount of time to study to achieve the results I wanted, and the chaos that had been 2020 was beginning to take its toll on my mental capacity.
- Functional failure: the sentiment analysis feature more or less failed after I demonstrated it to a few personal friends, only to be met with concerns over their privacy.
- Project fatigue: working on Brandonn for 5 months began to fatigue me severely, to the point where I had no motivation to continue developing him. This more or less confirmed my suspicion that working on large projects for months on end was just not enjoyable for me anymore.
The technology I developed to construct Brandonn has been repurposed despite his discontinuation. The message processing system is now known as Rosetta, and its first version is at work in the Morgann 2021 Rewrite while I work on the second.
Eventually, I’d like to release Rosetta to other Pythonic bot developers to use, as I believe it to be a solid step towards semantic bot interaction, but that may be some time off yet.
The sentiment analysis technology will remain unused in my future projects at this stage, however. I realised that, on top of infringing on people’s sense of privacy, reducing a member’s entire worth within a server to their positivity or toxicity, especially as a single number, is troublesome: it promotes a two-dimensional view of other people that I personally don’t support, and I didn’t want to promote that view to others, especially those in moderative positions.
I consider Brandonn less of a ‘failure’ and more of an important learning experience for my development life, technical ability and general outlook on other people.
Brandonn is no longer online or available to add to servers, but if you’re interested in other things I’ve done with the technology, I’ll be releasing updates regarding Morgann and Rosetta as I work on them.