Thanks to the continual feedback loop of social media, the name ‘Momo’ is probably as familiar to you as your own pet’s name. This viral ‘game’ has spread through the news like wildfire, terrifying parents, teachers and school governors alike – all of whom have become as wide-eyed as the unsettling character itself.
The key demographic of ‘terrified children’, though, is largely absent from the ‘Momo Challenge’ itself – a point we’ll return to shortly. First, for those of you who have managed to avoid the tabloid hysteria, here’s a brief summary of ‘Momo’.
It’s reported that children are being added on WhatsApp by a user named ‘Momo’, whose profile picture is that of a sculpture titled ‘Mother Bird’, created by Keisuke Aisawa for the Japanese special effects company ‘Link Factory’. After being added, children are reportedly encouraged to commit violent acts, either to themselves or to others, in order to ‘win’ the game.
Aside from the immediate shock of Momo’s unsettling appearance, the challenge isn’t the safeguarding nightmare the media has made it out to be.
In fact, the constant reporting of this story has thrust Momo into the spotlight far more than its creators could ever have hoped for. News headlines that stir panic amongst parents increase readership numbers, and the story grows the more it’s reported on. Ultimately, this over-reporting negatively affects the very individuals the reports are attempting to protect – children. Kat Tremlett, Harmful Content Manager for the UK Safer Internet Centre, stated: “Even though it’s done with best intentions, publicising this issue has only piqued curiosity among young people…It’s a myth that [has been] perpetuated into being some kind of reality.”
To contextualise, there have been no reported injuries resulting from the game, and charities such as the NSPCC have urged parents to extract themselves from the current hysteria.
Still not convinced?
Well, have you heard of the ‘Blue Whale Challenge’? Of course you haven’t. But it was very much the Momo of 2016: a challenge whereby children were added by the ‘Blue Whale’ and subsequently encouraged to commit violent acts, either to themselves or to others. The only difference between the Blue Whale and Momo is that the latter has been highly publicised, and “starting a panic means vulnerable people get to know about it, and that creates a risk”, say the Samaritans.
The issue with viral hoaxes such as these is that they act as red herrings, distracting from immediate and damaging safeguarding issues. The monstrous face of Momo should simply serve as a reminder that it’s easy to filter initially shocking or explicit material. In reality, though, it’s often not the obvious threats that compromise the online safety of children. Identifying the real, and often unseen, dangers is a key aspect of online filtering software such as Studysafe.
The subtle nuances of filtering content can often be overlooked, as companies frequently capitalise on moral panic to promote their product.
“Worried about Momo? Our software blocks it! Buy now!”
This isn’t a bad thing, but it implies that filtering clearly harmful content is a unique selling point. It isn’t; it should be a basic expectation of your web-filtering software. Any web-filtering software for children on the market should be able to identify clearly damaging content for minors – whether pornographic material, or sites encouraging children to commit violent acts.
It’s imperative, then, to be able to have appropriate discussions with children if they’re specifically searching for troubling content. This is an important distinction from a child simply stumbling across viral hoaxes such as Momo.
Often, damaging content slips through the net because children search for terms on large corporate websites. YouTube, for example, is a great resource for both learning and entertainment. Its user-generated content is what makes it so brilliant, but that same content can obviously be problematic when it comes to filtering unsuitable videos. Children are able to search for content that would otherwise have been blocked through search engine settings or basic web-filtering software.
No web-filtering software on the market is perfect, but Studysafe has unique features that allow teachers to largely curtail the pitfalls of sites such as YouTube.
Using Studysafe, teachers can easily create alerts for when pupils attempt to access sensitive material online. The benefits of this are twofold. The first is that if a child is searching for anything obviously inappropriate, a teacher can deal with it accordingly. The second is pastoral care.
School can often act as a refuge for pupils who may be struggling at home, and their search habits may reflect that. Our monitoring feature allows teachers to search keywords throughout the traffic log, and even to create alerts for when blocked sites receive access requests or particular terms are searched for. The ability to search for terms such as ‘self-harm’ or ‘caring at home’, for example, gives schools the chance to address safeguarding issues before they escalate, ensuring teachers can offer support and guidance to the most vulnerable pupils.
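To make the idea concrete, here is a minimal sketch of how keyword-based alerting over a traffic log can work in principle. This is purely illustrative – the names (`TrafficEntry`, `WATCH_TERMS`, `find_alerts`) are hypothetical and do not reflect Studysafe’s actual implementation or API.

```python
# Illustrative sketch of keyword alerting over a web-traffic log.
# All names here are hypothetical, not Studysafe's real internals.
from dataclasses import dataclass

@dataclass
class TrafficEntry:
    pupil: str          # pupil identifier
    url: str            # site requested
    search_terms: str   # what the pupil searched for
    blocked: bool       # whether the request hit an already-blocked site

# A watch-list of sensitive terms a teacher might configure.
WATCH_TERMS = {"self-harm", "caring at home"}

def find_alerts(log):
    """Return entries that matched a watch term, or that were
    access requests for already-blocked sites."""
    alerts = []
    for entry in log:
        terms = entry.search_terms.lower()
        if entry.blocked or any(t in terms for t in WATCH_TERMS):
            alerts.append(entry)
    return alerts

log = [
    TrafficEntry("pupil_a", "https://example.com", "homework help", False),
    TrafficEntry("pupil_b", "https://example.com", "self-harm advice", False),
]
for entry in find_alerts(log):
    print(f"Alert: {entry.pupil} searched '{entry.search_terms}'")
```

In a real product the matching would be far more nuanced than plain substring checks – context matters, as the article argues – but the basic loop of “log traffic, match against a configurable watch-list, surface alerts to a teacher” is the shape of the feature described above.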