Weird question: if this worked, couldn't the same dataset be used to create a very skillful AI cybergroomer chatbot if it fell into the wrong hands?
Get ready for a lot of false positives.
If this is implemented right, it should flag accounts so human reviewers can follow up, not take action on its own.
Even so, the ‘flag’ alone could be damning evidence enough for some people to act on. We’re in ‘guilty until proven innocent’ cultural territory, where a mere accusation ruins lives.
Norway isn’t the US, but yeah.
For now.
I guess most people don’t get how terrifyingly dystopian this is.
In the EU, there is a serious push to make this mandatory.
Get ready for when the AI bots start behaving like children to bait people into a relationship.
This has already happened. There was a news article about a police force that used AI to bait groomers. This is just further automation of something that’s already being done.