• sucrerey@lemmy.world
    1 day ago

    Weird question: if this worked, couldn't the same dataset be used to create a very skillful AI cybergroomer chatbot if it fell into the wrong hands?

    • simple@lemm.ee
      3 days ago

      If this is implemented right, it should flag accounts so human reviewers can follow up, not take action on its own.

  • General_Effort@lemmy.world
    3 days ago

    I guess most people don’t get how terrifyingly dystopian this is.

    In the EU, there is a serious push to make this mandatory.

  • devfuuu@lemmy.world
    3 days ago

    Get ready for when the AI bots start behaving like children to bait people and build relationships with them.

    • dustyData@lemmy.world
      3 days ago

      This has already happened. There was a news article about a police force that used AI to bait groomers. This is further automation of something that's already being done.