A lot of completely safe-for-work artwork, photos, and images are getting wrongly flagged as NSFW. I've complained about this issue before, and I appreciate that Discord now actually tells you why an upload fails (the NSFW bot filter, instead of the old behavior where it just failed with no explanation whatsoever), but the false positives are still an extremely annoying issue with no workaround. I run a community server, so there's nothing I can do to disable this even if I wanted to, since it's an SFW server, but it's getting in the way of server activities.
I have a feature request that I think would be a better solution than just sending us off to Discord Support to report it and try to "retrain" the AI. What if anything the bots flagged as NSFW got passed on to server moderators to approve or reject, or even to explicitly mark as NSFW vs SFW? As it stands, server admins essentially have no choice but to bow to the bot's decisions, right or wrong. An approval step would at least give us a say in the matter. And letting human moderators decide whether something counts as NSFW could even help retrain the bot, though I'd understand why you wouldn't want random people doing that, captcha-style. Just some food for thought.