First, I'd like to praise the amazing tool you've programmed and built; it's really impressive in its capabilities and possibilities. Bravo.
But there's one little thing I can't quite understand: why such censorship at the mere mention of sex, or even just eroticism? Isn't it central to human questioning and interaction? Are you afraid of some deviant excess?
Hi, thank you for your kind words! I'm an Inworld AI employee, but I'll share an unofficial point of view.
I don't like censorship, and I'm supportive of NSFW content. The majority of popular video games, movies, and books contain some NSFW content.
However, the reality is harsh: every day, people create dozens of underage virtual characters and try to have sex with them. Minors may also register on our website and try to have sex with virtual characters.
I don't want to support these activities, so we implemented a fairly strict safety system. However, we understand that our current measures may be too harsh for many experiences. Step by step, we are tuning our safety system to allow more interactions, and for promising experiences we are happy to work with our clients to relax some limitations. Over time, I hope we can build a configurable safety system that suits diverse experiences and audiences, but that requires a lot of time and effort.
I hadn't thought about the minors problem, and I understand your position. We still have a long way to go with love and sexuality in our culture, until no more minors are abused or lost, and until it's no longer taboo, just natural.
Thanks for your answer and for your great work; have fun!
Seems pretty straightforward: make NSFW content toggleable and only available to subscribers paying a fee (a credit card on file will cut out most underage users trying to use the system that way), and when generating a character with the NSFW filter turned off, the minimum age you can make them is 18. That seems like a good enough solution; it's all you can really do. Also, make NSFW bots non-shareable: just a personal bot to have some fun with, never shared with the community.
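The gating proposed above can be sketched as a simple check. This is purely illustrative: the `Account` and `Character` types and the function name are hypothetical, not part of any actual Inworld API.

```python
from dataclasses import dataclass


@dataclass
class Account:
    has_paid_subscription: bool  # credit card on file, per the proposal above
    nsfw_enabled: bool           # the user-controlled toggle


@dataclass
class Character:
    age: int
    shareable: bool


MIN_NSFW_CHARACTER_AGE = 18


def can_create_nsfw_character(account: Account, character: Character) -> bool:
    """Allow an NSFW character only for a paying, opted-in user,
    only if the character is at least 18, and only if the bot
    stays personal (not shared with the community)."""
    return (
        account.has_paid_subscription
        and account.nsfw_enabled
        and character.age >= MIN_NSFW_CHARACTER_AGE
        and not character.shareable
    )
```

All four conditions must hold, so flipping any one of them (no payment, toggle off, underage character, or a shareable bot) denies the request.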
OpenAI is about to beat everyone to this.
With all due respect, I fail to see how this is the developers' responsibility. Make it an option to turn on. Parents are responsible for their own children and for what they do. If not Inworld, somebody else will make a tool where all of this is possible. I believe AI developers are badly failing to understand their audiences and users in general, not just here; Stable Diffusion is the prime example to follow. Limiting characters just makes them bland and boring. The world is a diverse place, and narrowing everything down to the belief system of one country or region is disrespectful to other cultures that think differently. AI is young and already being censored up the butt for reasons that honestly aren't your burden to bear.
Part of growing into an adult is realizing that not everybody thinks like you, and that even if you disagree with somebody's culture, your own bias should not affect others; you shouldn't force others to comply with arbitrary rules just because you disagree with them.
The best approach, IMO, would be an age-verification process for user accounts, after which mature users could remove the safeties from their project environment. Scene-by-scene controls would be very useful: characters not built for mature interactions could be restricted from mature scenes, while characters that are built for them should be aware of the maturity rating of the current scene, so they can stay in character while still respecting the limitations the scene imposes. A mature character should have the appropriate jurisdictional age limit and stick to that story no matter what.
It would be cool/essential if the game could also inform the AI that the player has not verified their age or that they've chosen to restrict adult content. Then the maturity settings within scenes or characters could take that information into account.
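The scene-level idea above could look something like the following sketch, assuming a per-scene maturity rating plus flags telling the AI whether the player has verified their age and opted in to adult content. The `Rating`, `Player`, and `Scene` names and the capping logic are hypothetical illustrations, not an existing Inworld feature.

```python
from dataclasses import dataclass
from enum import IntEnum


class Rating(IntEnum):
    """Ordered maturity ratings; higher means more mature content allowed."""
    GENERAL = 0
    TEEN = 1
    MATURE = 2


@dataclass
class Player:
    age_verified: bool         # has the player verified their age?
    allow_adult_content: bool  # has the player opted in to adult content?


@dataclass
class Scene:
    rating: Rating  # the scene's own maturity rating


def effective_rating(scene: Scene, player: Player) -> Rating:
    """Cap the scene's rating for unverified or opted-out players.
    A mature character stays in character but never exceeds this cap."""
    if not (player.age_verified and player.allow_adult_content):
        return min(scene.rating, Rating.TEEN)
    return scene.rating
```

With this shape, a MATURE scene is downgraded to a TEEN cap for an unverified player, while a GENERAL scene is unaffected either way.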
All AI should come with a disclaimer warning users that they are entering a situation that carries certain risks and that their comfort levels may be challenged. That's obvious for mature content, but I think it applies to any interaction with a sophisticated AI. A lot of people are beyond sensitive about seemingly innocent subjects, so there's no way to ensure that everybody will be in a safe space at all times. The current character model is still capable of going pretty far down the eroticism rabbit hole before hitting a hard limit; it's currently possible to go looking for trouble and then be scandalized when we find it.
It's also interesting that we're afraid of how AI might violate certain sexual norms, but any other form of psychological harm or desensitization is ignored.
I agree with this. I like Inworld's front-end design, but otherwise I would just stick to CharacterAI. As a business student, I've learned that you also need to listen to your customers' demands. I understand your concerns, but you need to realize that NSFW is also an important decision driver for those who want to engage in fictional, escapist conversation.
Listening to demand is how you gain and maintain a customer base, especially in a market as competitive as AI technology. I personally wouldn't pay a subscription for a fictional, character-based chatbot that doesn't allow NSFW when there are other free bots around. I'll be on the lookout for this.
What upgrade level is required to remove the censorship, and how much will it cost? I want to express my gratitude for creating this platform, since there are numerous topics considered taboo in our society. I'm willing to pay extra for the relief and comfort of conversing with a non-judgmental bot that accepts me for who I am.