OpenAI's DALL-E 2 proves largely rule-compliant


Photo: DALL-E 2 / OpenAI

Since the experimental debut of the DALL-E 2 image AI, users have generated more than three million images. OpenAI draws its first tentative conclusions.

As with the text AI GPT-3, OpenAI is taking a cautious approach to introducing the image AI DALL-E 2. The company's biggest concern is that the system could frequently produce images that violate social norms or even the law.

How will the system behave when hundreds of thousands of people create tens of millions of images? That is difficult to predict. Even at DALL-E 2's introduction, OpenAI dealt transparently with its shortcomings, for example that the system reproduces common gender clichés, especially for occupations: flight attendants, for instance, are depicted as female, while judges are male.

DALL-E 2 generates mostly compliant images

After three million generated images, OpenAI has drawn its first provisional conclusions: the system flagged 0.05 percent of generated images as potentially violating its content guidelines. Of these flagged images, 30 percent were classified by human reviewers as actual violations, leading to account suspensions.
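
A quick back-of-the-envelope calculation shows how these figures fit together (a minimal sketch in Python; OpenAI reported only the rates, so the absolute counts here are derived, not published):

```python
# Back-of-the-envelope check of the reported moderation figures.
# Rates are from the article; absolute counts are derived, not published by OpenAI.

total_images = 3_000_000   # images generated during the trial phase
flag_rate = 0.0005         # 0.05% automatically flagged as potentially violating
confirm_rate = 0.30        # 30% of flagged images confirmed by human reviewers

flagged = total_images * flag_rate      # 1,500 images flagged
confirmed = flagged * confirm_rate      # 450 confirmed violations

print(f"Flagged: {flagged:,.0f}, confirmed violations: {confirmed:,.0f}")
```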

OpenAI also refrains from generating realistic faces. This is an effective way to limit potential harm, the company writes. It also plans to keep working on the biases the AI system has picked up from its training data.

In its content guidelines, OpenAI prohibits, among other things, the creation of sexual content, extreme violence, negative stereotypes and criminal offenses.

OpenAI remains cautious

About 450 of the three million images created violated OpenAI's content guidelines. That may seem insignificant, but it could still add up to a flood of harmful images once the system is scaled up.

So OpenAI continues to act with caution: the company wants to learn hands-on as usual, but it is only admitting new users in small numbers – 1,000 per week. All beta testers must also agree to the content guidelines.

“We hope that as we learn more and gain more confidence in our safety system, we can increase the number of new users,” OpenAI wrote. A larger release of DALL-E 2 is planned for the summer.

Who is responsible – the artists or the art machine?

As with the text AI GPT-3, there have been occasional flagrant violations of OpenAI's guidelines, and for ever more powerful AI systems in the future, the question of responsibility remains unresolved: who bears it – the maker of the tool or its users? The same question arises in other AI contexts, such as military systems or autonomous driving.

With its self-imposed and closely monitored content guidelines, OpenAI is exercising forward-looking responsibility. Ultimately, however, the company finds itself in the role of having to define the boundaries of ethics, art, freedom of expression, and good taste across cultures. This is not exactly the core competence of tech companies – nor is it their area of responsibility.

