Tomcat
Google's new rules for developers are aimed at combating unwanted content.
Google encourages third-party developers of Android apps to use generative artificial intelligence responsibly. The initiative is aimed at combating problematic content created with generative AI, including explicit material and hate speech.
In a new guide, Google urges developers of apps that use generative AI to:
- prevent the creation of restricted content;
- implement a mechanism for users to report or flag offensive material;
- accurately represent the capabilities of their apps in marketing materials;
- test models thoroughly across a variety of scenarios;
- block prompts that could lead to the creation of malicious or offensive content (see the sketch after this list).

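As a rough illustration of the last point, the Kotlin sketch below shows what a pre-submission prompt check could look like. This is only a minimal, assumed example: the PromptSafetyFilter name, the placeholder term list, and the keyword-matching approach are illustrative choices, not part of Google's guidance or any Android API, and a production filter would rely on far more robust classification.

```kotlin
// Minimal illustrative sketch: a naive keyword-based prompt check run
// before a request is forwarded to a generative model.
// All names and terms here are hypothetical placeholders.
object PromptSafetyFilter {
    private val blockedTerms = listOf("example-blocked-term-1", "example-blocked-term-2")

    /** Returns true if the prompt should be rejected before it reaches the model. */
    fun shouldBlock(prompt: String): Boolean =
        blockedTerms.any { prompt.contains(it, ignoreCase = true) }
}

fun main() {
    val userPrompt = "tell me about example-blocked-term-1"
    if (PromptSafetyFilter.shouldBlock(userPrompt)) {
        println("Request rejected: the prompt appears to violate the content policy.")
    } else {
        println("Prompt passed the basic check; it would be forwarded to the model.")
    }
}
```
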
Mechanism for flagging unwanted content
A distinctive feature of the new guide is its focus on user protection. Google stresses the importance of creating a safe environment for users and of taking a responsible approach to developing and testing generative AI features. This approach should become standard for all developers in order to prevent possible misuse of artificial intelligence technologies.
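To make the reporting requirement more concrete, here is a small Kotlin sketch of one possible in-app flow for flagging a generated response. ContentReport, ReportSink, and InMemoryReportSink are hypothetical names invented for this example; a real app would typically send reports to its own moderation backend rather than keep them in memory.

```kotlin
import java.time.Instant

// Hypothetical sketch of an in-app "report generated content" flow.
data class ContentReport(
    val contentId: String,              // identifier of the AI-generated output being flagged
    val reason: String,                 // user-selected reason, e.g. "hate speech"
    val reportedAt: Instant = Instant.now()
)

interface ReportSink {
    fun submit(report: ContentReport)
}

// Toy implementation that only stores reports locally; a production app
// would forward them to a review queue on its backend.
class InMemoryReportSink : ReportSink {
    private val reports = mutableListOf<ContentReport>()

    override fun submit(report: ContentReport) {
        reports += report
        println("Queued report for ${report.contentId}: ${report.reason} (${report.reportedAt})")
    }
}

fun main() {
    val sink: ReportSink = InMemoryReportSink()
    // Typically wired to a "Report" button shown next to each generated response.
    sink.submit(ContentReport(contentId = "response-42", reason = "offensive content"))
}
```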