OpenAI has announced that it will not permit the use of its AI tools for political campaigning and will work to prevent deceptive deepfakes and chatbots impersonating candidates, particularly in the lead-up to the 2024 elections across major democracies. The Sam Altman-led company outlined policy changes intended to ensure its generative AI technologies, such as ChatGPT and DALL-E, do not compromise the democratic process. Emphasizing collaboration and transparency, OpenAI says it is working to anticipate and prevent misuse, including misleading deepfakes, scaled influence operations, and chatbots posing as candidates.
The company described safety measures that include red teaming new systems, gathering feedback from users and external partners, and building in mitigations to minimize harm. For DALL-E specifically, guardrails decline requests to generate images of real people, including candidates. OpenAI remains cautious about personalized persuasion and restricts the development of applications for political campaigning and lobbying. It also prohibits chatbots that pretend to be real people or institutions, as well as applications that discourage participation in democratic processes.
To enhance transparency, OpenAI is experimenting with a provenance classifier for DALL-E-generated images, a tool designed to help users detect whether an image was produced by its systems. Additionally, ChatGPT is being integrated with real-time news reporting globally, giving users access to attributed news sources so they can better assess information. The company says it looks forward to collaborating with partners to prevent abuse of its tools during the upcoming global elections.