
Tech Companies Take Steps To Protect Voters From AI-Generated Misinformation

2023-11-09 | Dailymotion

Tech companies are taking steps to protect voters from AI-generated misinformation.

On November 8, Meta announced that it will require political ads that have been digitally altered, using AI or other technology, to be labeled.

'Time' reports that Meta's announcement comes one day after Microsoft revealed steps it will take to protect elections.

Microsoft announced tools to add watermarks to AI-generated content and a "Campaign Success Team," which will offer campaigns advice on AI and security.

The advent of generative AI, which allows users to create text, audio and video content, comes ahead of a busy global election year.

2024 will see major elections decided in the United States, India, the United Kingdom, Mexico, Taiwan and Indonesia.

According to a November poll, 58% of adults in the U.S. are concerned that AI could be used to spread false information in the upcoming election.

Elizabeth Seger, a researcher at the Center for the Governance of AI, warns that AI could be used to conduct mass persuasion campaigns.

Seger also warns that just knowing deepfakes exist could erode people's trust in information sources.

"A risk that is often overlooked, that is much more likely to take place this election cycle, isn't that generative AI will be used to produce deepfakes that trick people into thinking candidate so-and-so did some terrible thing," Elizabeth Seger, Researcher at the Center for the Governance of AI, via 'Time'.

"But that the very existence of these technologies is used to undermine the value of evidence and undermine trust in key information streams," Elizabeth Seger, Researcher at the Center for the Governance of AI, via 'Time'.
