More Tech Experts Issue Warning About Possible Threat of Artificial Intelligence.

NBC reports that leading experts in artificial intelligence have released a statement warning that the technology could lead to the end of the human race.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," Center for AI Safety statement, via NBC.

On May 30, the warning was issued via a statement posted on the website of the Center for AI Safety, a nonprofit organization headquartered in San Francisco.

The statement was signed by nearly 400 people, including AI executives from Google and Microsoft, as well as Sam Altman, CEO of OpenAI.

The statement, which was also signed by 200 academics, is the most recent warning to be issued by experts in AI technology.

NBC reports that just two months ago, a different group of tech leaders, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, issued a similar warning.

That warning came in the form of a petition calling for a "pause" on large-scale AI research, which has yet to be heeded.

NBC reports that despite announcing plans to address AI, the White House has yet to show signs of an imminent plan for large-scale regulation of the emerging industry.

However, other experts see the warnings as overhyping and overpromising on the theoretical capabilities of AI.

"Literal extinction is just one possible risk, not yet well-understood, and there are many other risks from AI that also deserve attention," Gary Marcus, AI critic and professor emeritus of psychology and neural science at New York University, via NBC.