With Donald Trump on the brink of re-entering the White House, his policy priorities will include oversight of artificial intelligence (AI), a technology that could be the most influential of our era. The President-elect has pledged to "reduce unnecessary regulations" and has enlisted tech entrepreneur Elon Musk, himself a skeptic of government regulation, to spearhead the effort. Specifically, the Republican Party's election platform pledged to rescind a sweeping executive order issued by President Joe Biden that outlined measures to mitigate AI's national security threats and to prevent discrimination by AI systems, among other objectives.
The Republican platform labeled the executive order as containing "radical leftist ideas" that impede innovation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute at Oxford University, is closely monitoring the unfolding situation. She asserts that AI is fraught with risks that "should have been addressed long ago" through robust regulation. Here are some of the perils associated with unregulated AI.
For many years, AI systems have shown a propensity to mirror societal prejudices, around race and gender for instance, because they are trained on data from past human decisions, many of which are tainted by those biases. When AI is used to decide who gets hired or whose mortgage is approved, the outcomes are often discriminatory. "Bias is inherent in these technologies because they examine historical data to forecast the future… they learn who has been hired in the past, who has been incarcerated in the past," Wachter explained. "And so, very often and almost always, those decisions are biased." Without solid safeguards, she added, "those problematic decisions of the past will be carried into the future."
The application of AI in predictive law enforcement serves as an example, according to Andrew Strait, an associate director at the Ada Lovelace Institute, a London-based non-profit organization dedicated to researching AI safety and ethics. Some U.S. police departments have utilized AI-driven software, trained on historical crime data, to anticipate where future crimes are likely to occur, he noted. Since this data often reflects the over-policing of certain communities, Strait said, the predictions based on it lead police to concentrate their efforts on those same communities and report more crimes there. Meanwhile, other areas with potentially the same or higher levels of crime are policed less.
AI is capable of generating deceptive images, audio, and videos that can be employed to make it appear as though an individual did or said something they did not. This, in turn, could be used to influence elections or create fake pornographic images to harass individuals, among other potential misuses. AI-generated images circulated widely on social media in the lead-up to the recent U.S. presidential election, including counterfeit images of Kamala Harris, which Musk himself re-posted. In May, the U.S. Department of Homeland Security stated in a bulletin distributed to state and local officials, and seen by the press, that AI would likely offer foreign operatives and domestic extremists "enhanced opportunities for interference" during the election.
Furthermore, in January, over 20,000 individuals in New Hampshire received a robocall—an automated message played over the phone—that used AI to impersonate Biden's voice advising them against voting in the presidential primary race. Steve Kramer, who admitted to being behind the robocalls, worked for the longshot Democratic primary campaign of Rep. Dean Phillips against Biden. Phillips' campaign denied any involvement in the robocalls. In the past year, targets of AI-generated, nonconsensual pornographic images have ranged from prominent women like Taylor Swift and Rep. Alexandria Ocasio-Cortez to high school girls.
Dangerous Misuse and Existential Risk
AI researchers and industry insiders have highlighted even more significant risks posed by the technology. They range from ChatGPT providing easy access to comprehensive information on how to commit crimes, such as exporting weapons to sanctioned countries, to AI breaking free from human control. "You can use AI to construct very sophisticated cyber attacks, you can automate hacking, you can actually create an autonomous weapon system that can cause harm to the world," said Manoj Chaudhary, chief technology officer at Jitterbit, a U.S. software company.

In March, a report commissioned by the U.S. State Department warned of "catastrophic" national security risks presented by rapidly evolving AI, calling for "emergency" regulatory safeguards alongside other measures. The most advanced AI systems could, in the worst case, "pose an extinction-level threat to the human species," the report said. A related document said AI systems could be used to execute "high-impact cyberattacks capable of crippling critical infrastructure," among a litany of risks.

In addition to Biden's executive order, his administration also secured pledges from 15 leading tech companies last year to enhance the safety of their AI systems, though all of those commitments are voluntary.
And Democrat-led states like Colorado and New York have passed their own AI laws. In New York, for example, any company using AI to assist in recruiting workers must engage an independent auditor to verify that the system is free from bias. A "patchwork of (U.S. AI regulation) is emerging, but it's very fragmented and not very comprehensive," said Strait at the Ada Lovelace Institute. It's "too soon to be certain" whether the incoming Trump administration will expand those rules or roll them back, he noted. However, he is concerned that a repeal of Biden's executive order would signal the end of the U.S. government's AI Safety Institute. The order established that "incredibly important institution," Strait told the press, assigning it the task of scrutinizing risks emerging from cutting-edge AI models before they are released to the public.
It's possible that Musk will advocate for stricter regulation of AI, as he has done previously. He is set to play a prominent role in the next administration as the co-lead of a new "Department of Government Efficiency," or DOGE. Musk has repeatedly expressed his concern that AI poses an existential threat to humanity, even though one of his companies, xAI, is itself developing a generative AI chatbot. Musk was "a very big proponent" of a now-scrapped bill in California, Strait noted. The bill was aimed at preventing some of the most catastrophic consequences of AI, such as those from systems with the potential to become uncontrollable. Gavin Newsom, the Democratic governor of California, vetoed the bill in September, citing the threat it posed to innovation. Musk is "very concerned about (the) catastrophic risk of AI. It is possible that that would be the subject of a future Trump executive order," said Strait.

However, Trump's inner circle is not limited to Musk; it also includes JD Vance. The incoming vice-president said in July that he was worried about "pre-emptive overregulation attempts" in AI, arguing they would "entrench the tech incumbents that we already have and make it actually harder for new entrants to create the innovation that's going to power the next generation of American growth." Musk's Tesla (TSLA) can be described as one of those tech incumbents. Last year Musk dazzled investors with talk of Tesla's investment in AI, and in its latest earnings release, the company said it remained focused on "making critical investments in AI projects," among other priorities.