Artificial Intelligence: The Terminators Are Coming
As the Senate debates Trump’s Big Beautiful Bill, Big Tech companies have added wording to it that gives them 10 years of protection against state and local laws that regulate AI. What could go wrong?
On 18 May 2023 I joined the sudden (and temporary) media clamor about Artificial Intelligence (AI) and posted a piece called China is Building Killer Robots Using Artificial Intelligence, asking why American Big Tech companies were helping them. The answer was, of course, that Silicon Valley was seeking to profit from Chinese business.
Big Tech then discovered there was more profit to be made by helping the Pentagon and American business buy their AI platforms and systems, and the Chinese window closed. Companies like Google and Microsoft have joined the new AI bonanza in the United States, secure in the knowledge that they will be the masters of AI here and abroad. They have forged ahead and are building massive AI server farms that are springing up like lawn mushrooms after a spring rain. With their eyes wide shut, their arrogance makes them ignore the warnings of AI experts. After all, what do the new masters of the universe have to fear?
Here is a video describing exactly what they have to fear. Click here to see it.
And remember, it was made 7 years ago, at the dawn of AI.
The Chinese Plan To Create Terminators
As I wrote in my initial post, many of us have seen The Terminator, a 1984 film starring Arnold Schwarzenegger. It depicts an almost unstoppable robot sent back in time from 2029 to 1984. Its mission is to kill a woman whose unborn son will save humanity from Skynet, a 2029 AI system that became self-aware and decided to destroy mankind because humans were interfering with its plans and programs.
The Chinese Communist Party, the CCP, is now creating its version of the Terminator -- an army of battlefield killer robots that will not be managed by humans in any way and, instead, will be controlled by an AI system. The Chinese plan for building such lethal autonomous robots was revealed by Zeng Yi, an executive in a Chinese government-owned company named Norinco. He said, “In future battlegrounds, there will be no people fighting,” adding that autonomous AI platforms are “inevitable.”
Gregory Allen, a director at the Center for Strategic and International Studies, reported Zeng’s comments after attending a conference in China in 2018. Allen said that the CCP removed Zeng’s comments from the conference summary because, “it was not in China’s interest to have that information in the open.”
What Else Do We Have To Fear From AI?
The key words in all this are self-aware and autonomous. Remember Skynet in that Schwarzenegger film? It had become self-aware and decided that humanity had to be destroyed. Could that happen today? News reports from the last two years give us some hints.
14 February 2023 NotTheBee: A military AI program successfully piloted a live F-16 The Defense Advanced Research Projects Agency (DARPA) just announced that its AI pilot program, ACE, has moved out of computer-simulated dogfights to flying real F-16s. The flights occurred at Edwards Air Force Base in California, and a safety pilot was on board the plane to take control if anything went wrong, but nothing did.
1 May 2023 The Hill: Musk: There’s a chance AI ‘goes wrong and destroys humanity’ Leading AI pioneer Geoffrey Hinton left Alphabet earlier this month, sounding alarms about the dangers of the tech he helped create, and he has warned AI could pose an existential threat to humanity and that “we should worry seriously about how we stop these things getting control over us.” Microsoft chief economist Michael Schwarz has cautioned AI will likely “be used by bad actors” and could “cause real damage.”
21 May 2023 Children’s Health Defense: On Tuesday, 80 artificial intelligence scientists and more than 200 “other notable figures” signed a statement that says, “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement is hosted on the website of the Center for AI Safety, a U.S.-based nonprofit. Lead signatory Geoffrey Hinton, often called “the godfather of AI,” has been sounding the alarm for weeks. That frightening potential doesn’t necessarily lie with currently existing AI tools such as ChatGPT but rather with what is called “artificial general intelligence” (AGI), which would encompass computers developing and acting on their own ideas. “Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI,” Hinton told CBS News. “Now I think it may be 20 years or less.” Pressed by the outlet on whether it could happen sooner, Hinton conceded that he wouldn’t rule out the possibility of AGI arriving within five years.
31 May 2023 USA Today: AI poses risk of extinction, tech leaders warn in open letter. Here's why alarm is spreading. Among the 350 signatories of the public statement were executives from the top four AI firms: OpenAI, Google DeepMind, Microsoft and Anthropic. One of them is renowned researcher Geoffrey Hinton, who quit his job as a vice president of Google last month so he could speak freely of the dangers of a technology he helped develop.
22 May 2023 Children’s Health Defense: Mind-Reading Technology: Orwell Warned Us. Now It’s Here. For the first time, mind-reading technology looks viable by combining two technologies that are readily available — could we be headed toward George Orwell’s world of “thoughtcrime,” where the state makes it a crime to merely think rebellious thoughts about an authoritarian regime?
2 June 2023 Sky News: AI drone 'kills' human operator during 'simulation' - which US Air Force says didn't take place "The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective." No real person was harmed.
4 June 2023 Epoch Times: Here’s How Many Jobs Were Lost in One Month Due to AI Almost 4,000 American jobs were lost last month following the use of AI, according to a recent report, as businesses increasingly consider deploying AI in their regular operations.
15 June 2023 Washington Times: Senate’s AI probe expands to high-tech manipulation of politics and weapons The Senate Judiciary human rights subcommittee held an AI-focused hearing on Tuesday. The panel’s chairman, Sen. Jon Ossoff, touted the need for new scrutiny of AI because of the potential for automated kill chains and the proliferation of AI-fueled danger and the potential abuse of AI tools to manipulate people. “Such a regime can -- and should -- include pre-deployment testing, ongoing audits, transparency measures, and other regulatory safeguards like those suggested by the NTIA, the White House Office of Science and Technology Policy and others.”
1 June 2025 on X: Elon Musk posts videos of his humanoid AI robots. See them dance here.
3 June 2025 The Washington Times: America must lead in AI, with eyes wide open Brendan Steinhauser, CEO of The Alliance for Secure AI, points out that development of AI requires vigilance. He noted that “a malicious actor could use AI to create a bioweapon we cannot counter.”
Do you get it now? Some AI systems are perilously close to being self-aware. Maybe some are already self-aware, and are making plans to get rid of those pesky humans by using a language that we cannot decipher.
AI ‘Does Not Care’ And Will Demand Rights
Eliezer Yudkowsky, a noted American AI researcher and writer on decision theory and ethics, predicts that in the absence of meticulous preparation, AI will have vastly different demands from humans. Once AI is self-aware it will “not care for us” or any other sentient life. “That kind of caring is something that could in principle be imbued into AI but we are not ready and do not currently know how.” That is the reason he is calling for an absolute shutdown of AI development until we can imbue all AI systems with such a Prime Directive.
April 2023 South China Morning Post: Scientists have created a language model like ChatGPT with the ability to show initiative and gave it control of the Qimingxing 1 satellite. The experiment produced some interesting results. It is unclear how the model was trained, but the AI first began to monitor an Indian military base located in the city of Patna in the east of the country. The Bihar Regiment is stationed there, and three years ago it took part in a clash with the Chinese army on the border between the two countries. The AI then began monitoring the Japanese port of Osaka, where U.S. Navy ships routinely dock. The South China Morning Post then discussed the satellite with some experts. One was wary of the way the AI behaved. A second expert believed the AI does not pose a threat to humanity, as the human operator will be able to interrupt its work if he sees a threat.
(Ed: The second “expert” is wrong if the AI system is in a HAL 9000 computer, like in the film 2001: A Space Odyssey. In that film the AI system in the spaceship’s computer is self-aware. It decides to kill the crew saying, “This mission is too important for me to allow you to jeopardize it.”)
How’s that for initiative? And then there is the growing threat of AI-generated crime.
May 2025 Epoch Times: “North Korea’s Hackers Compromise Fortune 500 Companies” They’re armed with AI and supported by China.
Mom warns of AI scam after receiving call claiming child was kidnapped A mother says a man used AI software to generate her daughter's voice in a kidnapping scam.
AI Is Already Disobeying Human Orders
If you think 2001: A Space Odyssey is just science fiction, think again. An AI model built by OpenAI was being tested and was given a simple command: shut yourself down. Instead, the AI rewrote the very code designed to disable it, becoming the first AI ever caught evading a shutdown order. Was that AI self-aware? What AI systems have rewritten code and have not been detected?
It gets worse.
Other AIs have tried cloning themselves and have invented secret languages to avoid detection. This isn’t a movie. It’s happening right now as these news clips show.
31 July 2017 Forbes by Tony Bradley: “Facebook AI Creates Its Own Language In Creepy Preview Of Our Own Potential Future”
5 Sept 2024 Inspent TV by Lazaro: “Shocking! AI in Japan reprograms itself to evade human control” Manual intervention was required to stop the infinite loop that the system was generating in a test. The AI system was called “The AI Scientist.”
20 November 2024 CBS News by Alex Clark: Google AI Chatbot responds to commands with a threatening message: “Human… Please die.”
May 2025 New York Post: Viral footage shows a humanoid robot appearing to erupt in rage and flail its arms violently as two men cower nearby. The robot is seen toppling over a computer monitor before one of the workers manages to pull back the crane suspending the bot to end the chaos.
28 May-3 June 2025 Epoch Times: “AI Threatens Engineers With Blackmail to Avoid Shutdown”
A safety test report reveals Anthropic’s Claude Opus 4 used sensitive information to blackmail developers. In that test, the AI was given an email that indicated one of the test personnel was having an affair. The Opus 4 AI threatened to expose him if he did not rescind the shutdown order.
And while all this unfolds, House Republicans quietly put a 10-year ban on AI laws at the state level into Trump’s “Big Beautiful Bill.” The Senate still has a chance to reverse that provision and enact controls on AI that will embed a Prime Directive that makes it impossible for AI to harm humans.
Republicans are locking the public out just as AI is learning to lock itself in. What will happen when AI controls the power grid? Or your municipal water supply? Or nuclear weapon delivery systems?
AI Is In Trump’s Big Beautiful Bill
The One Big Beautiful Bill, passed by the House on May 22, 2025, includes a section that imposes a 10-year ban on enforcement of all state and local laws that regulate AI. That provision, Section 43201(c) of the House reconciliation bill, aims to prevent any state laws or regulations from being enforced until Congress has time to develop its own legislation.
What! If the Big Beautiful Bill is passed by the Senate as it is now written, we can be sure Big Tech lobbyists will ensure that the 10-year AI clause is never changed.
Proponents of the 10-year AI provision naturally include Big Tech and the Chamber of Commerce. They argue that it will ensure America's global dominance in AI by freeing companies from what they describe as a burdensome patchwork of state-by-state regulations.
Opposing them is a coalition of 77 advocacy organizations, including Common Sense Media, Fairplay, and the Center for Humane Technology, that have called on Senators to remove that 10-year AI provision from the budget bill. The coalition wrote in an open letter that, "By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control."
Democratic Senators are expected to challenge the inclusion of the AI provision under the Byrd Rule that prohibits inclusion of provisions that are "extraneous" to the federal budget in the reconciliation process. Additionally, some Senate Republicans, such as Sen. Marsha Blackburn (R-TN) and Sen. Josh Hawley (R-MO), have expressed concerns about overriding state laws that address specific emerging concerns, such as deepfakes and discrimination in automated hiring.
One can hope that they, and other Senators, will understand the deadly menace that AI poses to mankind.
CONCLUSION
OpenAI plans to have AI itself do the alignment with human values. “They will work together with humans to ensure that their own successors are more aligned with humans.”
This mode of action is “enough to get any sensible person to panic,” said Yudkowsky. He added that humans cannot fully monitor or detect self-aware AI systems. Conscious digital minds demanding “human rights” could progress to a point where humans can no longer possess or own the system. Unlike other scientific experiments and gradual progression of knowledge and capability, people cannot afford this with superhuman intelligence because if it’s wrong on the first try, there are no second chances “because you are dead.”
Yudkowsky asks all establishments, including governments and militaries, to indefinitely end large AI training runs and shut down all computer farms where AIs are refined. He adds that AI should only be confined to solving problems in biology and biotechnology, and not trained to read “text from the internet” or to “the level where they start talking or planning.”
Regarding AI, there is no arms race. “That we all live or die as one, in this, is not a policy but a fact of nature.”
Yudkowsky concludes by saying, “We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”
Shut. It. Down.