Dud? Or the devil’s own weapon?

April 28, 2026 | Silver Spring, Maryland, United States | Shane Anderson

“Many shall run to and fro, and knowledge shall increase” (Dan. 12:4). Whether or not the prophet Daniel had artificial intelligence (AI) specifically in mind when he penned those words is debatable. But in the grand scheme of the end of time, it’s difficult not to see at least some connection. AI, after all, at least in its current configuration, is the accumulation of vast amounts of information into single gigantic engines of “knowledge.”

And for what purpose? Ostensibly, these vast computing systems aim at the betterment of humanity through curing diseases, helping end extreme poverty,[1] and even ushering in a new era of brotherhood and peace.[2] For AI’s more extreme advocates, the millennial fever William Miller combated in the nineteenth century may be returning as secular utopianism in the twenty-first century.

Of course, not everyone is sanguine about AI’s goodness. Some, including many Christians, find it far easier to envision AI’s capacity for disaster, including with regard to end-time prophetic fulfillment. Many, for instance, see AI taking a significant role in the establishment of end-time Babylon—a global union of church and state that would presumably require substantial computing power, including for enforcing the mark of the beast. Whatever good AI might do in the meantime, many say, beware how the story ends.

Three Cautions

I’m no AI expert. But like many of you, I have read on the topic and watched my share of relevant documentaries and news pieces. This informal research has led me to at least three cautionary conclusions about AI and the end of the world.

First, there is little doubt that AI, even in its somewhat incubatory state, is capable of substantially impacting end-time events. I am not saying it will substantially impact them. But the capacity is clearly already here. The computational muscle inherent in AI is well suited, for instance, to surveilling, influencing, and potentially deceiving population groups on a global scale—again, something that could conceivably be essential for actualizing the aforementioned mark of the beast scenario. Thoughtful Christians would thus do well to keep AI developments on their radar, rather than dismissing them as technological snake oil.

Second, it is crucial to remember that for all its potential for good or ill, AI remains a mere means to an end. It is the mechanic’s tool, not the mechanic. I point this out because, though in the minority, I have personally heard well-intended Christians declare that artificial intelligence is the end-time antichrist power, and that Christians everywhere ought to be fighting against any AI development to stave off the apocalypse.

But the Bible clearly indicates otherwise. True, there are legitimate reasons to be concerned about the development of AI. But no one will be lost at the end of time directly because of AI; they will be lost because of what they have done with Jesus. Might AI negatively influence what someone does with Jesus? Certainly. But as any gunner in time of war will tell you, being even a few degrees off in the identification of an enemy can lead to distraction, wasted resources, and at times great loss. Satan and sin—not AI-wielding computing arrays—are the real enemy. Paul declared that our fight is “against principalities, against powers, against the rulers of the darkness of this age, against spiritual hosts of wickedness in the heavenly places” (Eph. 6:12). Even if all traces of AI were somehow eliminated today, the great controversy would still rage on unabated. Maintaining the right distinction between the enemy and the enemy’s weapons can increase our success in advancing Christ’s kingdom in these last days.

Third and last, while God alone knows the future, the real risk of AI today, in my opinion, is not primarily apocalyptic, but far more mundane, and that in at least two ways.

First, the inappropriate use of AI may be diminishing our aptitude for critical thinking. One prominent study recently found “a significant negative correlation between frequent AI tool usage and critical thinking abilities.”[3] The author of the study rightly pointed out that this decrease does not stem from AI use itself but from its misuse: using AI as a total replacement for human analysis, reflection, or problem-solving. In other words, if AI writes your term paper for you, you may get a good grade, but your linguistic and communication skills will likely have been diminished in the process.[4] And the longer this kind of dependence on AI goes on, the greater the negative effects. The resulting loss of analytical reasoning and independent thinking bodes ill not only for secular concerns but also for spiritual ones. One does not have to be a genius to be saved. But Christ’s invitation to “come . . . , let us reason together” (Isa. 1:18) clearly shows that a sound mind is a valuable asset to the spiritual seeker.

Second, AI is dramatically increasing the proliferation of bad ideas and our susceptibility to them. Millions of teens and young adults in the United States, for instance, use AI as a ready source for dealing with mental health issues. Of teens ages 13 to 17, 72 percent have used AI companions,[5] and a third of users choose AI companions over humans for serious conversations.[6] The results? While some AI-generated mental health advice is accurate and helpful, much of it can also be harmful or even catastrophic.[7] Moreover, the explosion of AI-generated images and video, while a boon for many honest online content creators, is also becoming a propagandist’s dream come true. Deepfakes, fake news articles, and bot-driven chat and blog responses are making it increasingly difficult to discern what is true.[8] For a movement such as Adventism, which is built on the promise “You shall know the truth, and the truth shall make you free” (John 8:32), this is an ominous development—both at the very end of time and in the day-to-day grind we live in now.

What to Do?

In my opinion, the ongoing rise of AI calls Adventists to do at least three things. First, as stated above, we must stay informed about AI’s development. Second, we must embrace the undeniable good that AI can do and already is doing, particularly in medical care and other efforts to relieve human suffering. AI is not going away; let’s affirm what we can. Third, while recognizing AI’s strengths, we need to be proactive in putting in place appropriate constraints to prevent AI’s liabilities from proliferating. This includes personally encouraging those within our sphere of influence to use AI rightly. It may also call on us to add our voice to the growing number of Christians advocating healthier checks and balances on AI and its developers.

Knowledge, Rightly Leveraged

AI may indeed become a substantive part of end-time events. But as Seventh-day Adventists, let’s keep in mind that until then, its potential to distract from Jesus and the three angels’ messages is already in high gear. AI has led to more fulfillment of the prediction that “knowledge shall increase” than perhaps anything else in human history. Let’s do what we can to ensure that knowledge leads to Christ rather than away from Him.


[1] See also University of California, Berkeley professor Joshua Blumenstock’s work in this area at https://www.jblumenstock.com/files/papers/EOP.pdf.

[2] See also Usanas Foundation, “Art and Artificial Intelligence: A New World Order,” Sept. 2, 2025, accessed at https://usanasfoundation.com/art-and-artificial-intelligence-a-new-world-order.

[3] M. Gerlich, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,” Societies 15, no. 1 (2025): 6, https://doi.org/10.3390/soc15010006.

[4] For a fascinating study on this very scenario from Massachusetts Institute of Technology, see “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task,” https://arxiv.org/pdf/2506.08872v1.

[5] AI companions are AI-driven digital platforms that are designed to engage in lifelike conversation with users.

[6] See Common Sense Media’s research at https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf.

[7] K. R. Head, “Minds in Crisis: How the AI Revolution Is Impacting Mental Health,” Journal of Mental Health and Clinical Psychology 9, no. 3 (2025): 34-44; Adrian Preda, “Special Report: AI-Induced Psychosis: A New Frontier in Mental Health,” Psychiatric News 60, no. 10 (Sept. 29, 2025), https://doi.org/10.1176/appi.pn.2025.10.10.5; Kashmir Hill, “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In,” New York Times, Aug. 26, 2025, https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html; “OpenAI, Microsoft, Sam Altman Sued for Wrongful Death in Murder-Suicide Case,” Axios, Dec. 11, 2025, https://www.axios.com/2025/12/11/openai-sam-altman-lawsuit-murder.

[8] John Villasenor, “Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth,” Brookings Institution, Feb. 14, 2019. While I don’t agree with all her conclusions, see also Nadia Nafi, “Deepfakes and the Crisis of Knowing,” Oct. 1, 2025, https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing.