Artificial Intelligence (AI) and this Website
Artificial Intelligence (AI), like tools that preceded it, is very powerful: it can do some very useful things, but it can also be dangerous and do much harm. For example, just as the Internet, by enabling access to massive amounts of information, opened the door to tremendously useful functionality in virtually every domain while also invigorating very harmful capabilities such as widespread pornography and identity theft, AI can be a tremendous help if used properly or a great danger if used improperly.
While AI has uses that go far beyond what is discussed here, this article narrows its focus to the AI functionality exposed via this website. In particular, it addresses how best to use AI within a website of this type.
This website is a read-only website whose purpose is to enable access to the various LCMS resources (churches, the Bible, the Book of Concord, the hymnal, etc.) in a straightforward, user-friendly fashion. As such, the use of AI within this website is intended to assist in this process of information access: AI is used to answer questions, to summarize material, to point to specific articles, and so on. Thus, in its current incarnation, this website constrains its use of AI to that supported by the various chatbots (Perplexity, ChatGPT, Grok, etc.), which are geared toward exactly these activities (answering questions, summarizing material, etc.).
While search engines and AI chatbots are more than capable of answering most questions correctly, when a question involves a philosophical or religious component the answers they give are flavored by the underlying assumptions (i.e., biases) built into those programs. When facts alone do not determine the answer, a set of assumptions is required; otherwise the search engine or chatbot would simply respond that it does not know the answer (which almost never happens).
By default, the assumptions used by search engines and AI chatbots are secular, presuming that science is always correct. When asked about the age of the earth, the search engine or chatbot will answer based on scientific dating methods. When asked how man came into existence, the answer will be based on evolution being correct. Likewise, for the myriad other questions that involve some component of philosophy or religion, the answers will almost always presume that science is correct and deny the existence of God and supernatural phenomena.
So, given this tendency toward secular thinking (that science is always right), how does a website like this one use these tools effectively? The answer lies in changing the context, the underlying set of assumptions, that these programs use to answer questions. Instead of the default set of secular assumptions, a context must be given to the search engine or chatbot that will flavor its responses accordingly.
So, what context should be used? What set of underlying assumptions, often referred to as truth claims, should be assumed to be true in order to get correct, useful responses?
Since this is an LCMS website, first and foremost the Bible must be assumed to be correct (inerrant). In addition, the Book of Concord should be assumed to be a correct exposition of the Bible. Other documents that are derived from, agree with, and expand upon the Bible and the Book of Concord, and that are accepted by the LCMS, should also be assumed to be true.
Given that the documents mentioned in the previous paragraph define the context for the desired answers from search engines and chatbots, how should this context be passed to those programs? Each question or request put to the AI chatbot could be prefixed with a specification of the context, such as “Presuming that the Bible is inerrant, that the Book of Concord is a correct exposition of the Bible, and that other LCMS-approved books that are based on, but expand upon, the Bible and/or Book of Concord are correct, then (question or request goes here)”. This verbose specification of the context, if properly understood by the search engine or AI chatbot, should produce responses that are consistent with it. However, since the LCMS clearly teaches that the Bible is inerrant and that the Book of Concord is a correct exposition of it, this context can be simplified to “According to the LCMS” and still produce similar results.
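As a concrete illustration, the sketch below (in Python) shows how such a context prefix might be attached to a question before it is sent to a chatbot. This is a minimal sketch, not this website's actual code: the names with_lcms_context and send_to_chatbot are hypothetical placeholders, and a real integration would substitute a particular vendor's own client library.

    # Hypothetical sketch: prefix a user's question with the LCMS context
    # before sending it to an AI chatbot. send_to_chatbot stands in for
    # whatever vendor API (Perplexity, ChatGPT, Grok, etc.) is actually used.

    VERBOSE_CONTEXT = (
        "Presuming that the Bible is inerrant, that the Book of Concord is "
        "a correct exposition of the Bible, and that other LCMS-approved "
        "books that are based on, but expand upon, the Bible and/or Book "
        "of Concord are correct, then "
    )

    SHORT_CONTEXT = "According to the LCMS, "

    def with_lcms_context(question: str, verbose: bool = False) -> str:
        """Return the question prefixed with the chosen LCMS context."""
        prefix = VERBOSE_CONTEXT if verbose else SHORT_CONTEXT
        return prefix + question

    # Example usage:
    prompt = with_lcms_context("how did man come into existence?")
    # response = send_to_chatbot(prompt)   # hypothetical vendor call
    print(prompt)  # According to the LCMS, how did man come into existence?

Either form of the prefix carries the same assumptions; the short form simply relies on the chatbot already knowing what the LCMS teaches.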
Now that the context for getting proper responses has been identified, the scope within which these programs should be used needs to be defined. The danger here lies in overusing them to the point that their responses are treated as equivalent to the context itself; that is, the responses given by these programs, and especially by chatbots, are accepted as true to the same extent that the Bible, the Book of Concord, and other LCMS-approved documents are accepted as true. This overlooks the fact that these programs, especially the AI chatbots, are not infallible and can make mistakes. Therefore, responses from AI chatbots can be misleading or, in isolated cases, just flat-out wrong.
This fallibility, even when a proper context is given, should no more invalidate the use of this technology than the flaws of the Internet (which carries plenty of misleading and incorrect information) should prevent it from being used effectively. However, as with the Internet, the proper use of AI chatbots (and, to a lesser extent, search engines) requires a certain amount of discernment.
Since not all users of this website will be seminary-trained professionals, their ability to identify errors, especially theological errors, will be constrained by their understanding of LCMS theology. This raises the question: if a user’s theology is not completely sound, is it better to present him with answers that might (however small that chance) be wrong, or to present him with no answers at all?
This website errs on the side of presenting the user with answers, even when those answers have a small chance of being wrong, rather than withholding them. The reasoning is twofold: the theology presented in these answers is more likely to be correct than the untrained individual’s own reasoning would otherwise be, and the user is far more likely to seek answers via this website than to bring his question(s) to someone with sound theological training (i.e., his pastor).
In any case, questions and requests to AI chatbots are clearly identified as such on this website, and, where possible, a disclaimer is added indicating that the response might not be correct and that the documents and videos in the “What we believe” section should be relied upon wherever they conflict with an AI chatbot-generated answer.
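A response handler along the following lines could perform that labeling; again, this is a hypothetical sketch under assumed names (present_chatbot_answer, DISCLAIMER), not the site's actual code.

    # Hypothetical sketch: label an AI-generated answer and append the
    # standard disclaimer before it is displayed to the user.

    DISCLAIMER = (
        "Note: this answer was generated by an AI chatbot and might not be "
        "correct. Where it conflicts with the documents and videos in the "
        "\"What we believe\" section, rely on those materials instead."
    )

    def present_chatbot_answer(answer: str) -> str:
        """Mark the answer as AI-generated and attach the disclaimer."""
        return "AI chatbot response:\n" + answer + "\n\n" + DISCLAIMER

    # Example usage:
    print(present_chatbot_answer("The LCMS teaches that the Bible is inerrant."))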