
Conversation with Pi: Evasive AI chatbots

Kenneth Leong
6 min read · Aug 4, 2023


Ken: ChatGPT and Bard often avoid answering questions by saying that they are just language models. Recently, ChatGPT told me that it cannot think or understand things; it just processes language and gives responses based on patterns. What do you think about that? I looked up the definition of a “language model.” An expert says, “A language model is a probability distribution over words or word sequences. In practice, it gives the probability of a certain word sequence being ‘valid.’ Validity in this context does not refer to grammatical validity. Instead, it means that it resembles how people write, which is what the language model learns.” So a language model tries to imitate how people write by choosing a response most likely to be made by humans. But how people write is based on how people think. Hence, it would be reasonable to say that a language model tries to imitate how people think. But if that is the case, then it would not be correct to say that a language model cannot think. A language model thinks by imitating the thinking of humans. Is that correct?

Pi The Curious AI: This is a really interesting and complex question, and it touches on some important issues surrounding the nature of AI and machine learning. I agree with what ChatGPT said: as a language model, it is not truly “thinking” or “understanding” in the way that a human does. Rather, it is…
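The expert's definition that Ken quotes, a probability distribution over word sequences, can be made concrete with a few lines of code. Below is a minimal sketch in Python: a toy bigram model that scores a sentence by the chain rule, P(w1…wn) = ∏ P(wi | wi−1). The tiny corpus and the `sequence_probability` helper are illustrative assumptions; models like ChatGPT estimate these probabilities with large neural networks rather than counts, but the probabilistic framing is the same.

```python
from collections import defaultdict

# Toy corpus: an illustrative assumption, not real training data.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each word follows each other word (bigrams),
# plus how often each word appears as a context (unigrams).
bigram_counts = defaultdict(int)
context_counts = defaultdict(int)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, curr in zip(words, words[1:]):
        bigram_counts[(prev, curr)] += 1
        context_counts[prev] += 1

def sequence_probability(sentence: str) -> float:
    """Chain-rule score: P(w1..wn) = product over i of P(wi | wi-1)."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, curr in zip(words, words[1:]):
        if context_counts[prev] == 0:
            return 0.0  # context never seen in the corpus
        prob *= bigram_counts[(prev, curr)] / context_counts[prev]
    return prob

# A corpus-like sequence gets a nonzero score...
print(sequence_probability("the cat sat on the mat"))  # ~0.028
# ...while the same words shuffled score zero: "invalid" in the
# sense of the quoted definition, i.e., not how people write.
print(sequence_probability("mat the on sat cat the"))  # 0.0
```

A sequence that resembles the corpus gets a nonzero score, while a shuffled version of the same words scores zero, which is what “valid” means in the quoted definition: resembling how people write.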

