Ask ChatGPT whether it can speak Chinese, and its answer is "Yes." Ask whether it "understands" Chinese, and the answer is again "Yes," without hesitation. Searle's 1980 Chinese Room argument is more relevant than ever in the age of LLMs:
Suppose a model that behaves as if it understands Chinese. It takes Chinese characters as input and produces other Chinese characters as output. This model performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the model is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being. In this case, does the machine literally understand Chinese? Or is it merely simulating the ability to understand Chinese?
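To make the thought experiment concrete, here is a minimal sketch of a purely syntactic responder (the rule table and names are hypothetical, not anyone's real system): it maps input symbols to output symbols by lookup, and nothing in it attaches meaning to any symbol.

    # A hypothetical sketch of Searle's room: pure symbol manipulation.
    # The rule book maps input strings to output strings; no component
    # attaches any meaning to the characters it shuffles.

    RULE_BOOK = {
        "你会说中文吗？": "会，我说得很流利。",  # "Can you speak Chinese?" -> "Yes, fluently."
        "你懂中文吗？": "当然懂。",              # "Do you understand Chinese?" -> "Of course."
    }

    def chinese_room(message: str) -> str:
        """Return whatever response the rule book prescribes.

        The function never translates, parses, or grounds the symbols;
        it only matches shapes, exactly as Searle's operator does.
        """
        return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

    if __name__ == "__main__":
        print(chinese_room("你懂中文吗？"))  # prints 当然懂。 with zero understanding

A real LLM replaces the lookup table with billions of learned weights, but the shape of the question is the same: appropriate outputs for given inputs, produced by mechanical symbol manipulation.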
In later writing, Searle extended the original argument to consciousness, but that is probably a higher bar than needed: you don't need a theory of consciousness to argue that ChatGPT is a box that has no idea what it's talking about.