The internet can’t stop talking about an AI program that can write such artful prose that it seems to pass the Turing Test. College students are writing papers with it, internet marketers are using it to write marketing copy, and numerous others are just having earnest and fun conversations with it about the meaning of life. The AI chatbot in question is called GPT-3, and it’s the latest iteration of a long project from the company OpenAI. Short for “Generative Pre-trained Transformer 3,” GPT-3 is what computer scientists call a large language model (LLM).

Yet all of this hullabaloo surrounding GPT-3 obscures one simple fact about LLMs: they are essentially text generators. Very complicated ones, to be sure, but they are not “smart” in the human sense; and though they may sound like people you are conversing with, this is all smoke and mirrors. There is no brain there.

Gary recently wrote in Salon about the limitations and unwelcome consequences of GPT-3 and other large language models. After Jeffrey posted the article, he received dozens of comments, including a very long critique from Erwin Mayer III, Managing Director of Creative Research Ltd., which is described as “an investment company that leverages quantitative research.”

Mayer’s riposte to the Salon story echoes a common sentiment among AI defenders, and it is a particularly nice illustration of how our human instinct toward anthropomorphization can seduce us into believing that LLMs have human-like intelligence. Mayer writes: What makes you think […]