“I think what we are seeing with ChatGPT is that it reproduces the bias in whatever source documents it finds. It's only a simulation of intelligence. If asked "Is Christianity true?", it can find and summarise the reasons why Billy Graham thought it is, and why Richard Dawkins thinks it isn't. That may be helpful to someone who is working through both sides of the argument to reach their own conclusion. But I don't see how AI can truly grasp that there are abstract concepts behind phrases such as "A creation requires a creator" or "a quantum singularity exploded in a Big Bang", and evaluate the two options. The AI could be programmed to ascribe 100% reliability to anything it finds in the Bible, or for that matter 5% reliability, but then the answer you get is what the programmer believes, not an independent, unbiased logical determination.” — Colin

Absolutely agree. This is an artifact of the datasets on which large language models (LLMs) are trained. They are not thinking for themselves, however clever they sound; they are simulacra of intelligence. They are getting really good, but we still lack actual independent thought. LLMs have solved a few novel problems, and how those solutions were derived remains something of a mystery, but we aren't really in the realm of independent thought yet. I want AI to generate its own questions and then solve those. Then I will be impressed. I expect we remain a decade or two away from that. Still, we are about as far along as I thought we would be back in 1990. I wish I had written down my predictions; I had 2020 for where we are now instead of 2024. So, close.