...the founder of Uber has said that he feels like working with a chatbot has helped him come close to making new discoveries in theoretical physics. Musk thinks that when Grok hallucinates information about materials science that he has never seen in books (or, to be more generous, information Musk simply doesn't know how to look up in real books), that's an indicator that Grok is thinking.
One of the lessons of LLMs is that faking coherence is not that difficult. Even when LLMs hallucinate, they sound convincing.
...people treat these word calculators as friends and therapists.
...we should be much more serious about regulating these systems and about thinking through what intelligence really means.
from this post on Daily Kos