We are at a point where our artificial intelligence (AI) systems—namely large language models (LLMs)—are becoming increasingly convincing while still suffering from fundamental flaws that researchers have repeatedly pointed out. And while "AI fooling humans" has been discussed since the ELIZA chatbot of the 1960s, today's LLMs operate at an entirely different level.
However, "sentience" and "consciousness" are not the most productive topics of discussion for LLMs and current AI technology. A more important conversation concerns human compatibility and trust, especially as these technologies are being prepared for integration into everyday applications.