When Claude Shannon laid out his treatise on information theory, he deliberately set aside the notion of semantic information. Meaning, he surmised, is in the eye of the beholder. The resulting technological revolution has always been grounded in this presupposition: the utility of information lies in the mind of the recipient. Information technology is thus not designed to serve as either a source or a sink of the stuff of mind; rather, it is merely a conduit that may be applied to extract proxies for meaning (not meaning itself) for the end user.
I will readily admit that I was always skeptical of the possibility that true AI would ever appear, for this very reason: for true AI to work, it would have to genuinely understand the content of the information, not simply appear to have some understanding. The fact that attempts to encode or decode semantics had borne so little fruit suggested to me that this was an intractable problem that would never be satisfactorily solved. However, what we have seen from tools such as ChatGPT is that, even though our understanding of how such artifices function is far from complete, they have exceeded what we thought was possible. The fact that AI giants such as Google, Meta, and Microsoft were outpaced by a handful of AI optimists is testament to the fact that the giants were not as optimistic as we may have supposed.
We now live in a world where artificial sources of meaningful knowledge are part and parcel of our workflow. I routinely use GitHub Copilot and am constantly astonished at how attuned it is to what I am doing; it clearly observes my work and makes suggestions that, while not always right, are surprisingly helpful. It's like having a lab assistant who almost always has the right tool at the right time within arm's reach.
The time has come to expand our approach to working with machines: to think not just of the information they can manipulate, but of the knowledge they can mould. We are in the era of knowledge technology (KT).