What surprises me in most of the ongoing AI discussions is how small a role intent plays in them. To me, the biggest differentiator between human and artificial intelligence is the notion of intent (besides the non-transferability of an AI agent's domain knowledge). Where humans do everything with an intent, AI agents simply, well, execute an algorithm: be it a neural network identifying whether there is a cat in a certain picture, a robot trying to bake an egg, or a model drawing pictures in the style of Van Gogh. AI agents don't want to do those things; it is simply the only thing they can do once they are initiated. On a much deeper level, some would argue that humans are not much more than behaviouristic pattern recognisers, but that discussion is still highly philosophical and not very relevant at the current state of technology. If you are interested in these matters, I highly recommend reading everything by the likes of Daniel C. Dennett and Steven Pinker.

Back to intent. It is like in art: it is not that hard to make a painting that looks like it is from Gerrit Rietveld, but the difference is that he had an intent when creating a piece. He wanted to convey a message, tell a story, inspire, impress. For the same reasons there are almost no pieces of AI-generated music that really touch people's hearts, while every piece written by Bach does. Bach had an intent with his music, just like Rietveld had with his work. Once people understand, consciously or unconsciously, the intent behind a certain piece of art, they can relate to it much more easily and thus be touched by it. Agents can imitate and remix from a huge pile of content, but they don't do so with intent.
Surprisingly, and no pun intended, missing out on intent is also quite human. Everybody reading this must have stood in a gallery at some point in their life, staring at an abstract painting, claiming that their five-year-old nephew could have created something similar. True, the nephew could have drawn something very similar, but he wouldn't have done so from a similar intent, and that is the difference between a piece of art and a doodle by a five-year-old.
The same goes for AI agents when it comes to recognising things. They can only describe what has happened, not why it happened. An agent knows nothing about the intent that might have caused a phenomenon. And if there was no intent, it has no way of seeing the bigger picture that might have led to the phenomenon. The latter is the other blind spot of the AI community: an AI can look no further than the data it was trained on. Agents lack world knowledge, and transfer learning between completely different domains is still one of the holy grails of AI, and will be for decades to come.
So an AI agent can neither create something with intent nor recognise the intent driving a certain phenomenon. That doesn't mean agents are useless, au contraire, but it is good to be well-informed about both the possibilities and the limitations of AI, and about what causes them.