It’s actually not easy to ensure that an LLM will cite a correct source, in the same way it’s not easy to ensure that it will provide accurate information. Generation is based on token probability, not on a deterministic lookup of “this data came from this source.” The model could make something up entirely, then write “Source:” and then probabilistically write “Wikipedia,” simply because those tokens often follow “Source:” in its training data.
If you have an AI bot that looks up information in real time, then citing the source is easy. But for a trained LLM, the training process is highly destructive: the original text is not preserved, only probabilistic relationships between tokens.
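To make the point concrete, here’s a deliberately tiny sketch (not a real LLM, and the probability table is invented for illustration): the sampler picks whatever token tends to follow “Source:”, with no record anywhere of where the preceding claim actually came from.

```python
import random

# Invented next-token probabilities, standing in for what a model
# might have learned from co-occurrence in training data.
next_token_probs = {
    "Source:": {"Wikipedia": 0.6, "Britannica": 0.3, "Reuters": 0.1},
}

def sample_next(context: str) -> str:
    """Pick the next token purely by learned probability.

    Nothing here tracks whether the claim being 'cited' ever
    appeared in the chosen source -- that linkage does not exist.
    """
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

claim = "The Eiffel Tower is 330 m tall."  # could just as well be fabricated
print(claim, "Source:", sample_next("Source:"))
```

The citation comes out looking plausible regardless of whether the claim is true, which is exactly the problem.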
It’s a fun toy. It’s not a research aid, it’s not a productivity tool, and it’s not particularly useful in the workplace.
It’s honestly very similar to the VR craze of a few years back. Silicon Valley invented a fun toy and then tried to convince everyone that it would transform the workplace. Meetings in VR and simulated workstations and all that. Ultimately everyone figured out that VR is useless in the workplace, and Silicon Valley was just trying to find ways to sell its fun toy. Now we’re going through the same lesson with AI.