Integrity
Basically, I mean that whatever you submit (at work or at school) should be your thoughts, your ideas, and mostly, your words. Now, if you did use an AI for some of the work, then, by all means, cite the AI. Your integrity rests on whether people can trust you, and that trust is built on knowing that you are honest about what work is yours and what was done by another creative entity, be that human or AI. Are you giving credit to those who helped you along the way? That is integrity.
Validation of Response
If you are going to be held responsible for the work turned in (at work or at school), then you are responsible for the accuracy of the report.
How do you go about validating your report (the output of the AI)?
1 - Be aware that the AI may "hallucinate" or produce wrong answers. Awareness is helpful; it will hopefully keep you vigilant and on the lookout for incorrect information.
2 - Look for illogical statements. Generative AI is trained to write fluently, not factually.* Watch for illogical statements and contradictions within the response; again, it is trained to write, not to be factual. Also, generative AIs are prone to flattery: you may be wrong, but the AI will tell you that you are right.
3 - Check sources used by the AI. If your AI tool or chatbot lists the sources it used, check them out. Are the sources legitimate resources? Do they produce facts or fiction? Are they sources of information or entertainment?
4 - Check individual statements. Can you validate each individual statement? Because you are the one responsible for what is produced, you need to be able to stand behind each statement made. The problem with using an AI for research is that you still have to do research to validate and correct the AI.
* I love this statement and want to give credit to Steve Hargadon of Library 2.0 for it. It is one of the best summations of the abilities of Large Language Models.