I think one needs to be very cautious with AI. There are way too many stories of AI just making things up. A few months ago a lawyer got in big trouble with a judge when he used AI to research case law for a trial. It turned out all of the cases cited by AI were just made up.
Two more stories. First, at my workplace, I asked ChatGPT about a situation in which I'm our resident specialist. It cited out-of-date, incorrect info but presented it as current. Only when I challenged it on that point did it admit it did not have access to the current info. Left unchallenged, I would have thought I had the right answer when I didn't.
In another situation, I was recently talking to a fellow who thought he was being stalked by the post office because he was seeing mail trucks everywhere. It was a mental obsession with no basis -- he never worked for the post office, nor had he had any dispute or encounter that would explain why the PO would single him out. However, he had been asking AI about the stalking, and the program was actually encouraging him in the delusion -- he showed me the AI responses on his phone. Unbelievable that AI would actually encourage a person with mental issues in their delusions.
AI has its place in the world, but one still needs to be careful when using it.