AI is quite good for doing this ...
Discussion Summary
The discussion on the Audiogon Forum focuses on the utility and limitations of AI tools, particularly ChatGPT, in the context of High Fidelity (HIFI) audio systems. Participants share their experiences, highlighting both the benefits and drawbacks of using AI for audio-related advice, including system evaluation, gear comparison, and troubleshooting.
Key Points:
- Usefulness of AI: Many users found AI helpful for initial information gathering, equipment comparisons, and generating basic recommendations, appreciating its ability to provide quick and detailed information.
- Accuracy Concerns: A recurring concern was the potential for AI to generate inaccurate or misleading information, including incorrect specifications or illogical recommendations. Users emphasized the need to verify AI-generated content with other reliable sources.
- Subjectivity vs. Objectivity: AI tools were noted to struggle with the subjective nature of audio evaluation and personal preferences, often relying heavily on data without fully accounting for individual listening experiences.
- Learning and Improvement: While some observed that AI can learn and correct errors over time through user feedback, there was also a caution that AI might inadvertently incorporate user biases as facts.
- Other AI Tools: Besides ChatGPT, other AI tools such as GROK, Google Gemini, and Perplexity were also mentioned in the discussion.
Areas of General Consensus:
The overarching consensus is that AI tools can serve as a valuable starting point for audio-related research, offering quick access to information and comparisons. However, it's crucial for users to exercise critical judgment, verify the accuracy and relevance of AI-generated content, and not solely rely on it. AI is viewed as an assistive tool rather than a replacement for human expertise and personal listening preferences.
User Participation
- Total Users Contributed: 33 distinct users participated in the discussion.
- Users Who Made More Than One Post:
- emergingsoul
- dunham_john
- triton20trx
- signaforce
- duckworp
- guscreek
- User with Highest Participation:
- emergingsoul had the highest participation with 4 posts.
Here's what I will contribute
I suggest that if people are going to take AI seriously, they need to be just as serious about understanding how it works and what its limitations are. Key points from my experience:
- Ask it how it generated an answer, especially if the answer sounds questionable, and you will learn how it works (assumptions, inference, etc.).
- ChatGPT only commits some things to your account's memory. You can ask it to save things to memory, but the free version has limited memory capacity. The memory can be edited, and it applies across all chats.
- Otherwise, ChatGPT only 'remembers' your content within individual chats and says each chat is independent. You can put quite a lot into one chat, as broad and deep as you want to go. You can correct errors within a chat and it will respond on that corrected basis, because every response in a chat takes into account all the information in that chat.
- It uses a combination of the information it was trained on and search results, and it can treat opinion and marketing as fact; it can make many things sound quite plausible even when they are quite wrong. It is probably possible to get it to filter information from search results with prompts such as "respond only using your training and do not incorporate search results" or "outline all relevant search results, their sources and credibility". Then you can choose what information it should use in its response, for example "review your response to consider search results xyz".
- Do not trust it: interrogate it, test results across multiple models and AI tools, and use it to gain new information and perspectives to consider.
- Learn how to prompt and be prepared to interrogate. Sometimes I start a new chat on the same subject with a prompt based on lessons learned from a previous chat, improving the prompts and questions I ask and the information I provide on the second attempt for a better result.
I suspect the reason many of these tools are free is that the providers are learning from people's questions, probably to inform what training to do or what capabilities to develop, or to better understand what people are asking. I imagine free accounts use a huge amount of computing capacity, so there would have to be a benefit to whoever is paying that cost; I doubt the models are learning from or being trained by people's responses. Ultimately, I suppose the objective is to motivate more people to pay for a subscription once they decide to graduate from the limitations of the free versions.