
I'm not rejecting human input; I'm acknowledging that certain types of specialist questions receive far less input overall, and that you are often better off, and certainly faster, with today's common AI, especially with the right background information.

Experience shows it is factually incorrect to claim that people won't post invalid answers to specialist questions. Opinions are like a-holes; everybody has one. Therefore, opinions (statistically) outweigh knowledge, an effect made worse by the well-documented human tendency to favor low cognitive effort.
Meaning: on basic knowledge questions, "crowd intelligence" works. But the more specialized the questions get, the dumber the crowd is. For an example, look up the "Mandela effect".
LLMs answer based on statistics, meaning they give majority opinions more weight: their answers depend on how often they have seen similar output in their training data.
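To make that concrete, here is a deliberately crude Python toy. No real LLM works this simply; the corpus, the helper name frequency_weighted_answer, and the 95/5 split are all made up for illustration. It only shows how frequency weighting lets a popular wrong answer drown out a rare expert one:

```python
import random
from collections import Counter

# Toy "training corpus": one question, many scraped answers.
# The expert answer is correct but rare; the popular answer is wrong but common.
corpus = ["popular but wrong"] * 95 + ["rare expert answer"] * 5

def frequency_weighted_answer(corpus: list[str]) -> str:
    """Pick an answer with probability proportional to how often it
    appears in the data -- a crude stand-in for a statistical model
    weighting majority opinions."""
    counts = Counter(corpus)
    answers, weights = zip(*counts.items())
    return random.choices(answers, weights=weights, k=1)[0]

# Sample many times: the expert answer surfaces only ~5% of the time.
samples = Counter(frequency_weighted_answer(corpus) for _ in range(10_000))
print(samples)
```

Run it and the expert answer shows up roughly in proportion to its share of the corpus, regardless of which answer is actually correct.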
Problem 1 with this frequency weighting is selection bias in how the training data is picked, favoring one source over another.
Usually, business people favor "cheap" options, like grabbing stuff from Reddit or crawling the public internet, where they find lots of "opinions" but not necessarily expert "knowledge", typically accumulated with a blatant disregard for the copyright of others.
Problem 2 is unflagged LLM hallucinations copied (by humans or machines) from earlier AI sources.
Statistics-based models drift anyway, but the moment unflagged AI output is used as training input for the next iteration, that deterioration accelerates. When you take statistically smoothed (and hallucinated) output as input to train the next LLM, and then the next, and so on, each generation regresses further toward the statistical mean, losing the tails of the distribution first. This effect, known in the literature as model collapse, has been shown experimentally to severely degrade models within as few as 3-5 generations.
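A minimal numeric sketch of that collapse dynamic (a toy, not the published experiments: the "model" is just a fitted Gaussian, and the cut at 1.5 sigma is an assumed stand-in for a model over-sampling its own high-probability, majority content):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data. The tails hold the rare expert answers.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(1, 6):
    # Fit a simple statistical model (here: just mean and std) to the
    # previous generation's output, then train on samples from that fit.
    mu, sigma = data.mean(), data.std()
    samples = rng.normal(loc=mu, scale=sigma, size=10_000)
    # Mimic majority-favoring sampling: keep only high-probability
    # output, discarding everything beyond 1.5 sigma of the mean.
    data = samples[np.abs(samples - mu) < 1.5 * sigma]
    print(f"generation {generation}: std = {data.std():.3f}")
```

The standard deviation shrinks every generation; the tails, where the rare expert answers live, are the first thing to disappear.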
The gist of it is: since the rise of mass media, uninformed majority opinions have increasingly drowned out minority expert knowledge by sheer volume of content. A trained AI cannot distinguish between the two; it treats all content as equally correct, making a bad situation worse.
Meaning: LLMs may help when you need an answer that is common knowledge, or when you are exploring a problem and don't know where to start. But when you require a specialist answer, they cannot, by design, help you find the expert knowledge you are looking for, as long as expert voices remain a statistical minority.