> when it comes to tasks rooted in logic like number crunching or data processing, where it is more obvious how AI can do the tasks...
Assuming they are talking about LLMs, this statement makes me suspect the authors do not know how LLMs work. Their blurbs at the bottom of the article say they are all marketing professors so perhaps they are a bit out of their element here.
This suggests the authors hold an overly optimistic view of LLMs, which would explain their surprise that people who understood AI mechanics embraced it less.
Hmm, usually that might be a pretty good indicator that the thing ain't great, mate.
Since we're on HN, I fully expect a rationalist to show up and tell us why we're so, so wrong.