Monday, December 23, 2024

Will AI models kill the market research industry? Not necessarily

In a world where LLMs can process and interpret a vast and diverse range of perspectives on any subject within seconds, what is the relevance of traditional market research agencies that rely on interviews or focus-group discussions with just a few hundred respondents to extract insights into business problems?

Have LLMs, with their limitless data and analytical prowess, sounded the death knell for the global market research industry? I decided to test this hypothesis.

In my three-decade-long career, one of the most intriguing human behaviour problems I encountered was the trespassing accident problem in Mumbai. Every day, 10-12 lives are tragically lost as people attempt to trespass across the city's railway tracks.

This is the largest cause of unnatural death in Mumbai city. To understand the real reason why these fatal accidents happen, my team spent hundreds of hours, spread across several months, along the railway tracks of India’s commercial capital.

After we had spent a lot of time at numerous accident spots, it slowly dawned on us that these were all places where oncoming trains were clearly visible from both sides.

We also realized that at track-trespassing zones where the visibility of an approaching train was very poor, due to some hindrance or the other, accidents were far less likely. This led us to a crucial question: why do people get hit by a train they can see coming towards them?

My team had to read several books and articles on how the human brain processes visual stimuli, and then conduct several hours of discussion and debate, to arrive at the right answer to the question at hand. That long research told us that people were getting hit by trains they could see coming at them because the human brain has a key deficiency.

The human brain underestimates the speed of a large object on the ground by close to 40%. This underestimation of the speed of an oncoming train, the result of a brain deficiency, was the main cause of Mumbai’s railway track accidents.
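To see why that matters, here is a rough back-of-the-envelope sketch in Python. The train speed, the distance, and the exact 40% figure below are illustrative assumptions, not measurements from our study.

```python
# Rough illustration with assumed numbers: a train approaching at 60 km/h,
# first seen from 200 m away, with its speed underestimated by about 40%.
actual_speed_kmh = 60
perceived_speed_kmh = actual_speed_kmh * (1 - 0.40)  # what the brain "reports"
distance_m = 200

actual_time_s = distance_m / (actual_speed_kmh / 3.6)        # ~12 seconds
perceived_time_s = distance_m / (perceived_speed_kmh / 3.6)  # ~20 seconds

print(f"Actual time to impact:  {actual_time_s:.0f} s")
print(f"Time the brain expects: {perceived_time_s:.0f} s")
```

A trespasser who believes there are 20 seconds to cross, when there are really only 12, steps onto the track with fatal confidence.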

This was possibly one of the most surprising insights I have had while working on a human behaviour problem, and it took my team several months of research to arrive at it. The crucial question for us today is whether superfast LLMs can provide the right answer to such a question in far less time.

To test the powers of an LLM, my colleague gave ChatGPT the question: “Why do people get hit by a train they can see coming towards them?”

Biting my nails, I waited for the answer. It came in a second or so. I scrolled down and started reading what ChatGPT had just prepared for me. The answer had five points. The first was about trespassers being overconfident.

No surprise in that, nor in the second point. Then, there was the third point: “Size-Speed Illusion—Trains appear larger and slower than they actually are due to the brain’s difficulty in estimating the speed of large objects moving toward us.” Wow. So, there it was. ChatGPT’s answer was spot on.

An answer that had taken my team several months of effort to find, an LLM had produced in just a second. What does this mean for the future of market research? Does this mean that LLMs can effectively replace our existing long-drawn market research processes?

No doubt, these models can be considered good storehouses of the world’s past knowledge. So, one can safely assume that LLMs should be able to give the best possible answers, provided relevant questions are asked of them.

In the case of our trespassing problem, one could have asked a general question like “What causes trespassing accidents in Mumbai?” The answer to such an ordinary question would have been along expected lines.

But truly unique insights into human behaviour are unearthed once the questions raised are as unique and evocative as: “Why are people getting hit by a train that they can see coming towards them?”
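For readers who want to try the contrast themselves, here is a minimal sketch of posing both questions to an LLM programmatically. It assumes the OpenAI Python client and a model name such as gpt-4o; neither is specified in the exercise described above, which used the ChatGPT interface directly.

```python
# Minimal sketch: the ordinary question versus the evocative one.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name is an assumption, not the one used in the article.
from openai import OpenAI

client = OpenAI()

questions = [
    "What causes trespassing accidents in Mumbai?",                         # ordinary framing
    "Why do people get hit by a train they can see coming towards them?",   # evocative framing
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": q}],
    )
    print(q)
    print(response.choices[0].message.content)
    print("-" * 40)
```

The difference between the two answers, rather than the tool itself, is where the insight lies.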

So, how intelligently we frame questions for AI chatbots has become more important than ever. How does someone consistently come up with such intelligent questions?

Intelligent questions come from a deep understanding of the context of the problem. It is while standing at those accident spots that your brain starts seeing patterns between high-risk spots and the visibility of trains. Your brain also compares those risky locations with other trespassing locations where accidents do not happen.

You notice that the visibility of an oncoming train is very poor at the safer locations. It was from the churn of processing all these observations that the critical question popped up.

If we had not spent time understanding the real context of the problem on the ground, it is unlikely that we would have come up with an intelligent question.

In general, librarians know a lot less than those who read all the books in the library and make the most sense of them. No doubt, LLMs are the best storehouses of all the answers in the world and AI tools draw upon these resources.

But the future belongs to those who develop the ability to ask the most evocative and intelligent questions of LLM-based AI tools.
