As I plunged into the MR world after a decade of working on computational physics and artificial intelligence, the first thing that struck me was the tremendous technological overhang: the solutions dominating the market were laden with inefficiencies that existing technology could already address. So, with a focus on artificial intelligence, I set out to break down the primary functions of market research and plot the trajectory that disruption fueled by artificial intelligence (AI) was likely to take.


First, advances in natural language processing (NLP) and machine learning models will lead to the automation of data analysis (indeed, they already are). Second, advances in natural language generation and other deep learning techniques will drive the automation of data collection, via conversation with humans and passive data digestion. Third, with data automatically collected and analyzed, AI will learn by example how humans turn these insights into reports, leading to the automation of report generation.

Lastly, building on the layers of automation described above, AI can begin to generate and understand the landscape of possible actions to take. Then, using the insights derived from the available data, it will map those actions to their most likely outcomes. After weighing those outcomes against the desired goal, the AI will recommend the action, or course of action, that yields the highest probability of achieving that goal.

Along this path it is important to understand the role a ‘goal’ plays: AI can be thought of as a machine that optimizes variables to achieve a specific goal. As AI grows more powerful, its ability to achieve goals grows with it. Thus, we may use this paradigm as a lens to examine the morality of specific goals by considering what would happen if those goals were achieved to perfection.

Market Research AI Goals

For market researchers, three broad categories of goals stand out: the first is to better understand the needs of consumers so that a product can be developed to better meet those needs; the second is to better understand how to accurately communicate a product offering to consumers so that they know how well it meets their needs; the third is to understand the psychology of consumers so that they may be manipulated into buying products that they do not necessarily need.

If we extrapolate the first two goals, we arrive at a world where companies produce products that perfectly meet the needs of consumers, and where consumers perfectly understand which products best meet their needs. In this world, products are produced when a need is identified, and they evolve quickly along with consumers’ needs.

In contrast, if we extrapolate the third goal, we get a world where people’s core beliefs about themselves and the world they inhabit are systematically distorted in order to maximize the number of products they buy (or the candidates they vote for). In this world, the focus is not on products that meet needs, but on how to manipulate people’s psychology into believing they need a product a company already produces.

To me it is clear that building towards one of these worlds is morally permissible and the other is not. One makes human lives better, while the other merely maximizes profits. With this in mind, I believe it is important that we, as an industry of market researchers, remain aware of the consequences of the goals we work to achieve. We hold a key role in shaping the world our children and grandchildren will inhabit, and we must take that responsibility seriously.

Andrew Konya, CEO

Andrew Konya is the founder and CEO of Remesh. A computational physicist by training, he has spent the past 8 years developing and applying artificial intelligence and machine learning algorithms to problems in material science, bio-sensing, traffic, image analysis, and language. His most recent focus is on developing artificial intelligence to engage and understand large crowds of people with Remesh.

Find out more at Remesh.



  1. I have to admit it took two readings for me to catch up to your thesis, but the title was intriguing so I stuck with it, yay me! I do think I get what you’re driving at and it is an important question and really, I believe one that advertising has been struggling with since at least the era of the “Mad Men.”

    My inner punk rocker says, “F’em all! I don’t need advertising to tell me what to think, and adding AI to that just scares the crap out of me.” Basically amplifying your heartfelt conclusions. But putting that aside…

    I’d push your postulation a step further. When you say, “…considering what would happen if those goals were achieved to perfection,” understanding the consumer’s needs and communicating how a product meets those needs counts as the ideal outcome. But even if the AI evaluates human emotional states to a perfect conclusion, human reaction by its nature isn’t always rational (rarely, even), and is therefore imperfect.

    Utilizing a tool like Remesh does advance the ability to guess more accurately what people will desire given the right stimulus. So how to use the tool is the moral question, I think.

    As you say, the tool might be used to “…understand the psychology of consumers so that they may be manipulated into buying products that they do not necessarily need.” So is that too cynical? Maybe, but people are more complicated on both sides of that equation. I know that there are many times when I fall in love with a product or a brand and really don’t care to analyze why. Certainly there are times when I’m manipulated into craving a product (Apple wireless earbuds, I’m looking at you). And I know that’s what is happening, but I don’t care, or more accurately I rationalize why I “need” them, or in some cases I really do need them and don’t “know” that I do until after I’ve made the emotional connection.

    So what’s the solution then? I think the future is as broad and inclusive as big data, but it also requires a very granular approach, like ethnography, where you sit down with one person, eye to eye, and really come to understand them. Those are only the inputs, though; the morality lies in how we choose to use the tools we’re given.

    At least that’s what my inner punk rocker is comfortable with.