
Faculty Insights

WSB Faculty Share Research on Generative AI

Three professors spotlight works-in-progress at AI-focused event

By Wisconsin School of Business

November 3, 2023

Professor Neeraj Arora shares his insights on generative AI during the opening session. Photo by Joshua Prado/Wisconsin School of Business

Wisconsin School of Business faculty are actively researching the use of generative artificial intelligence (AI) in a range of disciplines. Some of this work was in the spotlight recently as part of an applied learning day for Business Badgers. The event, sponsored by WSB and co-led with industry partners, focused on generative AI (an area of AI that can create text, images, and other media) and large language model applications such as ChatGPT.

Vallabh “Samba” Sambamurthy, Albert O. Nicholas Dean of WSB, delivered the opening remarks, welcoming students to an event designed to provide a foundational introduction to AI models and their applications, the changes and challenges that come with the increased prevalence of AI tools, and the impact of AI on the future of work.

Students benefited from presentations and panel discussions with top industry representatives, including Maruthy Vedam, director of custom chip development, Google; Levent Koc, principal engineer, Google; Glenn Fung, VP and head of data science and machine learning, Liberty Mutual Insurance; Scott Culpepper, general counsel, Mailchimp; Dennis McRae, managing partner, Velocity AI; and Ben Hayum, University of Wisconsin–Madison senior, Wisconsin AI Safety Initiative.

Three faculty members from WSB’s departments of marketing and operations and information management shared insights from current works-in-progress relating to generative AI. Below are some takeaways from that ongoing research:

Improving market research

Traditional marketing research of past decades has relied on interviews, focus groups, and surveys. Today, researchers can add social media data to that mix by including social posts, video transcripts, and other user-generated content. The result is a wealth of rich textual data to analyze—and that’s something large language models can help with, says Ishita Chakraborty, Thomas and Charlene Landsberg Smith Faculty Fellow and an assistant professor of marketing.

Chakraborty is currently working on a project with Neeraj Arora, Arthur C. Nielsen, Jr. Chair in Marketing Research and Education and professor of marketing, and Yohei Nishimura (MS ‘23) on how large language models can be used to automate the monitoring of market research data. Previously, researchers working with online customer feedback data needed purpose-built natural language processing models (systems trained on labeled examples to parse and interpret human language) to detect a consumer service failure, such as a missed flight or a stolen credit card. Now, ChatGPT can cut through a significant amount of the human labor involved.

“It’s much more cost effective now because that question [of urgency] can probably easily be answered by just asking the large language model, ‘Hey, I have this consumer query. I missed a flight. I need help now. Is this urgent?’” Chakraborty says.
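That kind of triage can be reduced to a single classification prompt. As a minimal sketch, here is how such an urgency check might look using the OpenAI Python client; the model name, prompt wording, and labels are illustrative assumptions, not details from the study:

# Minimal sketch of LLM-based urgency triage for customer feedback.
# The model name, prompt wording, and labels are illustrative
# assumptions, not details from the WSB study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_urgent(query: str) -> bool:
    """Ask the model to label a consumer query as urgent or not."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You label customer queries. Reply with exactly "
                        "'URGENT' or 'NOT URGENT'."},
            {"role": "user", "content": query},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip() == "URGENT"

print(is_urgent("I missed my flight and need to rebook before tonight."))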

A second tier of the study looks at data generation and the in-depth interviews used by marketing firms, which require multiple steps, including survey design and monitoring of the execution.

“The most important thing is you need respondents, and you need quality respondents that have to be different—they have to come from different demographics,” says Chakraborty. “[Our research asks] which arms of this whole design can we really automate? For example, can we have synthetic respondents who can be prompted to behave like certain people and give you answers? Or a hybrid where real people are still taking the surveys, but the large language model is the one moderating this interaction? We’re experimenting with these different arms.”
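The synthetic-respondent arm boils down to giving a model a persona and letting it answer survey questions in character. A minimal sketch, again with the OpenAI Python client; the persona, model name, and question are hypothetical placeholders, not materials from the study:

# Sketch of the "synthetic respondent" idea: prompt an LLM to answer a
# survey question in the voice of a demographic persona. The persona,
# model name, and question are hypothetical, not from the study.
from openai import OpenAI

client = OpenAI()

persona = ("You are a 34-year-old suburban parent of two who shops for "
           "groceries weekly on a tight budget.")

def synthetic_answer(question: str) -> str:
    """Have the model answer a survey question in character."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(synthetic_answer("How do you decide between store brands and name brands?"))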

Identifying what to generate

Generative AI is now capable of producing many types of content usable in advertising, from images to music.

For a creator, given the proliferation of images on the web and the existence of generative AI, the tricky part is deciding which content is best to use. Standard A/B testing (comparing two versions of a prototype to see which one works best) is not a realistic approach when billions of variations can be produced in one click.

Remi Daviet, assistant professor of marketing, proposes combining several types of AI to solve this problem. The goal is to efficiently train a second AI to predict the performance of each ad variation and to recognize the best performers. The system carefully generates informative batches of ads and tests them, so the predictive AI rapidly learns to identify ads that appeal to viewers.

The challenge is that training the predictive AI is expensive and demands ad performance data, says Daviet. His method is therefore designed to maximize the informativeness of each batch, drastically reducing the number of ads that need to be tested in the field. When the algorithm identifies that a type of design does not work, it stops generating variations of that design and instead focuses on the more promising ones.
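In structure, this is an active-learning loop: generate candidates, field-test the most informative ones, update the predictor, and prune weak design families. The skeleton below sketches that loop with random stand-ins; every function, name, and threshold is a hypothetical placeholder, not the team's actual implementation:

# Active-learning skeleton for the generate-predict-test loop described
# above. All functions are random stand-ins; names and thresholds are
# hypothetical placeholders, not the team's implementation.
import random

def generate_variations(n, design_families):
    """Stand-in for a generative model producing n ad variants."""
    return [f"{random.choice(design_families)}-v{i}" for i in range(n)]

def predict_performance(ad):
    """Stand-in for the predictive AI's estimated appeal of an ad."""
    return random.random()

def field_test(ads):
    """Stand-in for running ads and observing real performance."""
    return {ad: random.random() for ad in ads}

families = ["beach", "mountain", "city"]
observed = {}
for _ in range(5):
    candidates = generate_variations(20, families)
    # Field-test only a small, promising slice of each batch (the real
    # method selects batches to maximize informativeness).
    batch = sorted(candidates, key=predict_performance, reverse=True)[:5]
    observed.update(field_test(batch))
    # Drop design families whose best tested variant underperforms;
    # untested families survive by default. Never empty the pool.
    families = [f for f in families
                if max((score for ad, score in observed.items()
                        if ad.startswith(f)), default=1.0) > 0.3] or families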

“We train the predictive AI to learn what is a good ad and to predict their performance,” says Daviet, who is working with his team on deploying the model and testing it in the field.

For the study, Daviet and Nishimura are working with a Japanese travel agency to design and test imagery that creates a desire to explore the world.

“We are thinking that maybe if we can generate landscapes in ads that create this desire to travel, we can be more effective,” Daviet says.

Early results suggest that his approach is able to generate ads that outperform 99.9% of their peers after fewer than 50 designs are tested.

Exploring whether to ban generative AI in Q&A communities

Qinglai He, assistant professor of operations and information management, wanted to buy a gift for a friend’s wedding recently, but instead of a registry, the bride informed her that she and her future husband were asking only for cash. New to this tradition, He later wondered: How much is appropriate?

It’s the kind of question you could ask on Quora, a knowledge-sharing online platform, and ideally get a satisfactory answer from another (human) Quora user. But with Quora and similar sites now offering ChatGPT responses alongside user-generated content, the online platform landscape has shifted. Such changes are the context for He’s latest research-in-progress on what happens when generative AI is banned in online community forums like Quora that have traditionally branded themselves as human-only sites.

“The very practical question that I want to answer in this study is, should AI be possible for platforms that rely on user-generated content?” He says. “Should they embrace or ban AI-generated answers?”

Using data from Stack Exchange, an online Q&A network of over 180 communities, He compared data before and after the platform’s ban on artificial intelligence-generated content (AIGC) across three main areas: knowledge demand (the quantity of questions posed); knowledge supply per question (how many responses each question receives); and the efficiency of knowledge provision (the turnaround time to get a satisfactory answer).

With AIGC banned, knowledge demand (the number of questions posed) rose by 13%. Knowledge supply (the number of responses per question) did not change significantly. Notably, the efficiency of knowledge provision decreased by 3.3% after the ban.
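The comparison itself amounts to a before-and-after percent change on those three metrics. A sketch on made-up numbers chosen to mirror the reported effects; the column names and values are illustrative, not the study's dataset:

# Before/after comparison on illustrative numbers (not the study's data).
import pandas as pd

df = pd.DataFrame({
    "period": ["pre_ban", "post_ban"],
    "questions_posted": [10000, 11300],        # knowledge demand (+13%)
    "answers_per_question": [2.1, 2.1],        # knowledge supply (flat)
    "hours_to_accepted_answer": [12.0, 12.4],  # turnaround +3.3% slower,
}).set_index("period")                         # i.e., efficiency down

pct_change = (df.loc["post_ban"] / df.loc["pre_ban"] - 1) * 100
print(pct_change.round(1))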

“Our findings suggest that banning AI-generated content has double-edged sword effects,” says He, “and imposes an important tradeoff between knowledge demand and knowledge provision efficiency on Q&A websites.” 

The effects are greater for non-STEM communities, she says. For example, an answer to an ethics question with grey areas on human behavior might be more desirable coming from a human respondent. However, answerers may tend to rely more on ChatGPT to generate responses to exactly those questions. Community managers on online platforms can help by fielding which questions go where and by implementing policies that align with those decisions.

