Operations and Information Management Seminars

Innovation Mini-Conference

Saif Benjaafar (UMich), Bin Hu (UTD), Jeffrey Hu (Purdue), Sebastian Steffen (BC)

April 23, 2026 | 1:30 pm

The OIM department hosts some of the foremost researchers in Innovation in Operations and Information Systems during a two-day mini-conference, April 23 & 24, 2026.

April 23, 2026

Bin Hu

Professor of Operations Management

University of Texas at Dallas

Content Business Models in the Era of Large Language Models: Licensing, Creation, and Regulation

2:15 – 3:15pm, Grainger Hall 4151

Abstract:

Many Internet creators allow free access to their content via search engines while monetizing page visits through advertisements. Large Language Model-based answer engines summarize content for users, thereby eliminating visits to the original content and threatening this business model. We study a new content business model in which answer engines license original content and charge subscription fees to users. We find that this model hinders original content creation except when laws do not require explicit access permission (opt-out) and creators do not license content to answer engines. We also show when opt-in or opt-out laws yield higher social welfare.

Saif Benjaafar

Seth Bonder Collegiate Professor of Industrial and Operations Engineering

University of Michigan

Naor Revisited: Matching Queues with Strategic Agents

3:30 – 4:30pm, Grainger Hall 4151

Abstract:

We study a two-sided queue in which strategic agents arrive continuously over time on each side and decide whether to join by comparing their matching reward with the expected delay cost until matching. Unlike single-sided queues, where self-interested agents unambiguously overjoin (Naor 1969), entry in a two-sided queue generates two opposing externalities: a negative delay externality on the same side and a positive matching externality on the opposite side through market thickening. We show that, as a result, equilibrium participation may exhibit either overjoining or underjoining relative to the social optimum. We characterize equilibrium and welfare-maximizing joining thresholds and quantify the resulting price of anarchy. The welfare-maximizing solution can be achieved via prices (possibly negative on one side and positive on the other) while generating positive revenue. Interestingly, maximizing total welfare need not increase welfare on both sides; one side’s welfare may decline if offset by sufficiently large gains on the other side. Under profit maximization, a platform may optimally extract all surplus from one side. Despite these distortions, the welfare gap between profit maximization and social welfare maximization is bounded. Both a social planner and a profit-maximizing platform internalize cross-side and congestion externalities when setting prices.

Related paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6104866.

April 24, 2026

Sebastian Steffen

Assistant Professor of Business Analytics

Boston College

Technology Adoption and Innovation: Evidence from the Diffusion of Digital Technology Skills

10:00 – 11:00am, Grainger Hall 2520

Abstract:

This paper examines how the timing of technology adoption relates to firm innovation and performance using a novel innovativeness measure based on firms’ digital technology skill adoption patterns. Combining skill demands from firms’ job postings with financial data and technology release dates, I classify firms according to Rogers’ (1962) diffusion of innovations framework based on when they adopt 248 emerging digital skills released between 2010 and 2022. I find that firms that consistently adopt new skills before their peers generate significantly more patents and higher patent value than their late-adopting peers. However, the relationship is non-monotonic: “Early Adopters” outperform both “Innovators” (i.e., the earliest 2.5%) and late-stage adopters. Specifically, Early Adopters produce 68% more patents and 46% higher patent value than Late Majority firms, while Innovators produce 75% fewer patents and 83% lower patent value relative to Late Majority firms. I also document strong peer effects in adoption patterns and find that more recently released technologies diffused significantly faster than older ones, suggesting accelerating technology cycles and increased firm awareness. These findings have important implications for understanding the rapid diffusion of AI skills: if the non-monotonic adoption-performance relationship holds for AI, the current race to adopt it may disadvantage the very first movers while benefiting those who follow closely behind. These results highlight the importance of adoption timing (not just adoption itself) for realizing innovation benefits and provide micro-founded evidence on how technology diffusion shapes firm-level outcomes in the digital economy.

Jeffrey Hu

Accenture Professor of Information Technology

Purdue University

Scaling Laws of AI Alignment: The Impact of Human

11:15am – 12:15pm, Grainger Hall 2520

Abstract:

Despite substantial investment, many organizations fail to generate value from Large Language Model (LLM) deployments due to a “value bottleneck.” This paper argues that value realization depends on alignment with organizational standards: aligning LLM outputs with the nuanced standards of key decision-makers. Through a randomized field benchmark exercise conducted with a consulting firm, we evaluated LLM performance in quality-check tasks using Retrieval-Augmented Generation (RAG). Our results demonstrate that scaling reasoning data (expert logic) is more effective than scaling the raw knowledge base, yielding performance gains of 10.8 to 37.2 percentage points. We identify a power-law relationship between reasoning data volume and performance, showing that smaller models with high-quality reasoning data can outperform larger models. These findings suggest that reasoning data enables LLMs to rapidly learn expert patterns, offering a strategic pathway for organizations to bridge the gap between deployment and value.