AI: Artificial Intelligence. What is it and to what extent will it affect both the business world and our everyday lives? While there’s no single agreed-upon definition, AI is often described as the process of simulating or replicating human thought and interactions.
We asked Wisconsin School of Business faculty to share their thoughts on what an AI-influenced future would look like. Some say it’s already here. Below are highlights from their responses.
Dani Bauer, Hickman-Larson Chair in Actuarial Science and an associate professor in the Department of Risk and Insurance
AI has already affected insurance in multiple ways. I think one of the biggest challenges associated with AI going forward is ethical considerations related to algorithms, such as algorithmic fairness and algorithmic biases. As predictions become smarter, is it sufficient to say, for example, that you cannot use race or gender in certain areas? How do you deal with the advance of AI in policy-relevant areas where we already have existing ideas? It is not clear how these ideas interact with AI or what to do about it.
Technology will replace jobs. Hopefully, those people will land elsewhere, and within a generation, you would just have people picking different jobs and the landscape of employment would change. I don’t necessarily see this as a good or bad thing.
The unique challenge that I see with AI relative to other technologies is that the interactions with policy are not well understood. There are questions like, how do you keep algorithms in check in order to satisfy certain constraints? Right now, there’s no obvious and clear solution.
Emily Griffith, Cynthia and Jay Ihlenfeld Professor for Inspired Learning in Business and an assistant professor in the Department of Accounting and Information Systems
Audit firms have been working on developing ways to use AI because it’s a big opportunity to eliminate some of the mundane tasks that take a long time and don’t require a lot of judgment. In auditing, the more important work requires lots of judgment and is pretty subjective, while the more voluminous, rote tasks are the opposite.
One of the examples that firms have told me about is that they are beginning to use AI to read contracts and pull out the key details. When I was an intern, that’s all I did for probably three weeks. I read these lease contracts for a company that had a bunch of retail locations. AI can do that in an hour.
I’ve been thinking a lot about not just AI, but all sorts of technological advances in auditing because the audit firms seem to want to embrace these things, but are very slow to actually pull the trigger. One silver lining of the COVID-19 pandemic might be that it gives firms the push where they have to do it and they can’t drag their feet anymore. Or, they might feel like their regulators will be more sympathetic when they make changes, because a big fear for firms is that they’ll do something in a new way and a regulator won’t like it. So, maybe this will ease that fear, or change that cost-benefit calculation.
Jirs Meuris, assistant professor in the Department of Management and Human Resources
AI has already changed HR. One area is analytics. I think it’s being integrated more in efforts to build better predictive models for many scenarios; just fill in the blank. Machine learning is the natural progression; employees will not be so much replaced as they will simply take on a different function. Low-skilled labor, where people manage the machine and the machine does a lot of the work, in a sense gets replaced by high-skilled labor.
Management by AI is another area of change. If you have employees and you have a computer collect all this data and use it to make employee decisions, or at the very least, give employees feedback, you’re basically managing by computer. The computer is saying, “You need to do this or you need to do that.”
The aspect everyone is worried about is AI taking jobs from people. In some cases this is happening, but in other cases, it’s going to take a lot longer, as with the trucking industry. It will be a long time before you see a truck without anyone sitting in it. Bear in mind, the driverless cars all have a driver and sometimes a passenger, and most of them can only go straight, maybe make some turns, and only without too much traffic on the road. I think that’s the level they are at right now. But even if the technology could handle completely driverless operation, could you imagine insurance for something like that? Even airline pilots have a second person to take over in case something goes wrong.
I would not necessarily be too worried about job takeovers happening, because AI has some legal liability questions that would need to be solved. Once you start having work completed by a computer, not a person, who is really liable? Say there is a coding mistake that leads to something very serious; how do you deal with that? There are so many questions around AI that I don’t know if we can even grasp yet the extent to which things will really change.