Hybrid intelligence: Working with AI instead of against it
Future-focused consultant and entrepreneur Scott Klososky admits he doesn’t know exactly what the future holds as it relates to artificial intelligence. But he is definitely more upbeat about the productivity possibilities AI offers than down about the jobs AI might eliminate.
He believes AI will cut out parts of jobs, not entire ones. Klososky will talk more on this topic in September at the AICPA & CIMA Conference on Banks and Savings Institutions.
About AI, Klososky says: “Instead of fearing it, we need to get better at embracing it and loving and appreciating what it can do.”
What you’ll learn from this episode:
- How the past 10 months have transformed the adoption of generative AI such as ChatGPT.
- What Klososky means by the phrase “consumerized AI” – and how what’s happening now compares to the launch of the iPhone.
- Why Klososky prefers “machine intelligence” over “artificial intelligence.”
- How technology could play a role in helping to close the U.S. labor gap.
- A new take on the 80-20 rule, applicable to AI.
- What Klososky is looking forward to from this year’s banking conference.
Play the episode below or read the edited transcript:
To comment on this episode or to suggest an idea for another episode, contact Neil Amato at Neil.Amato@aicpa-cima.com.
Transcript
Neil Amato: Scott Klososky is a technology entrepreneur, founder, and author. He’s the keynote speaker at the AICPA & CIMA Conference on Banks and Savings Institutions in September. And he’s joining me, the Journal of Accountancy’s Neil Amato, on the next episode of the JofA podcast. You’ll hear our conversation right after this word from our sponsor.
Amato: I said in the intro, I’m being joined by Scott Klososky. Scott, welcome to the JofA podcast.
Scott Klososky: Thank you, Neil. Very glad to be here.
Amato: Now again, I said that you’re speaking — you give a keynote address at the AICPA & CIMA conference on Banks and Savings Institutions. In the summary for that address it says, “AI has been creeping into banking software for the past 10 years.” What I’d like to know is forget the last 10 years, how much has it changed in that realm in just about the past 10 months?
Klososky: Obviously, it blew up in November 2022 when OpenAI came out with DALL-E 2 and ChatGPT-3. It's interesting to note that it was the third version. But basically it consumerized AI. So, you're right, AI has been out for a while, but this is like what we saw when the iPhone was announced and consumerized mobile devices.
When a technology becomes consumerized and millions of people use it, a couple of things happen. First, awareness goes up. We get lots of articles written about it, which means people want to buy it and use it more, and then venture capitalists also start investing lots more money in it. We all look back now and say we've had AI for a while, but it hit its iPhone moment in November. The iPhone moment analogy is just that we had BlackBerrys, we had Treos, we had pagers. When the iPhone was announced, with the App Store combination, mobility exploded. It's the same thing with AI, and we'll probably see the same dynamic again in the future.
Take something like quantum computing. We've heard about it for years and years; then it will be used a little bit by big organizations; and then there will be the day, Neil, when you can go out and rent some quantum computing and you'll say: Oh my gosh, it's 1,000 times more powerful than regular computing. Then we'll be having the same conversation about quantum.
Amato: I was talking to some potential podcast guests last week, and it was on an AI topic, just a preview call, and one of them said, AI has been around for almost 60 years. Is it really that old? I didn’t know that.
Klososky: This is a definitions problem. Even the term artificial intelligence, a lot of us don't like it, because the word artificial often doesn't fit with machine intelligence. When a machine does something that's intelligent, we named it artificial intelligence, as if it is somehow fake. We have a lot of vocabulary problems right now, and there's a big, soon-to-be-legal, issue around the term AI because lawyers and government entities are trying to analyze it and say, well, what is AI? When people say things like that to me, it's been around for 60 years, I roll my eyes.
Because the term was invented 60 years ago, but that doesn't mean that we had it. I think it was more of a concept back then. It would be the same thing, since we were just talking about quantum computing, as saying in 10 years, well, we've had quantum computing for 50 years. No, we haven't. It was in a scientific journal somewhere, and then some early prototypes were done. From a definition standpoint, I might argue we had software that you could build algorithms into. In other words, you could build a few rules. A calculator. I heard somebody the other day argue that a calculator was artificial intelligence, because it could do math.
You see the vocabulary problem. Where are we going to draw the line between what is an algorithm, what is just intelligent software, and what actually goes under the banner of those two letters, AI? I'm more comfortable saying we've had artificial intelligence for at least the last decade, where the algorithms were sophisticated enough to make decisions like humans would make. That's my version of we've had AI for a while, but no, not 60 years.
Amato: That leads nicely into the next question, which was about that wording: machine intelligence. I’d like for you to tell me what’s the difference between machine intelligence and human intelligence, and then how can both be applied in business today?
Klososky: A great question. I like the term machine intelligence much more than artificial intelligence. I think that’s a much truer way to say what we’re talking about is machines. In other words, non-humans. I do a lot of work in the field of, what is the difference between machine intelligence and human intelligence? What is the difference between the human mind and what a machine mind might be, or what is the difference between human consciousness and a machine consciousness? Obviously, we could talk for hours about that.
Just to answer your question concisely, what I think listeners might want a way to look at this is machine intelligence is a flavor of intelligence. Human intelligence is a flavor of intelligence. Think about it as strawberry and chocolate. They’re just different flavors. The intelligences would be the ice cream. Humans have been very arrogant about believing that human intelligence is massively better than any other possible form of intelligence.
We have to let go of that arrogance, and it is better to accept inside of organizations that you have people and people have human intelligence. You now have machines with AIs that have machine intelligence, and just not make one better than the other. They’re just both two different types of intelligence. They both have their positives and negatives, and the most important thing to understand in an organization is what we ultimately want is hybrid intelligence. We want the combination of the machine and the human.
So, you know, this is accountancy. I just spent my last two days in California doing a think-tank for a big accounting firm, and half of the think-tank was around how is AI going to change the accounting space and change that accounting firm? A lot of the discussion is, if you’re in a tax practice, how much of the work that we currently do today with the human mind is going to be able to be done with a machine? But what nobody is saying is that there will be no more human mind.
What we're all getting our hands around is: What does hybrid intelligence look like? What does it look like for Neil to have 10 or 15 AI assistants or tools that you use, so that what you do is no longer 100% coming out of your mind? It's 80% your mind and 20% the machine intelligence that you're using to do what you're doing. Maybe in the future, it's 50/50. That's a long answer to that question, because it's a deep question.
Amato: It is. It's one of the things we specialize in on this podcast. The other thing we do on our podcast is we run transcripts, and I will say, without knocking our vendor at all, I'm ready for that AI to be better. I'm ready for those transcripts to be absolutely perfect, every single word, without me still having to go through them. So, I want to work with the AI, not be replaced by it.
That leads into the next topic, which is that when people hear about AI or machine intelligence being used in business, I think one of the risks is how organizations talk about it with their people, because people hear AI and think, I'm going to get replaced. What does the change management look like, and how else can businesses walk that line?
Klososky: It is a big question. I have a handful of thoughts on it. First of all, I think about evolution. You get waves of technology that evolve industries and evolve the economy. This is a wave. It's a wave like computing power was back in the 1950s and '60s. It is a wave like the internet was back in the late 1990s. This is a massive wave because the ability to create machine intelligence that can augment and replace human intelligence applies to everything, everywhere. People say to me, Scott, but this wave is going to cost a lot of jobs. It's going to take away a lot of jobs.
My answer to that is first of all, it’s not going to take away whole jobs. In most cases, it’s going to take away pieces of people’s jobs, and the pieces it’s going to take away are often the boring, highly repetitive, difficult pieces of the job. In your case, for instance, you said transcripts.
This is the fun part, Neil, you and I talking. This is the fun part. The not-fun part is when we hang up and then you have to do all the ditch digging to actually get the podcast produced and transcribed and so you want an AI to take that piece of your job. Because if it did, you would be able to spend more time just doing this kind of stuff. Talking to people, do more podcasts or more thinking about it or asking the great questions, designing the questions. I try to tell people that A, you need to understand that AI is going to take over parts of your job to free you to do the things that are more humanly beneficial, the more humanly fulfilling.
That's good news. That's not bad news. You shouldn't fear that. That's one thing. The other thing I say to leaders all the time right now is to think generationally. The Baby Boom generation, the biggest generation we've had, is leaving the workforce, and Generation X, the smallest generation, is becoming the oldest in the workforce. You have the lowest birth rate in the US that we've had for years, and we've cut down on immigration. Add all that up, and there's no surprise to those of you who listen to economists that we have a 10 million-job gap, and we've never had a job gap like this one.
Now, all the economists I hear who are pretty savvy say the only thing that is going to solve this massive, and still growing, job gap is AI, because it will take over some of the work people do, which will allow us to keep that job gap down a little bit. There are many answers to your question. I'm just trying to give a sense of a few and sum it all up with this: We don't need to fear what AI is going to do. There will be way more benefits and opportunities than negatives. Instead of fearing it, we need to get better at embracing it and loving and appreciating what it can do.
Amato: You’re the founder of Future Point of View LLC. I love that name. What made you say 20 years ago, I want to focus on what the future holds?
Klososky: Wow, great question. I don’t think anybody has asked me that before. My wife and I built the firm and we named it, and I remember sitting around thinking a lot about it with her when we decided to name it. We had named companies in the past, and I would say we’ve named them all wrong. We just named them wrong. Either they didn’t really end up fitting, we had to rename, or the name grew stale or whatever. When we sat down to do this one, I think we really thought about, for years and years, what is it we want this firm to do? One of the key pieces of DNA that we wanted in this firm is that we were better than most at accurately looking into the future and advising clients, so that it would be a blessing to those clients.
For instance, in this talk that I’m going to be giving that you mentioned, it’s to banks, credit unions, to financial institutions. In that hour that I speak with them, I want that hour to give them more ideas and a clearer picture of the future so that they can make good investments, so that they can prosper and that was something that 20 years ago, I think my wife and I said we don’t ever want to stop doing that in our careers. Let’s be really accurate at helping people see where the future’s going so they can make good investments, so that it would be a blessing to their organization, and so that’s how the name came up. It’s why we still like the name to this day, and every year we talk about living to this DNA here at this firm.
Amato: You mentioned the recent business trip to California, the CPA firm, so whether it’s CPA firms or other businesses, what would you say are the most common questions you’re getting related to applying AI to business?
Klososky: One is just, what would a strategy framework look like? Clients will say they want to use AI better than their competitors. Hey, we want to be better at AI than these three or four of our larger competitors.
Once they put that stake in the ground, the question that we get a lot is how? How do we develop a strategy? What do we need to invest in? What kind of skills are we going to need? Honestly, sometimes it’s how are we going to change our pricing because we’ve always billed hourly. Sometimes it’s how do we need to change our staffing?
These are the common questions we get on the business side. All right, this thing is going to leave a dent. It's going to leave a dent, Neil. So help us understand what that dent looks like and how best to invest to ride this wave.
Amato: You mentioned the word dent. We’ve talked a lot about in general, high level, what some of the opportunities are around AI for businesses, but what are the risks that people are just now getting their brains around?
Klososky: Well, from a business standpoint, let's get rid of one risk, this whole idea that AI is going to make the human race extinct. Let's get rid of Terminator and The Matrix and I, Robot and Ex Machina. I don't believe a risk we need to be worrying about today is that it's going to wipe out or take over the human race. Let's get rid of that.
The way to think about AI risk is through your enterprise risk management (ERM) program. AI risk needs to become a new piece of that program, and within AI risk, there are multiple lanes. There are regulatory controls coming down right now, and so if you are using an AI that is proven to be discriminatory or biased, A, you could get sued, and B, you might run afoul of regulators. Banks and insurance companies especially are in this situation because regulations are already being passed in different states.
You have regulatory risk with AI. You have legal risk, again, of being sued because somebody sent in an application to get a job and your HR people didn't even review it. The reason they didn't review it was that your applicant tracking system has an AI, and it didn't push the [application] forward, and so now the applicant sues you and says, well, your AI is biased.
You have the risk of corrupted data going into an AI that is machine-learning and so it builds its own rules, but it builds the rules wrong. So you have that risk. You have the risk of people uploading intellectual property into AI engines and it becoming public and so that’s a risk today.
And then I would say probably the most interesting risk is the risk of skill loss. You use an AI to do a task that humans used to do, and then you get down the road a few years and you have zero humans who know how to do that task anymore, because the AI has done it. Neil, this is like you and calculators. If I said to you, you have to multiply four digits by four digits, do it on a piece of paper, there's a decent chance you can't do that, or I can't do it, and the reason is. …
Amato: I certainly haven't done it in a while. You're right.
Klososky: I really think I can't do it. I do not remember anymore how to do that. Well, that's a skill loss. It's OK, I suppose, with a calculator. But think about multiple AIs throughout an organization making decisions, and then you get down the road five, 10 years, and there's nobody at your firm or your company who knows the rules the AI is using or what the rules should be. So there's a risk of skill loss. Anyway, AI risk is not one thing. AI risk is a set of different categories that organizations need to manage, and they need to get it into their ERM program.
Amato: Those are really good examples. In the show notes for this episode, again, we’ll share the link to the banking conference. Our listeners can find the agenda and other information. Scott, I know you’ve spoken to this group before, so this year, what do you look forward to at the banking conference?
Klososky: Well, last year, we didn't have the explosion of AI. I talked a little bit about machine intelligence, but what I'm looking forward to, or excited about, this year is being able to launch from the things I talked about last year, Humalogy, high beam, some of those concepts, and now paint a little clearer picture of this: In a world that is becoming more powered by AI, data, and automation, with customer behavior changing, what is that going to look like for financial institutions?
Amato: Scott, this has been fun. I’ve learned a lot. Really appreciate you being on the podcast.
Klososky: Hey, I loved the questions. I would answer questions like that any day, Neil.
Amato: Well, thank you again.