Healthcare Reimagined with Canary Speech and AI

We recently welcomed Henry O’Connell, CEO and co-founder of Canary Speech, to our Between the Bytes podcast. He and Jeff Adams started Canary Speech, a speech biomarker company, 6 years ago. Their innovative technology uses speech to help identify diseases. The concept has been around for about 20 years, and some prominent universities have worked on it – including MIT, Carnegie Mellon, Cambridge University, Arizona State, and Florida State, to name just a few.

“When we looked at the [existing] work and the lack of practical solutions in health care, Jeff and I thought, ‘there’s an opportunity to do some good here,’” O’Connell explains.

The duo was well suited to the task: Adams had already created and commercialized natural language processing tools that have shaped our culture, and he led the team that built the Amazon Echo.

Adams and O’Connell realized that while medical professionals were using language processing to place medical data in the correct fields on the correct forms, there were no practical solutions for taking the technology a step further.

“Jeff and I looked at the space and looked at the development of technology and said, ‘if this could be done, the implications to health care could be profound. The improvement in quality of life and quality of care could be significant,’ so we decided we ought to do it,” he says.

How Does Canary Speech Use AI?

While artificial intelligence can mean different things, O’Connell characterizes it as the analysis of large data systems through machine learning, at or near real time, to enable decision-making.

This is particularly useful when a physician follows a patient’s progress over time. He uses the example of a surgeon talking with a cardiac patient post-discharge: “I spoke with them last week. I’m with them again this week. I want to know, are they progressing positively toward recovery? While talking to them, I’m processing their vocal elements, the biomarkers in their speech, the sounds of their lungs, and the patterns of how they’re talking to me. If I find those sounds indicate a pulmonary tract infection, I would want an immediate alert because such an infection is detrimental to recovery and can trigger a secondary heart problem.”

AI is intensely focused on that individual patient’s data.

“We’re processing 12 and a half million data points a minute. We’re doing that in milliseconds, and we’re giving a clinical person information that can positively drive the outcome,” he notes.

Machine Learning as It Relates to AI

How do machine learning and AI relate in this case? The two work in concert. O’Connell likens machine learning to human neurons: we capture a piece of information, our neurons process it, and that processing becomes a thought. In some ways, that’s a data-driven analysis happening in our very human brains. If you think of artificial intelligence as the outcome, he explains, it is backed by the brain of machine learning, which processes data and information much like your mind does. It’s not yet as complex, although science continues to push that boundary forward.

AI backed by machine learning excels in the cardiac patient example used earlier because the medical professional in question may speak with patient A on Monday and then talk with 250 other patients before seeing patient A again several days later. Their mind is cluttered with those other cases, so they might not catch every nuance in patient A’s pulmonary sounds. The machine learning system isn’t cluttered. It’s solely focused on comparing patient A’s voice over time and computing a delta on the specific elements that relate to pulmonary sounds. That data augments the observation the clinical person is making and gives them actionable, objective information.
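
To make that “delta” idea concrete, here is a minimal sketch, assuming a patient’s speech has already been reduced to numeric biomarker features per visit. The feature names, values, and threshold are hypothetical illustrations, not Canary Speech’s actual features or model.

```python
from typing import Dict

def biomarker_delta(previous: Dict[str, float], current: Dict[str, float]) -> Dict[str, float]:
    """Return the per-feature change between two visits, for features present in both."""
    return {name: current[name] - previous[name]
            for name in previous.keys() & current.keys()}

def flag_changes(delta: Dict[str, float], threshold: float = 0.15) -> Dict[str, float]:
    """Keep only the features whose change exceeds a (hypothetical) alert threshold."""
    return {name: change for name, change in delta.items() if abs(change) > threshold}

# Hypothetical pulmonary-related features extracted from two conversations
last_week = {"breathiness": 0.42, "pause_rate": 0.30, "jitter": 0.12}
this_week = {"breathiness": 0.61, "pause_rate": 0.33, "jitter": 0.13}

alerts = flag_changes(biomarker_delta(last_week, this_week))
print(alerts)  # {'breathiness': 0.19...} -- this change would be surfaced to the clinician
```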

Will Calculators Take Over the World?

While these applications have the potential to do so much good, there is also some fear surrounding AI. There was a recent story about a Google engineer who claimed the chatbot he was working on was alive and conscious. From that news story to classic sci-fi tales, the question we ponder is: if AI is really like our brains, could it become conscious and start thinking on its own?

O’Connell says, “No, it’s just an advanced input/output system. When I was in college, handheld calculators didn’t exist for the first two years, and we were still using slide rules. When they came into play, we did not think of calculators as artificial intelligence, although you could plug things into them and get numbers out faster. They were not intelligent beings or some type of sentient creature. They are tools that we use to augment the information we have for our decision making.”

While today’s deep neural nets and machine learning capabilities dwarf the calculators of yesteryear, they function fundamentally in the same way – they’re processing information.

“They have a range of different algorithmic determinants that branch down and say, this one’s more likely, then this one’s more likely, all the way down your branch. In language, it identifies what you’ve asked it to, but you’ve given it basic information to say: I just said the word apple, or I said the word apple in a more emotional way than I would have otherwise. So maybe I’m talking about the company and not the fruit. Or maybe, in fact, I’m talking about the fruit because I’m an apple lover. So, we can specifically train it to recognize changes in emotion.”

While the nuances are getting more sophisticated, AI is still data processing at its core, making it a really fancy calculator.
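
As a toy illustration of that branching, the sketch below walks through a few hand-written checks for the “apple” example. The feature names, rules, and threshold are invented for illustration; a real system would learn this behavior from data rather than rely on hard-coded branches.

```python
def interpret_apple(emotion_score: float, mentions_stock: bool, mentions_food: bool) -> str:
    """Guess whether the word 'apple' refers to the company or the fruit."""
    if mentions_stock:
        return "company"   # explicit financial context wins
    if mentions_food:
        return "fruit"     # explicit food context wins
    # No explicit context: fall back on how emphatically the word was spoken
    return "company" if emotion_score > 0.7 else "fruit"

print(interpret_apple(emotion_score=0.9, mentions_stock=False, mentions_food=False))  # company
print(interpret_apple(emotion_score=0.2, mentions_stock=False, mentions_food=True))   # fruit
```

Each check picks the “more likely” branch and hands the rest of the decision down the tree, which is exactly the sense in which the system is still data processing rather than thought.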

Enhancing Human Skills

One of the best use cases for AI is as a supplement to what humans are already doing. Some people are worried about AI replacing their jobs – like copywriters agonizing over ChatGPT. In reality, AI makes repetitive tasks easier.

O’Connell goes back to another example from his college years.

“I had to take a computer course in which you still had punch cards. You had to type something into a terminal, and it would create a punch card. One of my favorite poets was Robert Frost. I created a database of Robert Frost poems, which you had to type in…[it would] then randomly search that for patterns or words, adjectives, pronouns, verbs, whatever, and then create a random new poem from Robert Frost. I can assure you that the new poem created by the computer based on Robert Frost’s patterns was crap. It had the same meter. It had the same adjectives. It used the same database of language that he used, but it wasn’t him.”

O’Connell notes that he input about half a dozen poems for that initial experiment. If he had added 50 instead, the database from which the system could draw would have been more diverse, but the results would have been similar.
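
A rough approximation of that punch-card experiment, assuming a small database of Frost lines and simple random sampling (the original program and its rules are unknown, so this is purely illustrative):

```python
import random

# A tiny "database" of well-known Frost lines standing in for the typed-in poems
poem_database = [
    "whose woods these are I think I know",
    "the woods are lovely dark and deep",
    "two roads diverged in a yellow wood",
]

# Flatten the source poems into a pool of words to draw from
word_pool = [word for line in poem_database for word in line.split()]

def random_line(n_words: int = 6) -> str:
    """Build a 'new' line purely by sampling the existing vocabulary."""
    return " ".join(random.choice(word_pool) for _ in range(n_words))

for _ in range(3):
    print(random_line())
# The output reuses Frost's words and rough cadence, but it can only
# rearrange the existing database -- variation, not creativity.
```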

“Creativity is not diversity with respect to the subject matter we’re discussing. Creativity comes from our mind’s ability and the capacity to do something that hasn’t been done before. Creating diversity from an existing database creates a variation of what has been done before. Creativity creates something that has never been done before. Will artificial intelligence get to that point? And the possibility is, of course, yes. There’s a vast difference between the diversity from the existing database to the creation of something that has never existed before. We, as human beings, do the latter, and AI today is still doing the former.”

Leadership Lessons from AI and Garden Shovels

With all the changes O’Connell has seen, we asked about his thoughts on effective leadership in the technology sector. Just as machine learning in AI takes many data points and synthesizes them into a final decision, in leadership you need to take multiple inputs and effectively synthesize them into an action item or decision.

“I’m a biochemist; I’m not an expert in this field. I don’t have a Ph.D. in machine learning. Leaders, and particularly leaders in this field, whether they have that degree or not and that expertise or not, have got to be listening to the people who are doing it,” he notes. “We work with complex algorithms right now. That needs to be a component of an algorithm in the decision-making process for the company. The input of the people in the company that happen to be doing it in the trenches has to be the biggest component of the decision. And as a leader, you need to orchestrate the capacity of those elements to come together, as we do with algorithms, for how you drive decision-making in the company.”

“We have a range of patents in our company, and our patent lawyers and scientists contribute significantly to that. Our lawyers have Ph.D.s in machine learning as well as experience in the field. So, I come in as an individual with potentially the capacity to think out of the box but not with the expertise that they have. So, decision-making in that space is driven by a range of experiences and the reality that they have experience on a day-to-day basis that I simply don’t have. So good leadership in companies using AI has got to be driven by that kind of collaborative decision making based on the experience and expertise … of the people in the trenches.”

He cites the example of the gigantic heads on Easter Island. For decades, scientists have speculated about how these sculptures could have been moved from 100 miles away and anchored in the ground. The academic speculation on the subject was turned on its proverbial head a few years ago thanks to a small garden shovel. Someone was digging and discovered that the heads had torsos that were still underground. All of those papers by brilliant academics were based on information that could have been debunked by a gardener with a cheap plastic shovel.

“In leadership, it’s important that we’re willing to listen to the voices around us, and those become an actual part of the decision process,” O’Connell notes.

These were selected excerpts from an interview that ranged from cybersecurity issues to the proper way to order a breakfast sandwich at McDonald’s, plus more on what archaeology can teach us about leadership. Listen to the podcast for all the best bytes!
