digest | AI tool detects Alzheimer’s disease with 95% accuracy

Software finds tiny differences in the way people use language.
September 1, 2020


image | above

Researchers have successfully trained an AI software program to detect the presence of Alzheimer’s disease in people — by looking at the way they talk. The tool relies on the fact that patients with Alzheimer’s tend to use the English language differently from healthy people.


— contents —

~ story
~ by definition
~ pages
~ reading


— story —

Researchers from Stevens Institute of Technology designed an artificial intelligence software tool that can diagnose Alzheimer’s with 95% accuracy, reducing the need for expensive diagnostic scans or in-person testing.

The software program is also able to document + explain its conclusions, so human experts can check the accuracy of its diagnosis.

The tell-tale signs

Some tell-tale language signs the AI software can detect:

  • Alzheimer’s disease can affect a person’s use of language
  • people with Alzheimer’s disease tend to replace nouns with pronouns
  • for example — saying “he sat on it”
  • … instead of — “the boy sat on the chair”
  • they tend to speak with longer, awkward phrasing
  • for example — saying “my stomach feels bad because I haven’t eaten”
  • … instead of — “I’m hungry”
  • they often have trouble expressing themselves
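The pronoun-substitution marker above can be turned into a simple numeric feature. The following is a hypothetical illustration, not the Stevens team’s actual code: it scores a speech sample by the fraction of its words that are pronouns, using a small fixed word list (real systems would use proper part-of-speech tagging).

```python
# Hypothetical hand-crafted feature: pronoun-heavy speech scores higher.
# The pronoun list and punctuation handling are simplified for illustration.

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them", "his", "hers",
            "its", "their", "this", "that", "these", "those"}

def pronoun_ratio(text: str) -> float:
    """Fraction of words that are pronouns (0.0 for empty text)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    return sum(w in PRONOUNS for w in words) / len(words)

# "he sat on it" replaces both nouns with pronouns: 2 of 4 words
print(pronoun_ratio("he sat on it"))              # 0.5
print(pronoun_ratio("the boy sat on the chair"))  # 0.0
```

A classifier would consume many such features at once; this one feature alone is far too weak to diagnose anything.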

The project was developed by K.P. Subbalakshmi PhD. She’s the founding director of the Stevens Institute of Artificial Intelligence — and a professor of electrical + computer engineering. She said:

This is a real breakthrough. We’re opening an exciting new field of research, and making it easier to explain to patients why the AI algorithm came to the conclusion that it did — while diagnosing patients. This is absolutely state-of-the-art. Our AI software is the most accurate diagnostic tool currently available. This increases our ability to trust an AI system with important medical diagnoses.

— K.P. Subbalakshmi PhD

Alzheimer’s disease can affect a person’s use of language. By using AI software that learns over time — called a “convolutional neural network” — Subbalakshmi and her students developed a tool that accurately identifies well-known, tell-tale signs of Alzheimer’s — by detecting subtle language patterns that could easily be overlooked.

Tracking human language

Subbalakshmi and her team trained their algorithm using text produced by both healthy subjects and known Alzheimer’s patients — as they described a drawing of children stealing cookies from a jar. Using tools developed by Google, Subbalakshmi and her team converted each sentence into a unique number sequence — called a vector — representing a specific point in a 512-dimensional space.

With this approach, complex sentences can be assigned a concrete number value. This makes it easier to analyze structural + thematic relationships between sentences. By using those vectors along with hand-crafted features identified by subject matter experts — the AI software system gradually learned to spot similarities + differences between sentences spoken by healthy or unhealthy subjects. It can determine — with remarkable accuracy — the probability that a sample of speech belongs to an Alzheimer’s patient.
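Two sentence vectors that point in similar directions represent similar speech, so comparing points in that 512-dimensional space reduces to simple geometry. As a toy illustration (not the team’s actual pipeline, and using 3-dimensional stand-ins for the 512-dimensional vectors), cosine similarity is one standard way to compare sentence vectors:

```python
import math

def cosine_similarity(u, v):
    """Direction-based similarity between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional stand-ins for the real 512-dimensional embeddings:
healthy_ref = [0.9, 0.1, 0.2]   # hypothetical "healthy speech" vector
sample_a    = [0.8, 0.2, 0.3]   # points in a similar direction
sample_b    = [0.1, 0.9, 0.4]   # points elsewhere

print(cosine_similarity(healthy_ref, sample_a))  # close to 1.0
print(cosine_similarity(healthy_ref, sample_b))  # much lower
```

The actual system learns a classifier over these vectors plus the hand-crafted features, rather than comparing against a single reference vector as this sketch does.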

The software can also easily incorporate new Alzheimer’s detection criteria identified by other research teams in the future, so it will become more accurate over time.

The algorithm itself is incredibly powerful; we’re only constrained by the data available to us. We designed our system to be both modular and transparent. If other researchers identify new markers of Alzheimer’s, we can simply plug those into our architecture to generate even better results.

This method can be used to detect other medical conditions. When we get more + better data, we’ll be able to create streamlined, accurate AI software tools to diagnose many other illnesses too.

— K.P. Subbalakshmi PhD

Robust diagnostic ability in the future

The next step is to train the AI software on a much bigger volume of sample text. In the near future, AI software could diagnose Alzheimer’s based on any sample of text — from a personal e-mail, to a social-media post. To accomplish that goal, an algorithm needs to be trained on a large volume of sample texts — of different types — spoken or written by diagnosed Alzheimer’s patients. With a larger sample set containing the tell-tale language markers of Alzheimer’s disease, the software can become more familiar with what to look for.

Subbalakshmi is programming her software to diagnose patients using other languages. Her team is also exploring ways that other neurological medical conditions — like aphasia, stroke, traumatic brain injury, and depression — can affect a patient’s use of language.



by definition | what is explainable artificial intelligence?

the Enterprisers Project | home
the Enterprisers Project | What is explainable AI?

— excerpt —

Explainable AI (artificial intelligence) means humans can understand the path a software system took to make a decision. Let’s break down this concept in plain English, and explore why it matters.

AI software — using computational techniques like machine learning / deep learning — takes inputs and then produces outputs (or makes decisions) with no decipherable explanation or context. The system makes a decision or takes some action, and we don’t necessarily know why or how it decided. The system just does it, based on instructions the original programmer coded into the software program — instructions that are invisible to the user.

That’s called the “black box” model of AI, and it’s mysterious. In some cases, that’s just fine. In other contexts, it’s plenty ominous. For small programs like AI chatbots or sentiment analysis of social feeds, it doesn’t really matter if the AI system operates in a black box. But for software programs with a big human impact — autonomous vehicles, aerial navigation + drones, military applications — being able to understand the AI software’s decision-making process is mission-critical.

Enter “explainable AI” — sometimes known as “interpretable AI” or by the acronym XAI. As the name suggests, it’s AI that can be explained and understood by humans.
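One common way to make a prediction legible in this sense is to use a model whose score decomposes into one contribution per input feature. The sketch below is a minimal, hypothetical example (invented feature names and weights, not any real diagnostic model): a linear scorer that reports exactly how much each feature pushed the result.

```python
# Hypothetical explainable scorer: the prediction is a sum of per-feature
# contributions, so each one can be reported back to a human reviewer.

WEIGHTS = {"pronoun_ratio": 2.0, "avg_phrase_length": 0.5}  # made-up weights
BIAS = -1.0

def predict_with_explanation(features):
    """Return (score, per-feature contributions) for a dict of feature values."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"pronoun_ratio": 0.5, "avg_phrase_length": 4.0})
print(score)  # 2.0
for name, c in why.items():
    print(f"{name} contributed {c:+.2f} to the score")
```

Deep models like the convolutional network described above aren’t naturally decomposable this way, which is why XAI techniques that approximate such breakdowns are an active research area.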

read | full definition


on the web | pages

Stevens Institute of Technology | home
Stevens Institute of Technology | YouTube

tag line: The innovation university.


on the web | pages

K.P. Subbalakshmi PhD | home


reading


group: by Stevens Institute of Technology
tag line: The innovation university.

story title: AI tool promises faster, more accurate Alzheimer’s diagnosis
read | story

— summary —

Stevens Institute of Technology team uses explainable AI to address trustability of AI systems in the medical field.


group: by Xtelligent Healthcare Media
tag line: Using data to deliver relevant content to our readers.

publication: Health IT Analytics
story title: AI tool diagnoses Alzheimer’s with 95% accuracy
read | story

— summary —

The algorithm can detect subtle differences in the way people with Alzheimer’s disease use language.


— notes —

AI = artificial intelligence
XAI = explainable artificial intelligence
IT = information technology