As humans, we understand the dangers of making a decision based on an incomplete set of information. A limited set of data points can quickly send us down the wrong (and very biased) path. And yet, most of the data that makes its way into algorithms is likely to be lacking context.

As a result, the algorithm (and of course its creators; let’s not blame the machine!) uses data without context, which is like driving blind, at night, at 90 mph, and hoping to find the right freeway exit. Data without context is only noise.

Let me give you two examples of why we should always strive to build context into our algorithms.
 

A great actor can fool an algorithm

Let’s take the following sentence:

I am leaving the house because my son and my brother had a huge fight. I don’t know whose fault it is. I’m really afraid that he is going to hurt him.

Algorithms are no different from humans: we take this sentence, process it through our biases and assumptions, and make a decision based on the limited information we have. An algorithm will likely go from text to analysis in one step and return the likelihood that someone is feeling a certain set of emotions based solely on the text collected.

In a sense, this approach is very similar to guessing. In all fairness, most of us would read this sentence and deduce that the person who said it is under serious duress and experiencing a high level of anxiety.
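To make that “text to analysis in one step” idea concrete, here is a minimal, purely illustrative sketch. The keyword list and the score_text_only function are hypothetical stand-ins for any text-only classifier; they are not a description of how our models actually work.

```python
# Purely illustrative: a toy text-only "emotion" scorer.
# The keyword cues and the scoring rule are hypothetical stand-ins
# for whatever text-only model an algorithm might use.

ANXIETY_CUES = {"afraid", "fight", "hurt", "don't know", "leaving"}

def score_text_only(text: str) -> float:
    """Return a crude 0-1 'anxiety' score based only on the words used."""
    lowered = text.lower()
    hits = sum(1 for cue in ANXIETY_CUES if cue in lowered)
    return min(1.0, hits / len(ANXIETY_CUES))

sentence = (
    "I am leaving the house because my son and my brother had a huge fight. "
    "I don't know whose fault it is. I'm really afraid that he is going to hurt him."
)
print(score_text_only(sentence))  # high score, no matter how the line is actually delivered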

But let’s contrast this analysis with another way of looking at the data. Meet one of our actors as she performs the same scene in two opposite ways using the same script: same content, but very contrasting emotions. We often work with actors to test our algorithms because talented actors can quickly cover a wide range of emotions in a very natural way.

 

Can you see how much the facial expressions, tone of voice, and overall attitude change our understanding of the original message? Based on this information alone, we would revise our perception of how the person is doing and, potentially, how we would help them cope.

This perception would be altered further by our past understanding of the person or any external circumstances we know of.

Look, we’ve all heard at some point that communication is 93% non-verbal. This is true. And if we want to reduce bias in our algorithm, we need to take these elements into account. The fewer elements we take into account, the more likely we are to have inaccurate assessments and evaluations.
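As a simple illustration of what “taking more elements into account” can look like, here is a hedged sketch of blending per-modality scores. The modality names, weights, and numbers are all hypothetical, not our actual model; the point is only that the same words can yield very different assessments once face and voice are considered.

```python
# Purely illustrative: combining several signals instead of text alone.
# The modality names, weights, and scores below are made up for illustration.

def fused_score(text: float, face: float, voice: float,
                weights: tuple[float, float, float] = (0.2, 0.4, 0.4)) -> float:
    """Weighted blend of per-modality 'distress' scores, each in [0, 1]."""
    w_text, w_face, w_voice = weights
    return w_text * text + w_face * face + w_voice * voice

# Same script, two deliveries (scores invented for the example):
calm_read    = fused_score(text=0.9, face=0.1, voice=0.2)   # ≈ 0.30
anxious_read = fused_score(text=0.9, face=0.9, voice=0.8)   # ≈ 0.86
print(calm_read, anxious_read)
```

With text alone, both readings would look identical; with the extra signals, they diverge sharply.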

 

What does this heart rate value mean?

Let me give you another example. Pretend for a moment that you are reviewing heart rates and see a reading of 171 coming from my account. What would your reaction be, given this single data point? If you simply go with the number to derive an answer, it would be akin to asking Dr. Google. In this case, here is what the search result for “Heart rate 171” returns:

Tachycardia is a heart rate higher than 100 beats per minute. A normal resting heart rate is 60 to 100 beats per minute. Ventricular tachycardia starts in the heart’s lower chambers. Most patients who have ventricular tachycardia have a heart rate that is 170 beats per minute or more.

One data point. One conclusion. 171 would earn me a trip to the ER with this approach. However, once some context is put around the data, the entire analysis changes. When my heart rate reaches 171, it’s because I am running intervals and trying to catch my breath. Obviously, knowing this context when reviewing my data would dramatically alter your decisions.
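Here is a minimal sketch of that idea in code. The activity labels and the 190 bpm threshold are invented for illustration; they are not medical guidance and not our actual logic.

```python
# Purely illustrative: the same heart-rate reading interpreted with and
# without context. Labels and thresholds are hypothetical.

def interpret_heart_rate(bpm: int, activity: str = "unknown") -> str:
    """Return a crude interpretation of a heart-rate reading given activity context."""
    if activity in ("interval training", "running", "exercise"):
        return "expected for vigorous exercise" if bpm <= 190 else "unusually high even for exercise"
    if bpm > 100:
        return "tachycardia range at rest; worth a closer look"
    return "within normal resting range"

print(interpret_heart_rate(171))                       # no context -> alarming
print(interpret_heart_rate(171, "interval training"))  # with context -> expected
```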

Understanding context and contrasting information is very important in many industries:

  • The health industry relies on mental health assessments to identify and assist people in need. Unfortunately, with current methods, up to 66% of such diagnoses are incorrect. Misdiagnoses carry huge financial costs. They also take a toll on patients’ health and, sometimes, their lives.
  • In high-risk occupations (first responders, military…), someone can be pulled out of the line of duty based on a mental health evaluation. Not only does this contribute to stigma, but it also has huge implications when someone is wrongly diagnosed. Lives can be altered by an unreliable questionnaire.

Humans deserve better

We believe we humans deserve a better option. A wrong assessment, based on insufficient contextual data, should not be considered “good enough”. This is why we are focusing on building an API smart enough to recognize not just a person’s basic data points but also the circumstances behind them.

To learn more about Okaya and how our AI is revolutionizing mental health assessments, schedule a demo today.