
Content relevance

A short guide to the content relevance features in the speech assessment API


Typically in human-to-human interaction, the assessor is able to determine if the answers given by the student are relevant. Our content relevance features allow you to replicate this in a fully automated human-to-machine interaction.

Scripted content relevance

In a scripted context, you know in advance what you expect the user to speak; for example, you might be asking them to repeat a word, sentence, or paragraph of known text. In this case, the scripted content relevance score should answer the following question:

How close to the expected text was the user's actual speech?

Our API will return a score from 0 to 100 indicating how close the predicted text was to the expected text, where 0 means completely irrelevant and 100 means exactly what was expected.

You can use this score in your application to determine if you want to:

  • Penalize the user for low relevance.
  • Give the user feedback about their low relevance.
  • Ask them to try again with a proper attempt.
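As a rough sketch of what that could look like, the snippet below branches on the 0-100 score. The response field name (relevance_score) and the threshold of 60 are illustrative assumptions, not names or values prescribed by the API.

  # Illustrative only: "relevance_score" and the threshold are assumptions,
  # not field names or cut-offs defined by the API.
  RELEVANCE_THRESHOLD = 60

  def handle_scripted_relevance(response: dict) -> None:
      score = response["relevance_score"]  # hypothetical field holding the 0-100 score
      if score < RELEVANCE_THRESHOLD:
          # Low relevance: penalize, give feedback, or ask for another attempt.
          print(f"Relevance was only {score}/100. Please read the prompt text and try again.")
      else:
          print(f"Good attempt: relevance {score}/100.")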

Unscripted content relevance

In an unscripted context, we do not know in advance what the user will say, as the question is open-ended. But we still want to check that the user's answer was relevant to the question asked or to the context of the task they were given. In this case, the unscripted content relevance should answer the following question:

How relevant to the question or task context was the user's answer?

Our API will return a classification for the relevance, assigning one of the possible values: NOT_RELEVANT, PARTIALLY_RELEVANT, or RELEVANT. We also return some user-friendly text in content_relevance_feedback which explains why that relevance classification was given.

This information is contained in the unscripted response metadata section:

 "metadata": {
"predicted_text": "I like apples.",
"content_relevance": "RELEVANT", // New, indicates if the content is relevant. Possible values are: NOT RELEVANT|PARTIALLY_RELEVANT|RELEVANT
"content_relevance_feedback": "The answer directly responds to the question.", //New, short feedback text describing why the content is or isn't relevant
}

You can use this information in your application to determine if you want to:

  • Penalize the user for low relevance.
  • Give the user feedback about their low relevance.
  • Ask them to try again with a proper attempt.
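As a minimal sketch, the handler below reads the two metadata fields shown above and decides how to respond. The field names follow the metadata example; the penalty and retry logic is an illustrative assumption.

  def handle_unscripted_relevance(response: dict) -> None:
      metadata = response["metadata"]
      relevance = metadata["content_relevance"]          # e.g. "RELEVANT"
      feedback = metadata["content_relevance_feedback"]  # explanation returned by the API

      if relevance == "RELEVANT":
          print("Answer accepted.")
      else:
          # NOT_RELEVANT (or PARTIALLY_RELEVANT): surface the API's explanation
          # and invite another attempt.
          print(f"Your answer was off-topic: {feedback}")
          print("Please try again, making sure you answer the question asked.")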

Providing context in your requests

For unscripted content relevance, you will need to add additional information in your API request to specify the expected context.

In your request body, you can pass in the context object. Passing this new object with at least one of the question or context_description fields will turn the content relevance feature on. Without one of these fields, you will not receive a content relevance report in your API response.

{
  ...Rest of request body
  "context": { // New parameter
    "question": "What fruit do you like?",
    "context_description": "The user should talk about fruit"
  }
}

Both question and context_description are optional; you can pass in only one of them or both. In general, we recommend:

  • Providing question if you simply want to make sure that the answer is relevant to the question.
  • Providing context_description if you don't have a specific question (e.g. a describe-the-picture exercise), or if you want to provide more context than the question alone. In general, though, we find that the question alone is sufficient.
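To tie this together, here is a minimal request sketch that adds the context object to an unscripted request. The endpoint URL, authentication header, and the other body fields are placeholders and assumptions; only the context object follows the shape documented above.

  import requests

  request_body = {
      # ...rest of your usual unscripted request body...
      "context": {
          "question": "What fruit do you like?",
          "context_description": "The user should talk about fruit",
      },
  }

  response = requests.post(
      "https://api.example.com/speech-assessment/unscripted",  # placeholder endpoint
      json=request_body,
      headers={"Authorization": "Bearer <your-api-key>"},      # placeholder auth
  )

  metadata = response.json().get("metadata", {})
  print(metadata.get("content_relevance"), metadata.get("content_relevance_feedback"))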