Bot Testing & Optimization

Run tests & score session results with real humans

 

Testing and ongoing improvement are major hurdles for any chatbot program.

Simulating realistic, unbiased user behavior for tests is difficult. And because machines lack context and common sense, you need humans to evaluate the tests. On top of that, you have to interpret the results and make productive changes to your models. 

Giant Otter offers managed solutions that address any or all of these challenges, from designing tests and leveraging our community to generate results before you launch a new conversation, to scoring transcripts on an ongoing basis. We can reliably find and fill gaps in your bot to improve the user experience.

And if you add Conversation Modeling services, we'll extract intents, entities, and dialogs from any new transcripts and seamlessly fold them into your model to shore it up.
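For illustration only, here is a minimal sketch of the kinds of elements such a modeling pass might pull out of a transcript. The class and field names are hypothetical placeholders, not Giant Otter's actual schema.

```python
# Hypothetical sketch: the sorts of elements Conversation Modeling might
# extract from a new transcript before folding them into your model.
# All names and fields here are illustrative, not an actual schema.
from dataclasses import dataclass, field


@dataclass
class Entity:
    name: str                                   # e.g. "order_id"
    examples: list[str] = field(default_factory=list)


@dataclass
class Intent:
    name: str                                   # e.g. "check_order_status"
    utterances: list[str] = field(default_factory=list)
    entities: list[Entity] = field(default_factory=list)


@dataclass
class DialogTurn:
    speaker: str                                # "user" or "bot"
    text: str
    intent: str | None = None                   # intent label for a user turn


# One extracted example, drawn from an imagined support transcript.
check_status = Intent(
    name="check_order_status",
    utterances=["Where is my order?", "Has my package shipped yet?"],
    entities=[Entity(name="order_id", examples=["#48213"])],
)
```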


How it works

 

Set Up Project

Link to a test version of your bot, or upload existing transcripts directly for scoring. Connect to our secure API for ongoing scoring and optimization.

Describe the task, set the specs, and identify any special project requirements. Then, launch it!
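As a rough illustration of the transcript-upload path, the sketch below posts a batch of transcripts to a scoring endpoint over HTTPS. The base URL, paths, payload fields, and auth scheme are placeholders, not Giant Otter's documented API.

```python
# Hypothetical sketch of uploading existing transcripts for scoring.
# The endpoint, payload fields, and auth header are placeholders.
import requests

API_BASE = "https://api.example.com/v1"   # placeholder base URL
API_KEY = "YOUR_API_KEY"                  # issued when you set up the project

transcripts = [
    {
        "id": "session-001",
        "turns": [
            {"speaker": "user", "text": "I need to reset my password."},
            {"speaker": "bot", "text": "Sure, I can help with that."},
        ],
    },
]

response = requests.post(
    f"{API_BASE}/projects/my-project/transcripts",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"transcripts": transcripts},
    timeout=30,
)
response.raise_for_status()
print("Submitted for scoring:", response.json())
```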


Generate Tasks

Giant Otter produces task instructions and sources workers to match the project requirements. Tasks are assigned to workers to meet quality and speed objectives. Our algorithms check the results to ensure you're getting sufficient variety in your tests, and you get an alert when your results are ready.
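If you consume results over the API rather than waiting for the alert, a simple polling loop like the hypothetical one below could work. Again, the endpoints and response fields are assumptions for illustration, not a published interface.

```python
# Hypothetical polling loop: check whether scoring results are ready and
# download them when they are. Paths and response fields are assumptions.
import time

import requests

API_BASE = "https://api.example.com/v1"   # placeholder base URL
API_KEY = "YOUR_API_KEY"


def wait_for_results(project_id: str, poll_seconds: int = 60) -> dict:
    """Poll the project's status until results are marked ready, then fetch them."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    while True:
        status = requests.get(
            f"{API_BASE}/projects/{project_id}/status",
            headers=headers,
            timeout=30,
        ).json()
        if status.get("results_ready"):
            return requests.get(
                f"{API_BASE}/projects/{project_id}/results",
                headers=headers,
                timeout=30,
            ).json()
        time.sleep(poll_seconds)
```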

Explore Results

Review your scoring results in the Logs tab of the Conversation Authoring Platform. If you elected to add the scored transcripts to your model, new elements (intents, entities, dialogs, etc.) and examples will be highlighted in the Shape section.