The Giant Otter Conversation Authoring Platform
All the tools you need to go from data to bot
After you import a small set of example conversations, our platform guides you through the process of extracting intents, entities, and dialogs for a bot directly from your data.
The Discovery process is driven by human-in-the-loop machine learning. Built-in algorithms analyze your data and generate hundreds of micro-tasks that ask you to explain what specific phrases and exchanges mean in the context of your conversation. As you complete these tasks, you produce the semantic information the system needs to construct a model of your conversation.
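To make the idea concrete, here is a minimal sketch of what one such micro-task might look like as data. This is purely illustrative -- the names (`MicroTask`, `complete`) and fields are our own invention for this example, not the platform's actual API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MicroTask:
    """One human-in-the-loop question about a phrase in its context."""
    phrase: str            # the span the system is unsure about
    context: str           # the surrounding exchange it appeared in
    candidates: List[str]  # machine-proposed intent labels
    answer: Optional[str] = None

def complete(task: MicroTask, choice: str) -> MicroTask:
    """Record the annotator's choice; the answer becomes training signal."""
    if choice not in task.candidates:
        raise ValueError(f"{choice!r} is not one of the proposed labels")
    task.answer = choice
    return task

# Example: labeling one exchange from an imported support transcript
task = MicroTask(
    phrase="can I get my money back",
    context="User: hi, can I get my money back for last month's charge?",
    candidates=["request_refund", "billing_question"],
)
complete(task, "request_refund")
print(task.answer)  # request_refund
```

Completing many such tasks is what lets the system connect raw phrases to the intents, entities, and dialogs in your model.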
You can complete the Discovery tasks yourself, or contact us about our enterprise services to accelerate model development.
The output of the Discovery phase is a branching interactive model, learned bottom-up from your conversation examples. The Shaping phase lets you edit that model to match the scope and voice of the bot you want to deploy to your customers.
Our platform makes that process easy with intuitive visualizations and efficient editing tools.
Your model is presented in an integrated way so that all of the building blocks (intents, dialogs, slots, etc., and the examples of each) are connected. You can explore the pieces individually, but also see how each ties to the overall map of the conversation.
Toggle individual intents or dialogs on or off, or filter them by frequency. For the bot copy, select the best wording from the agent responses extracted from your examples -- or write your own -- then apply that choice across all flows in one click.
When you've finished Shaping, export conversations to your bot platform of choice. We support the major players: Amazon Lex, IBM Watson Conversation, Microsoft LUIS, Google Dialogflow, and Facebook's Wit.ai.
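As a rough illustration of what an export contains, the sketch below builds a simplified intent record with training phrases and a response. The field names are a generic simplification for this example only, not the exact schema of any of the platforms listed above.

```python
import json

def export_intent(name, training_phrases, response):
    """Bundle one shaped intent into a generic, platform-neutral record."""
    return {
        "name": name,
        "trainingPhrases": [{"text": p} for p in training_phrases],
        "responses": [response],
    }

# Example: one intent shaped from imported support conversations
intent = export_intent(
    "request_refund",
    ["can I get my money back", "I'd like a refund"],
    "I can help with that. Which order would you like refunded?",
)
print(json.dumps(intent, indent=2))
```

Each target platform has its own import format, so a real export maps records like this onto the platform-specific schema.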
Or, deploy with Giant Otter's own bot API, which is integrated with a variety of messaging platforms and optimized for managing extended conversations.
Testing and ongoing improvement are critical to any system, but they are especially challenging for chatbots. Simulating realistic, unbiased user behavior for tests is difficult. And because machines lack context and common sense, you need humans to evaluate the test output. Not only that -- you also have to interpret the results and make productive changes to your models.
Our platform includes specialized tools for collecting, scoring, and learning from bot sessions -- whether they are pre-launch or live.
In addition, Giant Otter offers premium enterprise services for initial testing and for human-in-the-loop session scoring and ongoing improvement.