Most products use only Machine Learning (ML) for natural language processing. The weakness of relying solely on ML to train bots is that it requires a lot of data.
With ML you must provide a collection of sentences that match a chatbot’s intended goal (and, ideally, a collection of sentences that do not). In this approach the bot does not inherently understand an input sentence. Instead, it measures how similar the input is to what it already knows.
Example: You set an intended goal for a task, let’s say “create a lead.” You then give the bot a training sentence of “create a lead.” It’s a one-to-one match of goal-to-input, which is a fantastic result in theory. But if that’s all the training input you give the bot, that’s all it will know. Inputs like “make a lead” would fail, and the likelihood of a user phrasing an input exactly as the bot was trained is low.
Only after you present the bot with a range of additional phrasings, along with some examples known to be incorrect, will traditional ML adjust its detection. For every synonym of your task verb, you’d have to give it a sample sentence using that synonym. As you can imagine, the process is tedious, and the resulting user experience is likely to be confusing and cumbersome.
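To make the limitation concrete, here is a minimal sketch of similarity-based intent matching of the kind described above. Everything here is illustrative: the `TRAINING` data, the bag-of-words cosine similarity, and the `0.75` threshold are assumptions, not a description of any particular product’s implementation.

```python
from collections import Counter
import math

# Hypothetical training data: one intent, one example sentence.
TRAINING = {"create_lead": ["create a lead"]}

def vectorize(text):
    # Bag-of-words term counts for a sentence.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def match_intent(utterance, threshold=0.75):
    """Return the best-matching intent, or None if nothing is similar enough."""
    vec = vectorize(utterance)
    best_intent, best_score = None, 0.0
    for intent, examples in TRAINING.items():
        for example in examples:
            score = cosine(vec, vectorize(example))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

# "create a lead" matches exactly; "make a lead" falls below the
# threshold until a training sentence using "make" is added.
print(match_intent("create a lead"))  # create_lead
print(match_intent("make a lead"))    # None
```

With only one training sentence, “make a lead” scores below the threshold and fails, exactly the gap described above; the fix under this approach is to keep appending sample sentences for every synonym, which is what makes ML-only training so data-hungry.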
An ML-only approach can also be inaccurate, because high success rates require extensive training. Our prescription combines fundamental meaning (FM) with ML to make it easier to build NL-capable chatbots out of the gate, whether or not rich training data is available. We then use ML to further train the chatbot over time.
By combining the two, enterprise developers can solve for real-world dynamics and gain the inherent benefits of both approaches, while eliminating the shortcomings each has on its own.