Salesforce is out front this year in an area where it has historically lagged: natural language processing, a longtime deficiency in its platform. With Siri, Alexa, Google Assistant and other personal assistants, consumers are acclimating to the technology, but the business world demands far more accuracy and utility from NLP to deliver bottom-line benefits.
Salesforce calls its flavor of NLP decaNLP, and it has been making waves since summer 2018. The idea is simple: Any given NLP app requires a model, and different apps are typically built on different models. The Salesforce innovation seeks to provide an all-in-one model, a Swiss Army knife, upon which any number of apps can be built. One model, many apps -- it's a great idea.
Per Salesforce's Natural Language Decathlon paper, the 10 tasks the model is designed to perform are question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, relation extraction, goal-oriented dialogue, semantic parsing and pronoun resolution.
That's quite a list. To its credit, Salesforce is boosting the model's credibility by open sourcing all supporting literature and throwing down the challenge to its development community: Build apps that mix and match tasks on the same model -- for example, a chatbot that does sentiment analysis and pronoun resolution while engaging in goal-oriented dialogue.
How Salesforce decaNLP works
Salesforce backs this bold claim with substantive technical detail. That's useful, because it lets us critically analyze the paradigm itself, not just what it may or may not do in application.
To begin, there's some interesting under-the-hood functionality: DecaNLP derives its utility from what Salesforce calls a multitask question answering network (MQAN), which learns adaptively and generalizes its learning to new problems. Like the human brain, it takes in semantic information and applies it in different ways.
The decaNLP approach is interesting in a couple of ways. First, it makes for a fluid architecture that should truly have the versatility Salesforce claims. Every task is framed as question answering: each begins with a body of facts (the context) against which task-specific questions are processed. The context and questions are then broken down into their component parts (encoded) and aligned (dynamically mapped) to construct a response.
Second, there is considerable economy and scalability in the mapping scheme, which delivers new alignments with each question -- ideally, learning as it goes.
The result, per the decaNLP literature, is that the MQAN infers the format of its response from the format of the question being asked.
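That unifying framing -- every task posed as a question over a context, with the question itself signaling the expected response format -- can be illustrated with a minimal sketch. The data structure and examples below are hypothetical, not the actual MQAN API, though the question phrasings mirror those used in the decaNLP literature:

```python
# Illustrative sketch: how decaNLP-style training data frames disparate
# tasks as question answering. One (question, context) interface serves
# every task; the question, not a separate code path, selects both the
# behavior and the response format.

from typing import NamedTuple


class QAExample(NamedTuple):
    task: str
    question: str   # tells the model what to do and what format to answer in
    context: str    # the facts the answer must be drawn from
    answer: str     # target output during training


examples = [
    QAExample(
        task="sentiment analysis",
        question="Is this sentence positive or negative?",
        context="The support team resolved my issue in minutes.",
        answer="positive",
    ),
    QAExample(
        task="summarization",
        question="What is the summary?",
        context="Salesforce released decaNLP, a single model trained "
                "jointly on ten natural language processing tasks.",
        answer="Salesforce released a single model for ten NLP tasks.",
    ),
    QAExample(
        task="machine translation",
        question="What is the translation from English to German?",
        context="Good morning.",
        answer="Guten Morgen.",
    ),
]

# Every task reduces to the same triple shape the single model consumes.
for ex in examples:
    print(f"[{ex.task}] Q: {ex.question} -> {ex.answer}")
```

Note how the sentiment question constrains the output to a label, while the translation question calls for free text -- the same model interface handles both, which is the economy the mapping scheme is after.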
"Very excited to announce the natural language decathlon benchmark and the first single joint deep learning model to do well on ten different nlp tasks including question answering, translation, summarization, sentiment analysis, ++ https://t.co/R5wbnAQcC3" -- Richard (@RichardSocher) June 20, 2018
Richard Socher, chief scientist at Salesforce, tweets on a Natural Language Decathlon benchmark.
How decaNLP might not work
The bottleneck in all of this is the context and the necessity of configuring it around the problem the app is solving in order to reduce ambiguity.
Just as no human knows everything about anything, any body of facts associated with a task will have some arbitrary level of incompleteness -- so, the more facts, the better. But the larger the set of facts, the denser the subsequent mappings will be. When tasks are mixed and matched (and aren't similar in kind -- summarization and goal-oriented dialogue, for instance), the role of a particular fact can blur, degrading the relevance of a result or introducing ambiguity.
This sort of thing can be fixed in two ways: through exhaustive training on the data set containing the context or through painstaking configuration. And both of those require a great deal of front-end effort and expertise, which kind of defeats the purpose of NLP that's easy to use and highly flexible.
The most realistic expectation of decaNLP
If all this proves true, then we can expect decaNLP to truly shine in some tasks -- look for Salesforce Einstein bots, coming soon. But it will also be shaky with others -- summarization, natural language inference -- if the scope of the NLP app isn't kept narrow from the outset or if disparate tasks live side by side in the app.
Is this quibbling? Probably. Technology this ambitious isn't going to be anywhere near perfect out of the gate, and the truth is: We still don't know what we really want from it.
But one thing is for certain: The genie isn't going back into the bottle. The only path forward is straight ahead, and Salesforce NLP is engaging us in the most effective way possible -- by dispensing AI technology that challenges us.