At Google, we’ve always had a soft spot for language. We began by attempting to translate the web. More recently, we developed machine learning techniques that help us better understand the intent behind Search queries. Over time, our advances in these and other areas have made it increasingly easy to organise and access the mountains of information conveyed by the written and spoken word.

However, there is always room for advancement. Language is an astonishingly nuanced and adaptable medium of communication. It can be literal or metaphorical, flowery or straightforward, inventive or informative. Language’s versatility makes it one of humanity’s greatest tools — and one of the most perplexing problems in computer science.

LaMDA, our most recent research breakthrough, adds pieces to one of the most exciting sections of that puzzle: conversation.


The Long Road to LaMDA

LaMDA’s conversational abilities have been years in the making. Like many recent language models, including BERT and GPT-3, it is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another, and then predict which words it thinks will come next.
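To make that idea concrete, here is a minimal, illustrative sketch in PyTorch of a Transformer-style model that reads a sequence of tokens, attends over how they relate to one another, and predicts the next token. The names and sizes are invented for this example; it is not LaMDA and is far smaller than any production model.

```python
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    """A toy next-word predictor: embed tokens, apply self-attention, project to vocabulary logits."""

    def __init__(self, vocab_size=1000, d_model=128, n_heads=4, n_layers=2, max_len=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)    # token id -> vector
        self.pos_emb = nn.Embedding(max_len, d_model)          # position -> vector
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # stack of self-attention blocks
        self.lm_head = nn.Linear(d_model, vocab_size)           # vector -> logits over the next token

    def forward(self, tokens):                                  # tokens: (batch, seq_len) of token ids
        seq_len = tokens.size(1)
        positions = torch.arange(seq_len, device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(positions)
        # Causal mask: each position may only attend to itself and earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.encoder(x, mask=mask)
        return self.lm_head(h)                                  # logits for every position in the sequence

model = TinyTransformerLM()
context = torch.randint(0, 1000, (1, 10))   # stand-in for a tokenised sentence or paragraph
logits = model(context)
next_token = logits[0, -1].argmax()          # best guess at the next token (untrained here, so effectively random)
print(next_token.item())
```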

However, unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: does the response make sense in a given conversational context?

LaMDA builds on prior Google research, published in 2020, which demonstrated that Transformer-based language models trained on dialogue can learn to converse about virtually anything. Since then, we’ve also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.
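As a concrete illustration, the following hypothetical sketch shows one simple way qualities like sensibleness and specificity could be tallied from binary rater judgements. The data, class, and function names are invented for this example and do not describe Google’s actual evaluation pipeline.

```python
from dataclasses import dataclass

@dataclass
class RatedResponse:
    context: str     # the conversation so far
    response: str    # the model's reply
    sensible: bool   # did raters judge the reply to make sense in this context?
    specific: bool   # was the reply specific to this context rather than generic?

def score(ratings: list[RatedResponse]) -> dict[str, float]:
    """Average binary rater judgements into overall metrics."""
    n = len(ratings)
    return {
        "sensibleness": sum(r.sensible for r in ratings) / n,
        "specificity": sum(r.specific for r in ratings) / n,
    }

ratings = [
    RatedResponse("I just started learning the guitar.",
                  "That's great! Which songs are you practising first?", True, True),
    RatedResponse("I just started learning the guitar.",
                  "That's nice.", True, False),                  # makes sense, but generic
    RatedResponse("I just started learning the guitar.",
                  "Penguins live in Antarctica.", False, False), # off-topic
]
print(score(ratings))  # sensibleness ≈ 0.67, specificity ≈ 0.33
```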


Responsibility First

While these preliminary results are encouraging, and we look forward to sharing more in the near future, sensibleness and specificity are not the only characteristics we seek in models such as LaMDA. Additionally, we are investigating dimensions such as “interestingness” by evaluating whether responses are insightful, unexpected, or witty. At Google, we are also concerned with factuality (whether LaMDA sticks to the facts, which language models frequently struggle with), and are exploring ways to ensure that LaMDA’s responses are not only compelling but also accurate.

However, the most critical question we ask about our technologies is whether they stay true to our AI Principles. While language is one of humanity’s most powerful tools, it, like all tools, can be misused. Models trained on language can propagate that misuse, for example by internalising biases, mirroring hateful speech, or replicating misleading information. And even when the language on which the model is trained is thoroughly vetted, the model itself can still be misused.

When developing technologies such as LaMDA, our first priority is to minimise such risks. Having spent years researching and developing these technologies, we are intimately familiar with the issues that can arise with machine learning models, such as unfair bias. That is why we develop and make available resources that researchers can use to analyse models and the data used to train them; why we have scrutinised LaMDA at every stage of its development; and why we will continue to do so as we work to incorporate conversational capabilities into more of our products.