“Crisis” may seem a dramatic way to describe the current state of natural language understanding, given the impressive demonstrations of large language models. But an inflection point is here, or coming soon, at which progress stalls. Last week’s recommended article by Walid Saba argued that current ML approaches are flawed.
This week, John Ball makes the case for combining linguistics, neuroscience, and computer science in an approach to NLU that is closer to how humans learn, without “reading thousands of books and memorising collocation patterns and probabilities”. He walks through an example of how this works (with a link to a short video that is worth watching after reading the article). This is not a silver bullet, since applying it demands domain expertise and effort, with their attendant costs, but it does promise better NLU results.
Saba and Ball both have companies in stealth mode, yet both regularly publish articles on their approaches, along with supporting research, for further evaluation.