AI researchers launch SuperGLUE, a rigorous benchmark for language understanding

Facebook AI Research, together with Google’s DeepMind, the University of Washington, and New York University, today introduced SuperGLUE, a series of benchmark tasks to measure the performance of modern, high-performance language-understanding AI.

SuperGLUE was created on the premise that deep learning models for conversational AI have “hit a ceiling” and need greater challenges. It uses Google’s BERT as a model performance baseline. Considered state of the art in many regards in 2018, BERT has since been surpassed by a number of models this year, such as Microsoft’s MT-DNN, Google’s XLNet, and Facebook’s RoBERTa, all of which are based in part on BERT and achieve performance above a human baseline average.

SuperGLUE follows the General Language Understanding Evaluation (GLUE) benchmark, introduced in April 2018 by researchers from NYU, the University of Washington, and DeepMind. SuperGLUE is designed to be more challenging than the GLUE tasks and to encourage the development of models capable of handling more complex, nuanced language.

This article was originally published on VentureBeat.
