Researchers propose bias fix for GPT-3 and other language models


Few-shot learning — the ability to learn a task from just a handful of examples — is a key aspect of human intelligence. Large natural language models like OpenAI’s GPT-3 can perform few-shot learning without any fine-tuning, simply by conditioning on examples in the prompt. But despite that promise, new research finds that the accuracy of language models — particularly GPT-3 — can be “highly unstable” absent calibration.
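The calibration idea the researchers study can be illustrated with a small sketch. The probability vectors below are hypothetical, but the adjustment follows the contextual-calibration recipe: query the model with a content-free input (such as “N/A”) to estimate the bias the prompt induces toward each label, then rescale the model’s real predictions by the inverse of those bias estimates.

```python
import numpy as np

# Hypothetical label probabilities the model assigns for one real test input.
p = np.array([0.7, 0.2, 0.1])

# Hypothetical probabilities for a content-free input ("N/A"): with no content,
# an unbiased prompt would spread mass evenly, so skew here reveals prompt bias.
p_cf = np.array([0.6, 0.3, 0.1])

# Contextual calibration: divide out the estimated bias, then renormalize
# so the result is a valid probability distribution over labels.
q = p / p_cf
q = q / q.sum()

print(q.round(3))
```

Because the first label was already favored by the content-free input, calibration shrinks its advantage rather than taking the raw scores at face value.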

The research, which was coauthored by scientists at UC Berkeley, UC Irvine, and the University of Maryland, is the latest to find flaws in GPT-3 and other models like it. OpenAI itself notes that GPT-3 places words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.”
