Self-Supervised Learning and Large Language Models

By The Gradient

November 16, 2021


In an interview, Stanford PhD candidate Alex Tamkin discusses his research, which focuses on understanding, building, and controlling pre-trained models, especially in domain-general or multimodal settings.

The conversation covers viewmaker networks, the opportunities and risks of foundation models, the impacts of large language models, research culture, scientific communication, and more.

From The Gradient