Xuezhi Wang
5 papers · Latest:
Gemini: A Family of Highly Capable Multimodal Models
Gemini is a new family of multimodal AI models excelling in image, audio, video, and text understanding, achieving state-of-the-art results on numerous benchmarks, including human-expert-level performance on MMLU.
Large Language Models Can Self-Improve
This paper shows that large language models can improve their reasoning abilities by fine-tuning on their own high-confidence, self-generated answers without any labeled data.
PaLM: Scaling Language Modeling with Pathways
PaLM is a 540-billion parameter Transformer language model that achieves state-of-the-art few-shot learning performance across diverse benchmarks, demonstrating significant benefits from scaling.
Self-Consistency Improves Chain of Thought Reasoning in Language Models
Self-consistency is a new decoding strategy that improves chain-of-thought reasoning in language models by sampling diverse reasoning paths and selecting the most consistent answer.
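The core of the strategy reduces to a majority vote: sample several independent chain-of-thought reasoning paths, extract each path's final answer, and return the answer that appears most often. A minimal sketch (the answer strings below are hypothetical sampled outputs, not from the paper):

```python
from collections import Counter

def self_consistency(answers):
    """Given final answers parsed from independently sampled
    reasoning paths, return the most frequent (most consistent) one."""
    return Counter(answers).most_common(1)[0][0]

# e.g. final answers extracted from 5 sampled reasoning paths
sampled = ["18", "18", "26", "18", "26"]
print(self_consistency(sampled))  # -> 18
```

In practice the paths come from temperature-based sampling of the same prompt, so diverse reasoning routes that converge on one answer reinforce it.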
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Chain of thought prompting, which involves providing intermediate reasoning steps in prompts, significantly enhances large language models' performance on complex reasoning tasks.
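The prompting format can be illustrated with the paper's well-known few-shot exemplar: the demonstration answer spells out intermediate steps, nudging the model to reason before giving its final answer on the new question.

```python
# Few-shot chain-of-thought prompt: the exemplar answer shows its
# intermediate arithmetic, so the model imitates step-by-step reasoning.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?\n"
    "A:"
)
print(cot_prompt)
```

The prompt would then be sent to a language model, which is assumed to continue the final "A:" with its own reasoning steps.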