Denny Zhou
5 papers · Latest:
Gemini: A Family of Highly Capable Multimodal Models
Gemini is a new family of multimodal AI models that excels at image, audio, video, and text understanding, achieving state-of-the-art results on numerous benchmarks, including human-expert-level performance on MMLU.
Transcending Scaling Laws with 0.1% Extra Compute
UL2R fine-tuning significantly improves large language model performance and scaling efficiency with only 0.1% extra compute, enabling substantial computational savings and emergent abilities.
PaLM: Scaling Language Modeling with Pathways
PaLM is a 540-billion parameter Transformer language model that achieves state-of-the-art few-shot learning performance across diverse benchmarks, demonstrating significant benefits from scaling.
Self-Consistency Improves Chain of Thought Reasoning in Language Models
Self-consistency is a new decoding strategy that improves chain-of-thought reasoning in language models by sampling diverse reasoning paths and selecting the most consistent answer.
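The core of self-consistency is simple to sketch: sample several chain-of-thought completions, extract each one's final answer, and return the answer that appears most often. A minimal illustration (the sampled answers below are toy stand-ins, not model outputs):

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent final answer across sampled reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for final answers parsed from 5 sampled chain-of-thought completions.
sampled_answers = ["18", "26", "18", "18", "9"]
print(majority_vote(sampled_answers))  # -> 18
```

In practice the answers come from sampling the model with a nonzero temperature, so diverse reasoning paths that converge on the same answer reinforce each other.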
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Chain-of-thought prompting, which provides intermediate reasoning steps in the prompt's exemplars, significantly improves large language models' performance on complex reasoning tasks.
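The technique amounts to a prompt format: the few-shot exemplar shows its reasoning before stating the answer, which encourages the model to do the same on the new question. A minimal sketch (the arithmetic word problems are illustrative):

```python
# Chain-of-thought prompt: the worked exemplar includes intermediate reasoning
# steps ("2 cans of 3 balls is 6 balls..."), not just the final answer.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
# The prompt would be sent to a language model; the exemplar's step-by-step
# answer primes the model to emit its own reasoning before the final answer.
print(cot_prompt.count("Q:"))  # -> 2 (one worked exemplar, one question to solve)
```

Contrast with standard few-shot prompting, where the exemplar's answer would be just "The answer is 11." with no intermediate steps.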