You can now generate real-time speech that sounds conversational. Microsoft just open-sourced VibeVoice, a real-time text-to-speech system with ~300 ms time-to-first-audio and streaming text input. It handles long conversations without falling apart.

𝗧𝗵𝗶𝘀 𝗺𝗼𝗱𝗲𝗹 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝘀 𝗹𝗼𝗻𝗴, 𝗺𝘂𝗹𝘁𝗶-𝘀𝗽𝗲𝗮𝗸𝗲𝗿 𝘀𝗽𝗲𝗲𝗰𝗵. It produces up to 90 minutes of audio with up to four distinct speakers, and turn-taking stays consistent over long sessions.

𝗜𝘁 𝘄𝗼𝗿𝗸𝘀 𝗯𝘆 𝗿𝗲𝗱𝘂𝗰𝗶𝗻𝗴 𝘁𝗶𝗺𝗲 𝗿𝗲𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻. Audio is compressed into semantic and acoustic tokens that run at 7.5 Hz instead of the much higher frame rate of raw audio (rough token math below). A language model predicts the coarse structure, and a diffusion head restores the acoustic detail.

𝗜𝘁 𝗮𝗹𝗹𝗼𝘄𝘀 𝗹𝗼𝘄-𝗹𝗮𝘁𝗲𝗻𝗰𝘆 𝘀𝘁𝗿𝗲𝗮𝗺𝗶𝗻𝗴 𝗮𝘂𝗱𝗶𝗼. The real-time variant accepts text incrementally, and first speech arrives in ~300 ms. A WebSocket demo shows live generation; a client sketch follows at the end. The code is MIT-licensed and intended for research use. The repo has already passed 20k GitHub stars.
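Back-of-the-envelope math on that 7.5 Hz figure (the 24 kHz output rate is my assumption, not a number from the post):

```python
# Rough token-budget math for a 90-minute session.
# Assumptions (mine, not from the VibeVoice repo): 24 kHz output audio,
# 7.5 tokens/s for the compressed representation as stated above.
SECONDS = 90 * 60                   # 90 minutes
tokens = SECONDS * 7.5              # coarse tokens the LM must predict
raw_samples = SECONDS * 24_000      # raw waveform samples at 24 kHz
print(f"{tokens:,.0f} tokens vs {raw_samples:,} samples")
# -> 40,500 tokens vs 129,600,000 samples: a 3,200x shorter sequence,
#    which is what keeps 90-minute generation inside an LM's context budget.
```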
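And a minimal sketch of the LM-plus-diffusion split, with stand-in components. None of these names, shapes, or vocabulary sizes come from the VibeVoice codebase; this only illustrates the division of labor:

```python
# Hypothetical sketch: cheap LM step per 7.5 Hz token, diffusion step per chunk.
import numpy as np

TOKEN_RATE_HZ = 7.5
SAMPLE_RATE = 24_000                                   # assumed output rate
SAMPLES_PER_TOKEN = int(SAMPLE_RATE / TOKEN_RATE_HZ)   # 3,200 samples per token

def lm_next_token(context: list[int]) -> int:
    """Stand-in for the language model: predicts the next coarse token."""
    return int(np.random.randint(0, 1024))             # fake 1,024-token vocabulary

def diffusion_decode(token: int) -> np.ndarray:
    """Stand-in for the diffusion head: maps one coarse token to a waveform chunk."""
    return np.zeros(SAMPLES_PER_TOKEN, dtype=np.float32)  # silence placeholder

context: list[int] = []
audio_chunks = []
for _ in range(int(10 * TOKEN_RATE_HZ)):     # generate ~10 seconds of audio
    tok = lm_next_token(context)             # coarse structure: what to say next
    context.append(tok)
    audio_chunks.append(diffusion_decode(tok))  # acoustic detail, chunk by chunk
audio = np.concatenate(audio_chunks)
```

Because each token decodes to a fixed-length chunk, audio can be emitted as soon as the first token is decoded rather than after the whole sequence finishes.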
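For the streaming side, here is what an incremental client could look like. The endpoint URL and message schema are invented for illustration; the repo's WebSocket demo defines the real protocol:

```python
# Hedged sketch of an incremental-streaming TTS client (hypothetical protocol).
import asyncio
import json
import websockets  # pip install websockets

async def stream_tts(text_chunks):
    # "ws://localhost:8000/tts" is a made-up endpoint, not the demo's actual URL.
    async with websockets.connect("ws://localhost:8000/tts") as ws:
        async def send_text():
            for chunk in text_chunks:
                await ws.send(json.dumps({"text": chunk}))  # feed text as it arrives
            await ws.send(json.dumps({"eos": True}))        # signal end of input
        sender = asyncio.create_task(send_text())

        audio = bytearray()
        async for msg in ws:            # audio frames stream back concurrently;
            if isinstance(msg, bytes):  # ideally the first within ~300 ms
                audio.extend(msg)       # collect PCM bytes (or hand to a player)
        # assumes the server closes the socket when synthesis finishes
        await sender
        return bytes(audio)

asyncio.run(stream_tts(["Hello", " there,", " world."]))
```

The point of the pattern: text goes up and audio comes down on the same socket, so playback can start before the full script has even been written.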