Picture this scenario: you awaken in the morning but are unable to speak. You can hear the words you want to say in your head but are helpless to enunciate them. This complete loss of speech is faced by many sufferers of neuromuscular or neurodegenerative diseases such as Parkinson’s, multiple sclerosis, and ALS, and by victims of traumatic brain injury or stroke. Now, try to imagine what it would take to teach a computer to speak for you. If you could train a machine to interpret signals from your brain, which avenue would make the most sense? Training a computer to identify how your brain generates and responds to the acoustics of speech sounds? Or teaching it to recognize how the brain coordinates movements of the vocal tract, including the lips, tongue, and jaw? As it turns out, a recent study published in Nature revealed that the latter provides an effective first step toward generating synthetic speech: brain-generated signals drive a virtual “vocal tract,” much as a keyboardist’s fingers on a synthesizer produce simulated tones and timbres. Working with epilepsy patients who could speak normally and who had electrodes temporarily implanted in their brains to map their seizures, investigators recorded activity from the language-producing regions of the brain as the patients read passages of text. From audio recordings of the participants’ reading, the researchers inferred the vocal tract movements required to make specific sounds. Machine learning was then used to link the brain activity generated during speaking with the physical vocal tract movements needed to produce those sounds, yielding synthesized speech. Transcribers of this AI-synthesized speech fared better when choosing from shorter lists of candidate words, but even so, these first steps toward a brain-driven speech prosthesis are significant.
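The two-stage decoding described above can be sketched in code. The sketch below is purely illustrative: the dimensions are made up, and simple linear maps stand in for the recurrent neural networks the study actually trained; it only shows the shape of the pipeline, neural activity to vocal tract kinematics to acoustic features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not the study's real feature sizes.
N_ELECTRODES = 256   # recorded neural channels
N_ARTICULATORS = 33  # kinematic features of lips, tongue, jaw, larynx
N_ACOUSTICS = 32     # spectral features that would drive a speech synthesizer

# Placeholder linear maps standing in for the study's trained recurrent networks.
W_neural_to_kinematics = rng.standard_normal((N_ARTICULATORS, N_ELECTRODES)) * 0.01
W_kinematics_to_acoustics = rng.standard_normal((N_ACOUSTICS, N_ARTICULATORS)) * 0.1

def decode(neural_frames: np.ndarray) -> np.ndarray:
    """Two-stage decode: neural activity -> vocal tract movements -> acoustics."""
    # Stage 1: estimate articulator (vocal tract) movements from brain signals.
    kinematics = neural_frames @ W_neural_to_kinematics.T
    # Stage 2: estimate acoustic features from those movements.
    acoustics = kinematics @ W_kinematics_to_acoustics.T
    return acoustics

# One second of simulated neural recordings at 200 frames per second.
frames = rng.standard_normal((200, N_ELECTRODES))
acoustic_features = decode(frames)
print(acoustic_features.shape)  # (200, 32)
```

The key design point this mirrors is the intermediate articulatory representation: rather than mapping brain activity straight to sound, the decoder first recovers how the vocal tract would move, which the study found made the problem more tractable.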
Next steps involve further refining and testing the system with patients who are unable to speak. Continuing these crucial advances in machine learning, neuroscience, and linguistics together is, with little doubt, the best bet for giving a voice to those who go unheard.