# Rhubarb Lip-Sync
Rhubarb Lip-Sync is a command-line tool that automatically creates mouth animation from voice recordings. You can use it for characters in computer games, in animated cartoons, or in any other project that requires animating mouths based on existing recordings.
Right now, Rhubarb Lip-Sync produces files in XML format (a special text format). If you're a programmer, this makes it easy for you to use the output in whatever way you like. If you're not a programmer, however, there is currently no direct way to import the result into your favorite animation tool. If this is what you need, feel free to create an issue telling me what tool you're using. I might add support for a few popular animation tools in the future.
## Mouth positions
At the moment, Rhubarb Lip-Sync uses a fixed set of eight mouth positions, named A through H. These mouth positions are based on the six mouth shapes (A-F) originally developed at the Hanna-Barbera studios for classic shows such as Scooby-Doo and The Flintstones.
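If you're writing your own player or exporter, a natural first step is to map each of these eight shapes to a piece of mouth artwork. The sketch below is purely illustrative: the dictionary name and the file names are made up for this example and are not part of Rhubarb Lip-Sync.

```python
# Hypothetical mapping from Rhubarb Lip-Sync's mouth shapes (A-H) to your own
# mouth artwork. The file names are placeholders; substitute whatever assets
# your game or animation project actually uses.
MOUTH_SPRITES = {
    "A": "mouth-a.png",
    "B": "mouth-b.png",
    "C": "mouth-c.png",
    "D": "mouth-d.png",
    "E": "mouth-e.png",
    "F": "mouth-f.png",
    "G": "mouth-g.png",
    "H": "mouth-h.png",
}
```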
## How to run Rhubarb Lip-Sync
Rhubarb Lip-Sync is a command-line tool that is currently available for Windows and OS X.
- Download the latest release and unzip the file anywhere on your computer.
- Call `rhubarb`, passing it a WAVE file as argument and redirecting the output to a file. This might look like this: `rhubarb my-recording.wav > output.xml`.
- Rhubarb Lip-Sync will analyze the sound file and print the result to `stdout`. If you've redirected `stdout` to a file as above, you will now have an XML file containing the lip-sync data. (See below for doing the same from a script.)
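If you'd rather call the tool from a script than from the shell, the following Python sketch does the same thing as the command above: it runs `rhubarb` on a WAVE file, captures the XML from `stdout`, and writes it to `output.xml`. It assumes the `rhubarb` executable is on your `PATH`; otherwise pass its full path.

```python
import subprocess

# Run Rhubarb Lip-Sync on a WAVE file and capture the XML it prints to stdout.
# Equivalent to: rhubarb my-recording.wav > output.xml
result = subprocess.run(
    ["rhubarb", "my-recording.wav"],
    capture_output=True,
    text=True,
    check=True,  # raise an exception if rhubarb exits with an error
)

with open("output.xml", "w", encoding="utf-8") as f:
    f.write(result.stdout)
```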
## How to use the output
The output of Rhubarb Lip-Sync is an XML file containing information about the sounds in the recording and -- more importantly -- the resulting mouth positions. A (shortened) sample output looks like this:
```xml
<?xml version="1.0" encoding="utf-8"?>
<rhubarbResult>
  <info>
    <soundFile>C:\...\my-recording.wav</soundFile>
  </info>
  <phones>
    <phone start="0.00" duration="0.08">None</phone>
    <phone start="0.08" duration="0.16">M</phone>
    <phone start="0.24" duration="0.15">AA</phone>
    <phone start="0.39" duration="0.04">R</phone>
    ...
  </phones>
  <mouthCues>
    <mouthCue start="0.00" duration="0.24">A</mouthCue>
    <mouthCue start="0.24" duration="0.15">F</mouthCue>
    <mouthCue start="0.39" duration="0.04">B</mouthCue>
    <mouthCue start="0.43" duration="0.14">H</mouthCue>
    <mouthCue start="0.57" duration="0.22">C</mouthCue>
    ...
  </mouthCues>
</rhubarbResult>
```
- The `info` element tells you the name of the original sound file.
- The `phones` element contains the individual sounds found in the recording. You won't usually need them.
- The `mouthCues` element tells you which mouth shape needs to be displayed at what time interval. The `start` and `duration` values are in seconds. There are no gaps between mouth cues, so `entry.start` + `entry.duration` = `nextEntry.start`. (The sketch after this list shows one way to read these values.)
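To make the structure concrete, here is a short Python sketch that parses the XML above and looks up the mouth shape for a given playback time. It relies only on the element and attribute names shown in the sample output; the function names and the example time are just for illustration.

```python
import xml.etree.ElementTree as ET

def load_mouth_cues(path):
    """Read Rhubarb Lip-Sync's XML output into a list of (start, end, shape) tuples."""
    root = ET.parse(path).getroot()  # <rhubarbResult>
    cues = []
    for cue in root.find("mouthCues"):  # iterate over the <mouthCue> elements
        start = float(cue.get("start"))
        end = start + float(cue.get("duration"))
        cues.append((start, end, cue.text))
    return cues

def shape_at(cues, time):
    """Return the mouth shape to show at `time` (in seconds), or None after the last cue."""
    for start, end, shape in cues:
        if start <= time < end:
            return shape
    return None

cues = load_mouth_cues("output.xml")
print(shape_at(cues, 0.3))  # with the sample above, this would print "F"
```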
## Tell me what you think!
Right now, Rhubarb Lip-Sync is very much a work in progress. If you need help or have any suggestions, feel free to create an issue.