So, my general idea is to aim for about 8 hours of work per 8 minutes of film. That’s an amount of sweat I think someone can realistically put in, and if the 8 minutes comes out reasonably well they’ll feel it was worth it. With that goal in mind, the first hurdle is figuring out which animation techniques let us quickly communicate the character’s important states through their body language.
Searching various “auto sync with audio” terms, the easiest candidate for automation turned out to be a Blender tool that can sync an audio file with keyframes; specifically, the option is called Bake Sound to F-Curves:
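Conceptually, what the bake produces is an amplitude envelope: one value per frame derived from the loudness of the audio at that point. This is not Blender’s actual implementation, just a rough pure-Python sketch of the idea using per-frame RMS over raw samples (the function name and parameters are my own):

```python
import math

def bake_envelope(samples, sample_rate, fps=24):
    """Reduce raw audio samples to one RMS amplitude value per frame."""
    per_frame = int(sample_rate / fps)  # samples contributing to each frame
    frames = []
    for start in range(0, len(samples), per_frame):
        window = samples[start:start + per_frame]
        rms = math.sqrt(sum(s * s for s in window) / len(window))
        frames.append(rms)
    return frames

# One second of audio at 8 kHz: first half silence, second half a 440 Hz tone.
rate = 8000
samples = [0.0] * (rate // 2)
samples += [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate // 2)]
env = bake_envelope(samples, rate, fps=24)
# Frames during the silent half sit near zero; frames during the tone sit
# near the sine's RMS (~0.707) -- that per-frame curve is what gets keyframed.
```

The point is that the result is just a curve of non-negative values, which is why it needs reshaping before it can drive a mouth control directly.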
That being said, after testing and googling I found there’s very little control over the generated F-Curve, so after watching the tutorial here: https://www.youtube.com/watch?v=OAes-ITNaGA&t=490s I learned how to use drivers. In this process you basically assign the F-Curve to a variable you create, and then link another keyframed property to that variable’s value. The main benefit, beyond organizational cleanliness, is that a driver’s input can be modified: by adding an expression layer on top of the baked F-Curve, we can now reshape it. In this case I had to negate the F-Curve, because the mouth opens when the mouth control moves downward, while the audio wave moves upward when there is sound. You can see that the input F-Curve is var, and in the Expression line I can adapt it with a formula, which in this case is just a negative sign.
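The driver’s job here is tiny: it takes the baked amplitude value (the variable var) and returns -var, so louder audio pushes the mouth control downward, i.e. open. A minimal sketch of that mapping in plain Python (the real hookup is done in Blender’s driver editor, and these baked values are made up for illustration):

```python
def mouth_offset(var):
    """Mimics the scripted driver expression '-var'."""
    return -var

# Hypothetical baked amplitude per frame, normalized 0..1.
baked = [0.0, 0.2, 0.7, 0.1]
# Negative values move the mouth control down, which opens the mouth.
mouth = [mouth_offset(v) for v in baked]
```

Any other reshaping (scaling, clamping, smoothing) could go in the same expression; the negation is just the minimum needed to match the rig’s convention.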
Afterward, I added the audio file itself so the rendered animation would include the sound. Here’s the first draft, which I’m very pleased with; I think it validates that syncing sound to mouth movement does a good enough job.