This page aims to keep a log of a new project I'm working on, a 3D animated film.
So what's an open source film? Well, it's a film created using only open source (or free) software, in my case primarily Blender, as you can tell by the logo. All assets created for this project will be made available to the public. I'm still trying to figure out the best mechanism to do that.
The project is open for artists to contribute if they so desire. You can look at the collaboration page for the current "call for content".
I started by trying to write a short film, but the short medium was not suitable for telling a story I felt strongly about. I came up with a few concepts and I even wrote a bunch of scripts. You can read them here: www.azproductions.ca/writing.
I racked my brain trying to come up with something I could feel passionate about, but all I came up with was heartache. It was a painstaking process that didn't result in anything worth pursuing.
I decided to pursue a more challenging endeavor: make a feature-length story, but divide it into episodes. The idea is to create achievable milestones. Finishing a 5-7 minute episode will be a lot faster than finishing a 120 minute movie.
With that in mind I came up with a story line you can read here.
I'll keep this site updated with my progress and any output related to the Open Source Movie (OSM) -- I think acronyms make things sound cool.
So I think I'm drawing to a close on the script. I'm sure there are ways to improve it, but at this point I want to move on to some previsualization of the first scene of the script.
I've been debating whether I should hire someone to do a script analysis and suggest areas of improvement. My problem is that this type of service is usually hit and miss. It would be nice to have a bunch of people willing to read the script and provide independent feedback. That would be highly useful, but it's hard to find such an audience. Guess we'll see.
I thought I'd share the updates I've been making to the script. I use git to keep track of the different versions, as I find color-coded revisions aren't enough. I make a lot of changes and I'd like to be able to roll back through them. Git also lets me enter a commit message per change, so I know what I changed. Anyway, here is the story update log:
I finally have a script. I'm still working on polishing it, but I thought I'd share the storyboard.
I spent the last couple of days frustrated with the flex rig file. I asked a question on Blender Stack Exchange, but no one answered: https://blender.stackexchange.com/questions/167308/cg-cookie-flex-rig-linking-problems
There were four problems:
Have to admit this behaviour is kinda weird all around. Not sure if it's because the file was originally made in 2.79 or what. When I used Rigify with MB-Lab, things were a lot smoother. Anyway, moving on.
FYI, the file can be downloaded from here.
I made some updates to the Flex-Rig Assets:
I'll be uploading the update in the next while, as I'm still cleaning up the file.
I wanted to share the steps for adding your own clothing assets to the characters:
Depending on time, I might do a video tutorial on this.
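For reference, here's a minimal sketch of the core step, assuming the standard Blender approach of parenting the clothing mesh to the character's armature with automatic weights. All the names are placeholders:

import bpy

# Placeholder names; substitute your own clothing mesh and character rig.
clothing = bpy.data.objects["jacket"]
armature = bpy.data.objects["character_rig"]

# Select the clothing, then make the armature the active object.
bpy.ops.object.select_all(action='DESELECT')
clothing.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature

# Parent with automatic weights so the clothing deforms with the body.
# For tight clothing you'd usually follow up by transferring the body's
# weights to the clothing mesh and cleaning them up by hand.
bpy.ops.object.parent_set(type='ARMATURE_AUTO')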
I'm back at the technical work again. I think I've settled on using the CG Cookie Flex Rig. The character is quite decent and the rig is very nice. It needed IK/FK snapping for the arms and legs, so I did some code splicing between the MB-Lab Rigify setup and the CG Cookie rig, pulling out the bits and pieces needed to implement IK/FK snapping. That seemed to work pretty well. You can download the updated blend file here.
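For the curious, the core idea behind the snapping is pretty simple. Here's a rough sketch; the bone names are hypothetical (the Flex Rig uses its own naming), and this is only the FK-to-IK half of the job:

import bpy

# Move the IK target onto the FK hand so switching to IK doesn't pop.
rig = bpy.context.object
pose_bones = rig.pose.bones

fk_hand = pose_bones["hand_fk.L"]
ik_target = pose_bones["hand_ik.L"]

# Copy the FK hand's pose-space matrix onto the IK target, then force
# a depsgraph update so the constraints re-evaluate.
ik_target.matrix = fk_hand.matrix
bpy.context.view_layer.update()

# A complete implementation also snaps the pole target and, for the
# IK-to-FK direction, copies the matrix of every bone in the chain.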
Here is my plan for the next while
Yet another update (YAU)... well, it's been a while. The updates have been less frequent lately, mainly because I'm in the writing process, and it takes me a long time to write something. I started off thinking I'd do a short film, but no story I came up with fit the short film form; it was either a feature-length script or a series format. Then I got some really negative feedback from a fellow who claimed he wanted to help me, though it didn't look like he did: just a sequence of negative comments without any suggested improvements. To be fair, after I pointed that out to him, he backtracked, insisting he was just being straightforward, and then gave me some feedback which, to be honest, was stuff I'd already heard elsewhere in a much more constructive way. One good thing that came out of this unfortunate experience is that I decided to hire a guy to go over the script idea I had at the time. I spent a weekend with him and we hammered out some ideas. I thought it was very useful. The guy was knowledgeable, and he introduced me to Blake Snyder's Save the Cat. That really helped.
Anyway, after that weekend, I spent a long time trying to work out a good outline for the story I had in mind using the Save the Cat beat sheet. Just when I had one done, my daughter suggested another story idea and wanted me to help her write it. I decided to be a good dad and do that. The more I worked on her story, the more it shaped up to be a very good fit for the open movie project. So I decided to go with her idea and base the open movie endeavour on it.
I just finished draft one of the script and I'm currently working on revisions.
The story is about two kids who have to rescue a kidnapped baby and return him to his parents before Christmas day.
I'm still debating whether I should publish the script on this site.
Anyway, my next step is to polish the script and then get a bunch of feedback on it.
Been a while since I updated this blog. This is going to be a long haul project.
I finished a draft of the script, and I wanted someone to go over it. Honestly, it's hard to find someone to collaborate with, so I decided to simply hire someone with writing experience to go over the script and polish it. I met one guy and we got to chatting. He had a few good questions, which motivated me to go back and re-examine the story; sort of go back to basics. I used the story I had already developed as the basis for a new version. I'm planning to spend a couple of days with that writer to fine-tune it and make sure it's complete and captivating.
In the meantime, I ran into a really interesting show called "Undone". It's been made in a really interesting way, which they call rotoscope animation: basically, they film the actors and then paint over them, and create the rest of the elements in 2D or 3D animation. "A Scanner Darkly" was done the same way, I believe. Here is a trailer:
Watching that show got me thinking about creating a new visual style for the series. After going through the animation tests I've done, I came to realize the importance of real actors in getting the emotions right. Especially if I'll be using character generation tools like MB-Lab, I won't have the fine control needed to express the emotions; the tool just doesn't have the rig for it.
What would be ideal is a mix of real actors and 3D animation. But doing something photorealistic is simply beyond my means. I'm currently asking myself this question: is there a visual style which can capture the actors' emotions, but allow me to build a grand world without a billion dollar budget?
Investigating...
Alrighty then... I've reached the end of a milestone. It took a bit longer than I expected, but I got there. I set off to determine what I need in order to actually make an animated short (which turned into a series). I knew I wouldn't be modelling my characters from scratch, so I wanted to use an open source character generator of some sort. The ones available were Makehuman and MB-Lab. After comparing both, I decided to generate most of my characters with MB-Lab. I'm not saying I'll never use Makehuman; I will, especially for younger characters, since MB-Lab doesn't generate characters younger than 18 years. However, I decided to concentrate my effort on seeing how to modify MB-Lab to fit my needs. I ended up creating a few additions to MB-Lab which I needed:
To do both of the above, I needed to do some coding. This took quite a bit of time. You can look through the history in this blog to see what I did. Suffice it to say, it required me to learn how to create Blender add-ons, how to create drivers for shape keys in Blender, and a bunch of extra mumbo jumbo to make the face and phoneme rigs work.
I also wanted to automate the lip-syncing as much as possible. I looked at existing add-ons, but decided to create my own: Yet Another Speech Parser (YASP). This has two components. The first is a C program which uses the pocketsphinx library to parse audio clips and generate a phoneme description in JSON format; basically a JSON file describing the phonemes and their timing. I looked at a couple of speech parsing libraries out there, and I settled on pocketsphinx. The second part is a Python Blender add-on which takes the audio file and a text transcript of it as input and creates the animation. This actually worked reasonably well. It creates a first pass animation, which I then fine-tune. One of the weaknesses of MB-Lab is its lack of finer control over facial expressions. I think Makehuman has a better face rig, but I digress. It is what it is.
Along the way I got sidetracked trying to create an automated facial expression tool, which uses an open source library called OpenFace to analyze footage and transfer the facial expressions from real footage onto the animated character. I did try to use it, but I found I can get better facial expressions by animating by hand. It's actually more fun too.
Once I started doing some simple animation, I quickly realized that the MB-Lab rig isn't very nice. Thankfully, there is an add-on which generates a Rigify rig from the MB-Lab rig. Of course, I had to tweak it to work the way I wanted, and that took a while. But I really like Rigify. It's a nice rig.
It's worth mentioning that I adopted Blender 2.8 while it was still in alpha. So I got involved in porting some add-ons to Blender 2.80, as well as opening tickets and submitting a couple of Python patches. If you fish for my name you'll find me buried somewhere in the commit log.
Once the tools I created were mature enough, I decided to apply them and my skills to an animated scene. I flip-flopped a bit with this. I started off thinking I'd do an independent short, then changed my mind and decided to recreate an existing movie scene. After a bit of thinking I settled on the latter, but which scene should I do? I first thought of animating a Captain America: Civil War fight scene, and I actually started: I created a run cycle and connected it to a car jump, but then I stopped going down that path. I just wasn't into Marvel movies enough to justify spending a lot of time animating one of their fight scenes. I finally decided on a Star Trek: First Contact scene. I'm a trekkie at heart and I'm not ashamed to say it. I love TNG (that's Star Trek: The Next Generation, for you non-trekkies out there). Anyway, I settled on that and got down and dirty with setting up the scene, doing the character animation, etc. Below is a video walkthrough of the final scene I ended up with.
The TNG scene I animated was a training ground to see how I can use the tools at hand to create my series. As far as I can tell, these tools are sufficient to create game-like animated scenes, sorta like this. The quality isn't going to be extremely cinematic, but I really just want to get to creating my story and doing what I love, which is storytelling and filmmaking.
The next step for me is to start creating the animatics for my script. I'm still toying with the idea of pushing the TNG scene a bit further: creating some proper locations, polishing the lighting and atmosphere. The jury is still out on that.
Still working on the practice animation scene. I'm further along now. One thing I'm starting to notice in that scene is the cuts. There are some unnatural cuts that break up the action of the scene. They were done to insert the actors' reactions, but when I try to translate that into one continuous scene, the pauses feel unnatural, so I have to keep adding actions to smooth the transitions.
Just a quick update. I finally decided on which scene I'll animate for practice, and it's this one. As I work through the scene, I figure out what works for me and what doesn't and how to improve my workflow. Here is what I have so far:
To start off, I decided to make a short animated scene based on: https://youtu.be/qXPOl6EjbWg?t=96. The idea is to use it as a training ground for setting up my animation workflow. The first part of this shot is Captain America running towards the Winter Soldier. Decomposing the work further, I decided to make a run cycle. Here is what I have. Animation is tricky; I struggled to get a decent run cycle until I used a reference run. That is the key to getting good animation. What I have probably needs more polishing, but I don't want to get bogged down in the details. I want a passable animation I can refine later once I have a scene put together. We'll see how that works.
I've been working on setting up my animation flow. My end goal is to use MB-Lab to generate all my characters. However, in some cases, especially if I need 3D models of younger children, I'll have to work with Makehuman. For now, I'm concentrating on MB-Lab. To set up a character for animation, I need to do the following:
Here are some videos. They are silent workflows; useful only if you're willing to speed through them and see what I'm doing.
Well, I got around to doing some "modelling". In quotes because I'm using the MB-Lab Blender add-on. I made quite a few tweaks to it, specifically to the eye shader and the face rig. I also went back to the original Manuel Bastioni Lab skin shader; he got it right the first time. The results, to my eyes, are better than what the official MB-Lab has. Anyway, I think I have my main character, Gemma. She still needs hair, but hey, it's 2019 :)
Even though I wasn't really modelling, per se, I had to use a reference for the character. I used her: https://www.instagram.com/mona__hala/
If this ever gets made into a live action series, I'll ask her to play the main part... haha. Anyway, my point is that using a reference gave me a target to shoot for.
I'm getting excited, I think. Got my two year plan roughed out. My objective is to get draft 1 of the animated series done in two years. What does that mean? The way I'm currently thinking about it (and it's very possible this will change as I get more experience) is to finish a non-polished draft of the episodes. Basically, all the elements will be there, but not in their finalized form. For example, there will need to be quite a few ruined buildings. There are a lot of details in a ruined building: rubble, dust, etc. Modeling such a building is time consuming and detracts from progress, I think. My goal is to use placeholders for such detailed models. It'll still look like a ruined building, just not as detailed. I have to see how that'll work with the animation. This might not be a general rule; I might make a lot of exceptions, but it's what I'm thinking now. The goal is not to get bogged down in the details, but rather to work iteratively, starting with a rough draft and polishing it until I get it to where I want.
Here is my current plan. It'll be fleshed out as I move along.
Yup! Can you believe it? I not only have one script, I have an entire season: 10 episodes. Granted, each episode is about 3-5 minutes long, but hey, it's a full story. The way I figure it, what I wrote can be a YouTube/online season, or it can be a 45 minute pilot episode. Since I'm making it, I'll go with the former.
After months of brooding and moaning and chest beating, I sat down and wrote all 10 episodes in two days. Weird, eh?
Here comes my next dilemma. I want to share it publicly, but like any artist I have some insecurities that it'll get torn to shreds. Of course, there aren't that many people (if any) following this site, but I'm still not sure if I should share it publicly.
Here is what I'm thinking. I'll keep it under wraps for now and I'll share the output of the actual production cycle publicly, including all 3D resources, files, etc.
But I'm looking for feedback, so if you're interested in reading the scripts and providing constructive and detailed criticism ping me here.
It's been a while since I updated this blog. I've hit a bit of a lull; work has been taking all my time. I've been thinking, though, about where to go from here. There are two directions: I can keep going down the technical path and try to enhance my facial expression system, or I can start revisiting my story. I really want to produce something creative, so I decided to head down the creative road. I'm working on a short film idea that has some potential. Once it's well formed, I'll share it here. I still like the direction the story I have outlined here is taking, but I'll take a break from it and explore a slightly different world.
My dad used to tell me: write what you know. I never understood what that means. If I only write what I know, I'll write really boring stories. But I think I have modified his advice into something that makes more sense to me: write the characters you know. The most important aspect of a story is the character. If you analyze any movie you like, you'll come to the same conclusion I did. Characters are what draw you into the story and keep you glued to the screen or the pages. They are the ones who make you feel something. You relate to them, and you care what happens to them. The challenges and the "plot" are what bring out the personality of the character. They are the events which show what the character is made of: their courage, their compassion, their fears, their love. And these aspects are what make the viewer identify with your character.
As a writer, I believe the character has to come from a personal place. In essence they are a part of you. If you don't write from what you know, the character you write will feel contrived and fake. But if you write from your own experiences, then they might, just might, be relatable. For this reason, I decided to write a character I have personal experience with. The world I'm thinking of is not a real world. It's a futuristic world, one I haven't seen done before. The character will leave the world she knows behind and be thrust into the other one with no way back. She will need to navigate this new world and rediscover the purpose she thought she lost.
A simple render from the video linked below. I think there is potential for this tool. It won't give you the best results out of the box, but it might give you a solid foundation to build on.
Got the facial expression add-on working. It can take a video file or a FACS CSV file; the CSV file takes precedence if both are present. If a CSV file is given, it parses the file, smooths the data, then animates the expressions. If a video file is provided, it runs it through OpenFace, which produces the CSV file, and the add-on proceeds as described. There are a few parameters presented to the user:
Well, I know I said I might be reaching the point of diminishing returns, but alas, an interesting twist came along. I learned about an open source library called OpenFace which can capture facial expressions, as well as do face recognition, etc. I decided to learn more about it:
Working with NumesSanguis (https://github.com/NumesSanguis), I gained a better understanding of FACS AUs (https://www.cs.cmu.edu/~face/facs.htm) and OpenFace (https://github.com/TadasBaltrusaitis/...). Basically, OpenFace is able to extract features from a video and convert them to FACS AUs. What I'm trying to do is grab the data generated by OpenFace and use it to create facial animation.
https://github.com/NumesSanguis/FACSv... does that.
My thoughts are to streamline the process a bit. The idea is to hit a button and it generates the facial animation.
Still a work in progress. You can take a look at: https://github.com/amirpavlo/BYASP
One cool thing I did is write a script which takes the AU data and smooths it. Moreover, I don't want to insert a keyframe on every frame of the animation; that would be crazy, and would be very difficult to adjust later on. So I got the idea of finding the peaks and troughs of the graph and inserting keyframes only there. I'm pretty pleased with myself... haha. And to prove it works, here is a diagram.
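This isn't the actual code from my script, but here's a rough sketch of the idea: a moving average to knock the jitter out of the raw AU intensities, then keyframes only at the local extrema.

def smooth(values, window=5):
    # Moving average to knock the jitter out of the raw AU intensities.
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def extrema_indices(values, eps=1e-3):
    # Keyframes go only at peaks and troughs; Blender's curve
    # interpolation fills in everything between them.
    idx = [0]
    for i in range(1, len(values) - 1):
        prev, cur, nxt = values[i - 1], values[i], values[i + 1]
        if (cur - prev > eps and cur - nxt > eps) or \
           (prev - cur > eps and nxt - cur > eps):
            idx.append(i)
    idx.append(len(values) - 1)
    return idx

au = smooth([0.0, 0.21, 0.52, 0.48, 0.11, 0.33, 0.02], window=3)
print(extrema_indices(au))  # the frames that get keyframes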
Just completed some modifications to the ManuelBastioni YASP component to add some smoothing to the animation. I think it produces acceptable results for a first pass. Most likely I'll need to jump in and tweak later, but it saves a ton of time compared to going in manually and finding where each phoneme is. If nothing else, the mark pass of the add-on, where it marks the location of all the phonemes, provides value.
I think I've reached a phase with the Automatic Lip Sync Project where any more effort spent tweaking it would be past the point of diminishing returns. The only other thing worth putting effort into is making this a separate add-on and porting it to Windows. I can see how that would be useful when working with Makehuman characters. But I'll cross that bridge when I get to it.
The next step for me is to start using the tools I've created/modified to make a short scene. This will be key in working out the kinks from my workflow.
Here is a video I made playing around with lip-sync. The main purpose here is to show how the lip-sync and the facial animation can be combined. Note, this was recorded in real time, so it has pretty low fps.
YAAY... I have something working. Now, I know you might think it ain't great animation, but in order to get to great, you need to pass this post first. Keep in mind, this is all automated: in a few clicks, you can get a lip-sync. There are still some improvements I'm planning in two areas. 1) The poses: I need to clean up the poses for each phoneme a bit more. 2) Keyframing: I'm going to introduce a polish step to clean up the keyframes. This step will require some trial and error to figure out a bunch of lip-sync heuristics. As an example, I don't have to turn off a pose completely if it'll morph into another a few frames later. Things like that, I believe, will make it look a bit better.
Of course at the end, it'll require some manual clean up to really polish it.
I'm also planning on adding some emotional metadata. This information will translate into corresponding facial expressions and head movements. I'm thinking the input will be a WAV clip and an XML file, which would include the transcript and emotional meta information, sorta like this:
<transcript>
  <panic>We have to get out of here</panic>
  <sad>but with a broken leg he can't come with us</sad>
</transcript>
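If I go down this road, parsing it should be straightforward. Here's a quick sketch of what the add-on side might look like; the emotion-to-pose mapping is the part I'd still have to figure out:

import xml.etree.ElementTree as ET

snippet = """<transcript>
  <panic>We have to get out of here</panic>
  <sad>but with a broken leg he can't come with us</sad>
</transcript>"""

root = ET.fromstring(snippet)
for line in root:
    # The tag is the emotion, the text is the transcript line.
    print(line.tag, "->", line.text)
    # ...map the emotion to a facial expression pose and keyframe it
    # over the duration of this line in the audio clip.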
Anyway, still thinking on it.
Well... I finally have a program that takes a speech clip and breaks it down into its phonemes, with a start time and duration for each phoneme. The JSON file looks like this:
}, {
    "word": "letter",
    "start": 945,
    "duration": 36,
    "phonemes": [{
        "phoneme": "L",
        "start": 945,
        "duration": 6
    }, {
        "phoneme": "EH",
        "start": 951,
        "duration": 7
    }, {
        "phoneme": "T",
        "start": 958,
        "duration": 8
    }, {
        "phoneme": "ER",
        "start": 966,
        "duration": 16
    }]
}]
The next step is to take this information and convert it into animation data for a ManuelBastioniLab character. The code is here. It still needs work to compile and build easily. (Linux only... sorry.)
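To give a taste of what the Blender side would do, here's a rough sketch. I'm assuming here that the timings are in pocketsphinx's 10 ms frames and that the character exposes one shape key per phoneme, named after its label; the real add-on will do more than this:

import json
import bpy

FPS = bpy.context.scene.render.fps

def to_frame(t):
    # pocketsphinx reports timings in 10 ms frames (100 per second).
    return int(t / 100.0 * FPS)

with open("speech.json") as f:  # output of the C parser
    words = json.load(f)

# Hypothetical setup: one shape key per phoneme, named after its label.
keys = bpy.data.objects["character"].data.shape_keys.key_blocks

for word in words:
    for ph in word["phonemes"]:
        key = keys[ph["phoneme"]]
        start = to_frame(ph["start"])
        end = to_frame(ph["start"] + ph["duration"])
        # Ramp the mouth shape in, hold it, then ramp it back out.
        key.value = 0.0
        key.keyframe_insert("value", frame=start - 2)
        key.value = 1.0
        key.keyframe_insert("value", frame=start)
        key.value = 0.0
        key.keyframe_insert("value", frame=end)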
I've hit a rough patch in my energy. The lip-syncing project requires significant development effort. I'm writing a program in C to do the initial lip-sync animation. However, my day job is also a software development job, and I'm having a difficult time spending a few hours programming after having spent 10 hours doing programming work... But I just have to suck it up and get it done. I know what I'm supposed to do.
One good piece of advice I got from a couple of professionals in the rigging/animation field is not to apply the animation directly to the model, but rather to give the animator the option to select which keyframes to insert. I've been thinking of a good workflow for this.
I'm now considering first marking the audio file with the phonemes, then providing a way for the animator to jump to each phoneme, or just use the timeline to scrub to the marker. He'd then hit a button to insert the appropriate keyframes. I think this would be easier to undo step by step if need be. The animator can then fine-tune the shape of the mouth before moving to the next phoneme.
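The mark pass itself should be simple. Something like this sketch, assuming the same JSON layout as in the entry above:

import bpy

def mark_phonemes(words):
    # Drop a timeline marker at each phoneme so the animator can scrub
    # to it and insert the keyframes manually.
    scene = bpy.context.scene
    fps = scene.render.fps
    for word in words:
        for ph in word["phonemes"]:
            frame = int(ph["start"] / 100.0 * fps)  # 10 ms parser frames
            scene.timeline_markers.new(ph["phoneme"], frame=frame)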
I'll have to do some trial and error to find the best workflow.
Alrighty, then. I created a phoneme rig for ManuelBastioni Lab which looks like this:
As you can see, each one of these dials represents the mouth shape formed to make the associated sound. These were taken from the references here and here.
The next step is to write a Blender add-on which takes an audio clip, recognizes the phonemes, and then animates the phoneme rig shown above. Sounds simple, eh? As I mentioned in earlier posts, there is a tool called rhubarb which does a similar job, but you know what, after flip-flopping on whether I should create my own or use rhubarb, I decided to create my own. And so, here is an outline of what the program will do:
Well, OK, rhubarb works on Linux; they just don't mention that in their README.md. That's good. I tried it out and it works... but I think I'll still invest some time in understanding how to use pocketsphinx. Seems like a useful skill to have.
However, to keep moving forward with the Open Movie Project (OMP), I'll see how I can use rhubarb for lip-syncing. There is already a Blender plugin, so I need to see how that'll work and what kind of results I can get out of it. I would like tighter integration with MB-Lab though, to streamline my workflow.
Here are the tasks I foresee:
Been looking at open source Automatic Speech Recognition (ASR) engines. What I want to do is integrate with an ASR engine so I can run speech audio through it and generate timing information on when words (more particularly, phonemes) are spoken. I can then take this information and create a first pass lip-sync animation in ManuelBastioniLab.
I looked at a couple of ASR engines, one called "DeepSpeech" and the other called "pocketsphinx". The former is written in C++ and the latter in C. Man, pocketsphinx is a whole lot easier to understand. And it has the functionality I want. DeepSpeech doesn't produce timing information. Well, it does, but it's not exposed in the API, which means if I want to use it, I'd have to take the initiative and expose this information in the API myself. I actually thought about it, but it's more work than I'm willing to undergo at the moment.
pocketsphinx is used in rhubarb-lip-sync. However, this application runs only on Windows and Mac (as far as I can see), and I'm not a Windows user. I have seen the light and abandoned Windows. In other words, I need something working for Linux, and I'm cool with supporting only Linux.
rhubarb-lip-sync is also a generic application designed to work with multiple different applications, like After Effects. Anyway, the end result for me is to develop an application which works very similarly to rhubarb-lip-sync, but is directly integrated into Blender.
We'll see how that goes.
P.S. If you're interested in learning more about pocketsphinx, here is their wiki.
The Research & Development phase of this project is turning out to be more intense than I first anticipated. I'm concentrating on my character animation workflow at the moment. My goal is to get or create a set of tools which enhance the animation process.
For the last while I have been working on a facial animation rig for ManuelBastioniLab. The original author has decided to stop supporting it and developing further features. A group of people, including me, decided to pick up the slack.
The characters created by ManuelBastioniLab are good and the rig is very usable; however, creating facial expressions requires animating shape keys, which makes them difficult to animate. I decided to create a face rig to drive the shape keys. As usual, the concept sounds easy, but the implementation is riddled with technical details I had to wrap my mind around. It took a bit of time to complete. However, the first version of the rig is now available in the new official MB-Lab add-on, and also in the version of the add-on I'm maintaining, here.
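In case you're wondering what "driving shape keys" boils down to, here's a minimal sketch with hypothetical names: a control bone's local X location driving a "smile" shape key.

import bpy

mesh = bpy.data.objects["MBLabCharacter"].data
rig = bpy.data.objects["MBLabSkeleton"]

fcurve = mesh.shape_keys.key_blocks["smile"].driver_add("value")
driver = fcurve.driver
driver.type = 'AVERAGE'  # the driver value is just the variable itself

var = driver.variables.new()
var.name = "ctrl"
var.type = 'TRANSFORMS'
target = var.targets[0]
target.id = rig
target.bone_target = "smile_ctrl"
target.transform_type = 'LOC_X'
target.transform_space = 'LOCAL_SPACE'

Multiply that by every expression shape key on the character and you can see why it took a while.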
The next project I'm working on is a lip-syncing feature in the Lab, to be able to create lip-sync animation. I've been looking at different Automatic Speech Recognition (ASR) open source software to work with. The idea is to run a speech clip through the ASR engine, which produces timing information on when phonemes are spoken; I then take this data into Blender and create a first pass lip-sync animation. There is Rhubarb and an equivalent Blender add-on that do this, but they're only for Windows and Mac. I'm looking at either porting it to Linux or developing my own. Being who I am, I'm leaning towards developing my own, just to learn how things work.
2018 passed pretty quickly. It amazes me how fast time goes. The passage of time has its advantages and disadvantages. One of its advantages is that it motivates people to accomplish goals. If I had all the time in the world to get something done, it's likely I wouldn't get it done with any urgency. But I know my time is always running out, a fact which motivates me to work and get things done.
Putting philosophy aside, I think this week has been somewhat beneficial. I worked through the story with Nicole a bit more; I think we're working through the plot holes. I've also figured out something new about ManuelBastioni Lab. It has a very extensive set of shape keys, which in theory should be a replacement for a facial rig. You know what that means: I'm using it in my proof of concept short. It's superior to Makehuman. The control it gives you over modifying the character is better, the skin shader is better, the weight painting is a lot better, and the muscle system and rig are awesome. The only disadvantage is it can't be used to create small children, so I'll need to use Makehuman for that.
The next step is to fit the ManuelBastioni character with some clothes, then we're set to start building the location and moving forward with the 1 minute short. I'm excited. Hopefully, I'll have something completed in the first month of 2019.
I'm disappointed Manuel Bastioni has decided to stop supporting the lab, though I understand where he's coming from. No one has shown him any support, and it does take a lot of time to get the lab working properly.
I put some time into porting it to 2.80. I have taken some clothing assets from Makehuman, and I'm planning to make them available for ManuelBastioni Lab. Maybe that'll encourage others to contribute assets, which would be useful all around.
I do think having a facial rig is useful, but I hate weight painting. I need someone to help out with that. Hopefully, we'll be able to get that project completed in 2019.
YAAM seems to be gaining some traction. I put a post on BlenderNation: https://www.blendernation.com/2018/12/28/free-download-asset-manager-add-on/ It got 45 shares... which ain't bad. Another dev is also working on it, so I think it could get some cool features implemented in 2019, which will make it even more useful.
I also made some small contributions to the Blender add-on code base, mainly around porting add-ons from 2.79 to 2.80. Not too shabby. Looking forward to contributing in more significant ways in the future, if I get the opportunity.
Well, signing off 2018. Happy New Year to all. And hope you accomplish your dreams in 2019.
You can read the story here.
As we draw 2018 to a close, I'm updating what I have to do on the Open Movie Project. This is not a small project by any means. It spans many fields, from software development to writing, pre-production work, 3D production, and post production. Bound to keep me busy. But baby steps.
As I mentioned before, I'd like to make a 30 second to 1 minute proof of concept short. This is the first fully animated series I've ever worked on, and I need to figure out a workflow that suits me. A proof of concept should do it. I already have a short scene written up, different from my previous post, and I'm now working on storyboarding. I figure I need to do the following:
Obviously this is all going to be iterative. I need to really pay attention to how the final scene will work to avoid any time consuming re-work.
In terms of software development, there have been a few activities going on:
In 2019 I hope to accomplish the following software development goals:
There will probably be other minor updates I'll have to do over 2019.
Gonna be a busy year. We'll see if we'll have any contributors on the project.
https://github.com/amirpavlo/YAAM
Go get it. Try it out. Leave suggestions.
I'll be using it for the 30 second chase sequence next on my list. I'm pretty sure I'll think up more features to add as I start using it.
The next software project for this Open Movie will be a distributed renderer. I already have a Python command line version, but I want to create a C/C++ one with a GUI. I have a design, but I'll probably want to get some animation going before I tackle that project.
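The Python command line version boils down to something like this sketch: chop the frame range into chunks and hand each chunk to a Blender instance on another machine. Host names and paths are placeholders, and it assumes the .blend sits on shared storage:

import subprocess

HOSTS = ["render1", "render2", "render3"]
BLEND = "/shared/project/shot_010.blend"
START, END = 1, 240

chunk = (END - START + 1) // len(HOSTS)
procs = []
for i, host in enumerate(HOSTS):
    s = START + i * chunk
    e = END if i == len(HOSTS) - 1 else s + chunk - 1
    # blender -b renders in the background; -s/-e set the frame range
    # and must come before -a (render animation).
    cmd = ["ssh", host, "blender", "-b", BLEND,
           "-s", str(s), "-e", str(e), "-a"]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()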
It's a lot of work, especially when I have to learn how to interface with the Blender Python API. It's a lot of visits to the API documentation and looking at existing add-ons to understand how they did things. Not to mention a lot of trial and error... But I'm getting there.
I'll have it done by Christmas 2018... It'll be my Christmas gift to myself! And to whoever wants an asset manager.
I decided what to call it...
YAAM: Yet Another Asset Manager
I think it's kinda clever... yet not completely original :) It's a spin-off of YAML: Yet Another Markup Language.
As I was working through my pipeline, I ran into the first obstacle. I need an asset manager built into Blender. An asset manager is a key piece of the pipeline, needed to stay organized; otherwise, I'll be forgetting where everything is.
I looked around and found one called Asset Flinger. I looked through it and it seemed cool, but it had two problems: 1) it only works with .obj files, and 2) it only works with Blender 2.79. I'm building my entire pipeline around Blender 2.80, so I decided to convert it to Blender 2.80. After converting it, I decided to spring forward and actually write my own asset manager, which I can use for the open movie. The idea is that I should be able to work with different types of assets: obj, 3ds, blend, images, materials, etc.
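Blend files are the interesting case. The piece that makes a .blend-aware asset manager possible is bpy.data.libraries.load(), which lets you append or link datablocks from another .blend file. A minimal sketch, with placeholder paths and names:

import bpy

asset_file = "/assets/props/crate.blend"  # placeholder path

# libraries.load() opens another .blend; whatever you assign to data_to
# gets appended (or linked, with link=True) into the current file.
with bpy.data.libraries.load(asset_file, link=False) as (data_from, data_to):
    data_to.objects = list(data_from.objects)

# The appended objects still need to be linked into the scene.
for obj in data_to.objects:
    if obj is not None:
        bpy.context.scene.collection.objects.link(obj)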
I forked the Asset Flinger repository. The 2.80 compatible add-on is available here.
To be honest, I found one for 2.80 on Blender Market for $40, but I decided against buying it. I want to build my own. I'm now 100% in software development mode.
I had only dabbled with Blender add-ons before, nothing serious. Writing this add-on has been educational for sure. It's been 3 days of working on it. I've got the interface there and am currently working on the functionality. Here is a PDF of my design document, and below are a couple of screenshots. Once it's complete I'll upload the code to GitHub.
Although I started with the Asset Flinger code, what I'm writing is its own add-on, and I'll create a new repository for it. I felt I should clarify this, since the images below carry the name "Asset Flinger", but the final add-on will not be named that.
Been a long week at work; I hadn't had a chance to do much on this project. But now I have. I'd like to work out the kinks in my animation workflow. To do so, I'm going to make a short scene with multiple shots, mainly an action sequence. I figure if I'm going to do something, I might as well do something difficult and hone my skills. Here is what I came up with.
Story
Just thought it would be cool to share this video. Love Blender 2.8: the interface, the way it works... Thanks to Andrew Kramer, who introduced me to Blender back in 2006. It was still Blender 2.3 (or 2.4, can't remember) back then... How far it has come. As a guy who works on open source, I appreciate the amount of effort and time it takes to develop software like this. Keep it up, Blender Foundation. If you use Blender, think about making a monthly contribution.
I think there is an improvement, no? Going forward, though, I'll probably just render my test animations in EEVEE. Cycles is a lot more time consuming, but everything looks better in Cycles. I think EEVEE needs a different material setup. The new blend file is here.
Have to admit, I'm a bit rusty with animation. The last Blender animation I did was on "Your Song". Anyway, I decided to get my hands a bit dirty and did a quick walk cycle. It's still not perfect; it needs some smoothing and overlapping action. It's almost pose-to-pose at the moment. I'm going to improve on it over the next couple of days, then do a couple of other walk cycles. I'll be using Kevin Parry for reference. I think he's awesome. Here is his walk cycle reference video. The next step is a short 30 second scene to work the kinks out of my workflow.
I rendered the scene twice, once in Cycles and once in EEVEE. I still think Cycles is superior, but I'm guessing EEVEE will need its lighting tweaked to work around its limitations. I uploaded the Blender files here. You might need to re-map them, because one file links to the other. The textures might also need to be remapped.
I still think Cycles does a clearly better job rendering in low light; it mimics the shadows properly. I admit the materials are still not quite there, but even without great materials Cycles does a pretty good job. I'm starting to get the feeling that I might want to render the final movie in Cycles. I can always have two versions; the rough copy can be in EEVEE.
I updated the camera_dolly_crane_rigs.py to 2.8.
This add-on creates some nice controls for the camera to make it easy to operate.
I was feeling a bit lost jumping around between different tasks, which is not an issue in and of itself, but without a plan it could lead to not accomplishing much. I decided to step back and put a task list together using a mind mapping tool. This task list will be continuously changing. Unfortunately, I don't have a way to make the mind map viewable on the site, but here is a link to download it. You can use "freeplane" on Linux or "freemind" on Windows to open it.
You can visit this page for a text based version of the map.
Okay, I learned this the hard way. When you export from Makehuman, the default pose for the character is the "A" pose. If you change it to the T-pose and import into Blender using MHX2, the bones are kinda screwed up; it looks like the axes are flipped. I tried to fix it with no luck, so I chose the simpler method.
My solution is to change the A-pose to the T-pose in Blender and set that as the rest pose. I think it should work; I have to try to parent the clothes next and see how that goes.
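For the record, the apply-rest-pose step boils down to something like this sketch (the rig name is a placeholder):

import bpy

rig = bpy.data.objects["makehuman_rig"]  # placeholder name
bpy.context.view_layer.objects.active = rig

# ...rotate the arm bones from the A-pose into a T-pose here, then:
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.armature_apply()  # Apply Pose as Rest Pose
bpy.ops.object.mode_set(mode='OBJECT')

# Caveat: meshes bound to the armature need their armature modifier
# applied (or re-bound) first, or they'll jump when the rest pose changes.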
Some technical mumbo jumbo to follow.
If you need to import a Makehuman character using MHX2, you'll need to follow the instructions here.
The script provided works fine for 2.79, but the API has changed in 2.80, so you'll get a Python error when importing. I fixed the script. I wanted to push a patch to their repository, but I don't know how. Until I figure it out, here is the diff. You can apply it to your version of the MHX2 script under 'scripts/addons/import_runtime_mhx2/utils.py' to get it working:
diff utils.py ../../../../2.79/scripts/addons/import_runtime_mhx2/utils.py
109c109
< ob.select_set(value)
---
> ob.select_set(action=('SELECT' if value else 'DESELECT'))
136,137c136,137
< ob1.select_set(False)
< ob.select_set(False)
---
> ob1.select_set(action='DESELECT')
> ob.select_set(action='SELECT')
I'm starting to realize the limitations of the Makehuman rig.
I spent the last couple of days experimenting with Blender 2.8, rigging and such. It feels like I didn't accomplish much, but I was able to get a Makehuman model looking, meh. I took some elements from the Manuel Bastioni skin material and used them on the model. I rendered two images, one in EEVEE and the other in Cycles. I want to finish my face rig for the Manuel Bastioni model. I'll probably do that and then create a bunch of facial expressions, just for some animation practice. Or I might make the Makehuman face rig a bit more user friendly, do some facial expressions, and then move on to finishing the Manuel Bastioni face rig.
Here are my renders so far.
Well, using the Autorig add-on or even the Rigify face rig was a failed experiment. Either way, I think it'll be a good idea to refresh my rigging skills. I have done a few rigs before, but I'm far from a pro, and I'm out of practice. Therefore, I'll take the plunge and get better acquainted with rigging. I'm not looking forward to weight painting; it seems like a lot of trial and error. I updated the Resources page with more courses on rigging. These are the best I've run into.
Okay, I think it's great. It produces really nice characters and the rig is awesome. The only problem is that it doesn't have a facial rig. I'm gonna have to try to create my own using the Autorig add-on. It's a paid add-on, but not expensive at all. Let's see if I can create a hybrid with the rig Manuel Bastioni provides. Stay tuned.
I had a great discussion with Nicole Panton on the story and how to improve it. Notes are here.
Rigging is a pain in the butt. I've been looking for the easiest way to go about it. It looks like using a Makehuman model and importing it with the mhx2 script (documentation here), with the settings shown in the attached image, gives me what (I think) I want.
The MHX body rig is actually pretty good, as far as I can tell. Besides, there is a facial rig with pretty decent weighting, which should allow me to make pretty good facial expressions. I'm gonna have to test it some more. I'll be adding some screenshots, and I'll upload the Blender file as soon as I have the rig working.
It'll need some modifications. I'll have to create a better way to control the face. It's finicky at the moment.
The next challenge is going to be clothing. Makehuman clothing is not great, so I'll need to see what I can come up with.
I also need to test the Manuel Bastioni lab. The issue with that is the developer has stopped the project, so it's not going to be improved on.
UPDATE:
Found this add-on, which allows using Rigify with the Manuel Bastioni lab. I'm still looking to see if there is a solution for having a face rig with Manuel Bastioni.
Well, that says it all. I've been trying to do a test animation. To do that I need to get the character rigged properly, and it's freakin' hard. I'm drowning in a sea of bones, facial rigs, drivers and shape keys. Just gotta keep at it.
I got my first tidbit of feedback today. It was simple and to the point, but I think it's valid and worth thinking through: the Chaos character needs to be developed further. I agree. I have a few questions about Chaos:
When I think of Professor Chaos, I think of Doctor Horrible in "Dr. Horrible's Sing-Along Blog".
Basically, Chaos is aptly named. He wants to plunge the world into chaos. He has been betrayed by the system (I still need to determine how) and now he wants to destroy the system completely, preferring chaos over order. I will spend some time fleshing this out.
I've been watching a lot of animated films lately; first to decide on the style I want, and second to set my expectations. This is going to be a long haul project. Hopefully the quality will improve over time, especially if other artists participate. But as it stands, I'll be generating my characters using Makehuman, Mixamo or ManuelBastioniLAB. Initially, I wanted to use cartoon characters similar to Disney films like Tangled or Inside Out. However, the only character generator for that style is CG Cookie's Flex Rig. I like the characters it produces, but the clothing is not great. I could go down the road of modeling clothes for the characters, but as I continued writing the story, the cartoon look didn't seem to fit how the story is turning out.
It appears I'm settling on a mix between cartoon and realistic looking characters. These can be produced with the character generation packages I mentioned previously, and still fit the story. I'll probably need to rig them myself using Blender's Rigify.
My hope, then, is to have quality somewhere in the same ballpark as video games:
These are still pretty high quality, so we'll see how it goes. Animation plays a big part in how real something looks. Some of the things I'll defer until later in the project are clothing and hair simulation. I'll start by using mesh clothing, then as I get better and faster I'll add clothing and hair simulation.
Of course all this is up for change. I'm still at the very beginning.
I added all my scribbles to the site here. This is my usual process: I just start writing random thoughts until something clicks. Then I ask myself questions like: What's the world like? Who are the characters? What are the relationships between them? Why do they like or hate each other? And so on. Then I answer these questions. Out of these random ideas I try to write a 10,000 foot view of the story, then I keep iterating through it, adding more details in each pass. Currently, I'm outlining the episodes. After this, I'll do another pass to refine and add more details and hidden themes, re-write some of the episodes, etc.
I moved the below paragraph from my initial entry on this page. I tried to keep the first entry as an introduction without any philosophical thoughts. But below is my reasoning for doing this project and how it evolved from a short film to an episodic series.
----
The main reason I was looking to do something short was that I feared not getting the project done. I hate starting projects and not finishing them; it gives me a sinking feeling in my stomach. I asked myself: what's the motivation for doing this project? Do I want to be a famous filmmaker? It would be nice, but no. After thinking about it for a while, the only reasonable answer was to make something I enjoy. To be honest, I want to do something I'm passionate about, get good with Blender, and express my creative energy. If that results in a bunch of people watching it, then cool. If not, then at least I'll have made something I'm proud of. People's opinions should be secondary. That doesn't mean I don't take others' input seriously, but I don't want people's opinions to shape my path.
I ended up settling on a long form episodic series. The season would tell an entire story, but it will be divided into a series of episodes of between 7 and 10 minutes each.
I'm currently writing the story and we'll see how it goes.
----
There are a few resources to keep track of. I'm not a modeller and I don't want to spend time modeling characters when I can use a good character generator. At first I was thinking of using the CGCookie Flex Rig, but then I wanted a slightly more realistic model. There are three tools to consider:
I quite like Fuse 3D; the models are not very high poly, but they still look good. Hopefully, I'll have some animation tests happening soon. I'll have to rig them first though.
I'm gonna geek out a bit here. Blender 2.8 is shaping up and I'm pretty excited about it. The coolest feature I'm looking forward to is EEVEE, the built-in real time renderer. I already tried it and it gives pretty good results... And did I mention... it's freakin' real time. What does that mean? It means I can work on the animation and see the final EEVEE result in real time. Of course, Cycles would probably provide higher quality results, but at the cost of compounded render time. So I'm planning to set up all my effects, textures and materials in both EEVEE and Cycles.
Of course, there are some disadvantages I have to be aware of. First off, Blender 2.8 is still beta, so there are bugs. However, the official release is coming out early next year, so good news there. Many add-ons are still not Blender 2.8 compatible. This is going to be a pain. There are a few add-ons I use which probably won't work in Blender 2.8.
My workflow will need to accommodate both Blender 2.79 and Blender 2.8. I'll have to use some of the add-ons in Blender 2.79 and then import the results into 2.8. I'll also look at modifying the add-ons to work in Blender 2.8, which means I'll have to get down and dirty with Python programming. Should be interesting.
I've been trying to come up with a workflow for this project to keep myself organized. Here is what I came up with so far... Nothing earth shattering, but at least it's a structure.
I suspect this project might take 2 years or so.