I'm not a professional in VFX, but I work in television and do a lot of VFX/3D work on the side. The quality isn't amazing, but it looks like this could be the start of a Midjourney-tier VFX/3D LLM, which would be awesome. For me, this would help bridge the gap between having to use/find premade assets and building what I want.
For context, building from scratch in a 3D pipeline requires you to wear a lot of different hats (modeling, materials, lighting, framing, animating, etc.). It takes a lot of time not only to learn each of these hats but also to use them together. The individual complexity of those skill sets makes it difficult to experiment and play around, which is how people learn software.
The shortcut is using premade assets or addons. For instance, being able to use the Source game assets in Source Filmmaker, combined with SFM running on a familiar game engine, makes it easy to build an intuition for the workflow. That's what makes Source Filmmaker accessible, and it's why there's so much content out there made with it. So if you have gaps in your skillset or need to save time, you'll buy/use premade assets. This comes at a cost of control, but that's always been the tradeoff between building what you want and building with what you have.
Just like GPT and DALL-E built a bridge between building what you want and building with what you have, a high-fidelity GPT for the 3D pipeline would make that world so much more accessible, and would bring the kind of attention NLE video editing got in the post-YouTube world. If I could describe in text and/or generate an image of a scene I want, and have a GPT create the objects, model them, generate textures, and place them in the scene, I could suddenly just open Blender, describe a scene, and experiment with shooting in it, as if I were playing in a sandbox FPS game.
I'm not sure if MeshGPT is the ChatGPT of the 3D pipeline, but I do think this kind of content generation is the conduit for the DALL-E of video that so many people are terrified and/or excited about.
I think producer roles are a bit less ultra-competitive/scarce, as they are actual jobs where you have to use Excel and do planning and budgeting.
Being a producer means being on the phone all the time, negotiating, haggling, finding solutions where they don’t seem to exist.
Be it in TV, advertising, or somewhere else in the media space, the common pattern is that producers are mostly terrible at their jobs; that's my experience in London. So if she's really good, really dedicated, and learns the job of everyone on set, I'd say she has a shot.
The real secret to being good in filmmaking is learning everyone else's job. The Toyota Production System makes the same point: if you want to run a production line, you have to know how it works.
If she wants to do VFX production, she could start doing her own test scenes, learning the basics in Nuke and Blender, and even understanding the role of Houdini and how that works.
If she does that, any company will be lucky to have her.