
  • Comparing the Cinematic Workflows of UE and Unity


    Trying to come up with some content for my blog, I figured I’d try recreating in UE a cinematic I’d already made in Unity. When the starting point is just making a cinematic, without a game to consider, UE would nowadays be the more probable choice for the job, thanks to its superiority in photorealistic lighting. However, I’ve worked on cinematics with both engines, and wanted to get into the weeds of their strengths and weaknesses regarding the basic tools that making a cinematic revolves around.

    For some background: I used to do 3D modeling, textures and some concept art for games, and in recent years I’ve continued that with a game project I’ve been slowly trudging along on, alongside freelance work making game trailers.

    Below is a split-screen render of the two versions of the cinematic I made in the engines.

    Split screen with both renders combined, here’s Unity and UE separately. Music by Olli Oja.

    A simple cinematic in a dimly lit, foggy setup like this can be made to look pretty much the same. However, digging deeper topic by topic reveals more of the limitations and possibilities of the engines.

    Versions I used:

    • Unreal Engine 5.5
    • Unity 6.0 with HDRP

    The topics covered are determined by their relevance to cinematics and, to some extent, by what I found interesting to explore. No doubt there are features I don’t get into that are also important for cinematics, like terrains, UE’s MetaHuman tools and assets, and particle systems, to name a few.

    1. Structure of the sequence
    2. Cameras
    3. Animating characters
    4. Lighting
    5. Testing reflections
    6. Post processing effects
    7. Playing particle effects from timeline
    8. Rendering the sequence
    9. Working with assets
    10. Conclusion

    1. Structure of the sequence

    To start with the basics, shots and animation tracks can be arranged in a similar way in UE’s “Sequencer” tool and Unity’s “Timeline” tool. Animation tracks can be placed either on the main timeline where you have your shots, or inside the nested timelines of the shots. Looking closer, however, there are some differences in details like track types and how you access the keyframes.

    Track types

    List of track types:

    UE                      Unity
    ----------------------  ---------------------------
    Shots                   Control Track
    Folder                  Track Group
    Object Binding Track    Activation Track
    -                       Animation Track
    Event Track             Signal Track
    Audio Track             Audio Track
    -                       Playable Track
    -                       Visual Effect Control Track
    Camera Cut Track        -
    Subsequence Track       -
    Time Dilation Track     -
    Fade Track              -
    Level Visibility Track  -
    Data Layer Track        -
    Media Track             -
    The track types available out of the box, with corresponding ones on the same row. The list can be expanded with plugins. On the UE side, the list doesn’t include “sub-tracks” that would be added inside the Object Binding Track.

    Clarifications:

    Object Binding Track (UE)

    Can be “Spawnable” (the object is spawned by the sequence only when needed) or “Possessable” (the object exists in the level regardless of the sequence). Contains both the activation and animation keyframes of that object.

    Event Track (UE) / Signal Track (Unity)

    Fires events from a specified Blueprint (UE) or script (Unity). A small Unity-side sketch follows these clarifications.

    Control Track (Unity)

    Nested timelines are placed in a Control Track, so it’s the equivalent of the “Shots” track in UE, which is added automatically when you create the sequence.

    Camera Cut Track (UE)

    An alternative way of handling camera cuts (instead of having the cameras in their nested clips). You can have cameras that exist in the sequence the whole time and cut between them. Instead of hard cuts, the changes can also be smooth transitions between the cameras. The same thing can be done with Unity’s free Cinemachine package.

    Fade Track (UE)

    Fades the entire screen to any solid color.

    Time Dilation Track (UE)

    You can add keyframes for the playback speed of the cinematic.

    Subsequence Track (UE)

    A sequence can contain nested sequences, which allows several artists to work on the same sequence.

    Level Visibility Track (UE)

    You can control which level (scene) is visible at the time, but the ones you switch between would have to be loaded all the time.

    Data Layer Track (UE)

    Switches the visibility of data layers.

    Media Track (UE)

    Used for showing or controlling the playback of images or videos from the sequence.

    Playable Track (Unity)

    You can create a custom “track type” that affects some variables and behaviors of your choosing, saving you some time compared to animating the normal way with an animation track.

    Visual Effect Control Track (Unity)

    Allows you to play an effect made using the VFX Graph through the timeline (so that it works also when previewing).
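
    As promised above, here’s what the Unity side of the Event/Signal row boils down to: a Signal Emitter marker on the Signal Track fires a SignalAsset, and a SignalReceiver component on the bound object maps that asset to a public method on a script. A minimal sketch (the class and method names are hypothetical):

    ```csharp
    using UnityEngine;

    // A SignalReceiver component on the same GameObject maps a SignalAsset
    // (fired by a Signal Emitter marker on the Signal Track) to this method.
    public class CinematicEvents : MonoBehaviour
    {
        // Wired to the signal in the SignalReceiver's inspector.
        public void OpenDoor()
        {
            Debug.Log("Signal received from the timeline");
            // ...trigger whatever the cinematic needs at this point.
        }
    }
    ```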

    Differences:

    • UE’s Object Binding Track combines Unity’s Activation and Animation tracks, allowing you to adjust the time span of the object being spawned, and the keyframes of its properties, from the same place.
    • UE has many more track types, which I’m sure can be handy in some very specific situations.
    • Unity’s Playable Track allows you to craft your own track type by coding, so that you can easily animate the things you want from the inspector. I didn’t try it myself, but the sketch below shows roughly what it involves.
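
    A custom track involves three small pieces: a clip asset, a behaviour evaluated each frame, and the track itself, which binds to a scene object. A minimal sketch, driving a Light’s intensity (all names are hypothetical, and this is just one way to structure it):

    ```csharp
    using UnityEngine;
    using UnityEngine.Playables;
    using UnityEngine.Timeline;

    // Clip asset: holds the value and produces the behaviour below.
    public class LightIntensityClip : PlayableAsset
    {
        public float intensity = 1f;

        public override Playable CreatePlayable(PlayableGraph graph, GameObject owner)
        {
            var playable = ScriptPlayable<LightIntensityBehaviour>.Create(graph);
            playable.GetBehaviour().intensity = intensity;
            return playable;
        }
    }

    // Behaviour: runs while the clip is active, in preview as well as play mode.
    public class LightIntensityBehaviour : PlayableBehaviour
    {
        public float intensity;

        public override void ProcessFrame(Playable playable, FrameData info, object playerData)
        {
            if (playerData is Light light)
                light.intensity = intensity;
        }
    }

    // Track: binds to a Light and accepts the clip type above.
    [TrackBindingType(typeof(Light))]
    [TrackClipType(typeof(LightIntensityClip))]
    public class LightIntensityTrack : TrackAsset { }
    ```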

    Nested timelines

    Both engines can have separate timelines nested inside a master timeline. In UE, the UI pretty much handles adding some of them by default, but in Unity, you’d need to know to build that hierarchy yourself, by adding a Control Track and dragging other Timeline objects containing the separate shots into it (or by scripting it, as sketched below).
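
    If you’d rather script that setup than drag things by hand, the Timeline API can create the Control Track and its clips. A rough, hedged sketch (the clip’s shot object still has to be bound as an exposed reference on the PlayableDirector):

    ```csharp
    using UnityEngine.Timeline;

    // Rough sketch: add a Control Track to a master TimelineAsset and
    // create one clip on it for a nested shot.
    public static class ShotTrackBuilder
    {
        public static TimelineClip AddShotClip(TimelineAsset master, double start, double duration)
        {
            var track = master.CreateTrack<ControlTrack>(null, "Shots");
            var clip = track.CreateClip<ControlPlayableAsset>();
            clip.start = start;
            clip.duration = duration;
            clip.displayName = "Shot 01"; // hypothetical shot name
            return clip;
        }
    }
    ```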

    Dividing the tracks (especially the cameras) into separate timelines helps keep things tidy and saves some time.

    When I was working on the original cinematic in Unity, I only worked with one big master timeline, and ended up having to fix some “flash frames” where an object was enabled or disabled, or a character teleported slightly out of sync with the cut.


    Just an example of a Shots track (UE) and a Control track (Unity), and a nested clip in both engines.

    Both engines also allow you to group tracks into folders:

    Accessing the keyframes

    In UE, you can simply expand the animation track of an object to reveal the animated properties and their keyframes.

    In Unity, after recording some keyframes, you have to right-click the recorded track and select “Edit in Animation Window”. This opens a separate panel where you can adjust the keyframes by hand.

    Accessing the keyframes to change their timing or properties.

    2. Cameras

    Unreal Engine seems to have more advanced controls for animating cameras out of the box. In Unity, to reach a similar range of options, the free Cinemachine package would be needed. With my simple cinematic, the tools that come with Unity were sufficient, but I’ve also explored the options enabled by Cinemachine in this section.

    Moving the camera

    In both engines, you can move around the scene by holding the right mouse button and using WASD. In Unreal Engine, you can also lock the camera to the viewport to move it directly. In Unity, you’d position the Scene view how you want it, select the camera, and choose Align With View from the GameObject menu (or use the shortcut Ctrl + Shift + F).

    It’s a small difference, but there’s quite a lot of tweaking and animating camera angles involved with cinematics. I think not having to really think about it in UE lets you focus more on what you’re trying to do.

    From the user experience standpoint, I think there’s also another detail about the viewport that in my opinion makes working in UE smoother: You can select objects directly from the camera view you use to preview your cinematic. In Unity, you’d have to switch between the Scene view you use to select objects, and the Game view you use to preview the cinematic. In many cases, it can be handy to have those be separate views on the screen, but talking about just cinematics, I think you usually want to use all your available resolution on one viewport.

    Camera motion beyond the basics

    If you import the free Cinemachine package for Unity, using its cameras on the Timeline sequence instead of regular ones enables these features (also available in UE without plugins, though without the damping controls); a small script sketch follows the clips below:

    • Setting the camera to target a moving object with damping controls
    • Constraining the camera’s position or rotation to a moving transform, also with controls for damping
    • Adding position and rotation transitions between cameras by having a bit of overlap on their timing on the track.
    • Animating a camera along a “dolly track” (which just makes controlling the speed easier than by using regular position keyframes)

    Adding cameras and transitions between them with the Cinemachine package in Unity.

    Using the Dolly Camera, as it’s called in Cinemachine, or Camera Rig Rail in UE, as well as Look At, in both engines.
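
    As mentioned above, these behaviours can also be wired from script as well as from the inspector. A minimal sketch, assuming the Cinemachine 2.x API (some class names changed in Cinemachine 3):

    ```csharp
    using UnityEngine;
    using Cinemachine;

    // Aim a virtual camera at a moving target; the Aim stage's damping
    // settings then smooth out the rotation.
    public class AimAtGuard : MonoBehaviour
    {
        public Transform guard; // hypothetical target

        void Start()
        {
            var vcam = GetComponent<CinemachineVirtualCamera>();
            vcam.LookAt = guard; // drives the Aim stage (e.g. Composer)
            vcam.Follow = guard; // drives the Body stage, if positioned procedurally
        }
    }
    ```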

    Depth of Field

    In both engines, depth of field can be adjusted either from the camera itself or from the post processing volume. Using the latter, there’s freedom to add more blur than would be physically possible.
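
    On the Unity side, the post-processing route means a Depth Of Field override on a Volume, which can also be reached from script. A minimal sketch, assuming HDRP and a profile that already contains the override:

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    // Adjust the Depth of Field override on an HDRP Volume from script.
    public class FocusPuller : MonoBehaviour
    {
        public Volume volume;
        public float focusDistance = 3f;

        void Start()
        {
            if (volume.profile.TryGet(out DepthOfField dof))
            {
                // Depending on the HDRP version and settings, the focus
                // distance may instead be read from the physical camera.
                dof.focusMode.value = DepthOfFieldMode.UsePhysicalCamera;
                dof.focusDistance.value = focusDistance;
            }
        }
    }
    ```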

    In UE, there’s a handy sampler tool for adjusting the focus distance from the camera, as shown below.

    Adjusting focus distance in UE.

    Judging by the test scene below, at first glance the blur looks great on both engines (in Unity, you’d just have to set the quality to High). However, looking closer, there are differences.

    In the foreground blur of the Unity screenshot below, on the top right, you can see kind of a “halo” of blur around the closest pressure tank, while in Unreal Engine, the border between the blurred object on the foreground and the wall structures in focus behind it looks natural.

    Also, it doesn’t really show in the screenshots very well, but the DOF in Unity applies a consistent blur to the near and far out-of-focus areas, no matter how far the objects are from the focus distance. In UE, there’s a natural gradation in the level of blur depending on the distance.

    3. Animating characters

    Animating characters using existing animation clips works similarly in UE and Unity, allowing you to add compatible clips on the timeline easily. To add transitions, in both engines you simply drag the clips so that they overlap.

    Adding clips and transitions in both engines.

    Layering Animations

    Both Unity and UE allow you to extend “base” animation with additional motion on the timeline, but the methods and what you can achieve with them are a bit different.

    In short, Unity lets you define a part of your character to receive its movement from another animation clip, while UE allows you to add a separate animation layer where you can animate any bones you want with keyframes, without canceling out the motion the bones already had.

    In the video below, I froze the arms of the walking guard in Unity by adding an Override Track with an Avatar Mask restricting it to only affect the arms. On the override track, I placed an animation clip with the carrying pose.

    Adding an Override Track in Unity.

    To wholly replace the motion of some bones in a similar way in UE, you’d have to bake the keyframes and modify the separate keyframe tracks of the bones (at least according to my testing and what I’ve been able to find).

    On the other hand, UE lets you add additive motion, which Unity doesn’t seem to currently support in the Timeline (although additive layers can be used in Animation Controllers, used to animate characters in game).

    Here you can see how I made the walking guard turn its upper body and head while walking in UE, without canceling out the slight bobbing motion of the upper body.

    The “Layered Control Rig” features that make this possible were introduced in UE 5.4. This video (by the product owner of Sequencer) explains those features in depth. Especially the Ground Alignment Layer described from 7:00 onwards (though it would require using an actual UE control rig) and the summary of the new features shown by examples at the end are worth checking out.

    Baking keyframes (UE)

    In UE, you can “bake” the track of animation clips on your timeline, to adjust the keyframes separately by hand.

    For me, the resulting track is called FKControlRig (which is some default name from UE). It bakes the keyframes even for the weirder bones in the bird feet that are definitely not part of a standard humanoid rig.

    I baked the whole sequence of animations on the main character to fix some clipping issues and jerky motions caused by some inaccurately placed transitions. The video below shows some of the process of one of those fixes.

    However, baking the keyframes is a bit of a destructive workflow that makes it difficult to go back to adjust the sequence of animation clips. When doing the bake, the separate clips are stored on a hidden track for safekeeping, but if you want to change their order or timing later on, you’d have to abandon the changes you’ve done on the baked keyframes.

    On the Unity side, baking the animation clips of a track down to keyframes isn’t possible without plugins. It might be possible to record all the character movement onto a single track with Unity Recorder, but I didn’t try it out, as in most cases, just modifying the separate animation clips in something like Blender would be much handier.

    Root motion

    Root motion means that the animation itself moves the character from A to B. This helps avoid foot sliding, which easily happens when trying to match the movement speed to the animation.

    Both the Sequencer of Unreal Engine and Timeline of Unity have tools to adjust the offsets between root motion clips so that the next animation starts where the previous one ends.

    Since I don’t normally animate with root motion, I put my character through the automatic rigging of Mixamo and downloaded some animation clips with root motion from its selection. With those, I made the same sequence in both engines.

    Using root motion clips in UE

    In UE, there’s an option that tries to match the position and rotation of any bone you choose to what it was at the end of the previous clip to achieve a seamless transition. In a walking or running loop for example, the foot on the ground during the transition would be a good choice. It doesn’t seem to work every time but there are also manual offsets you can add to make corrections.

    Building a sequence of root motion animation clips in UE.

    Using root motion clips in Unity

    In Unity’s timeline you can move and rotate the next clip to start at the right place by hand (or use numeric offsets). Although it doesn’t calculate them automatically, it’s still pretty quick to do.

    Unity is better at handling looping animations with root motion on the timeline. Assuming you have the correct settings for root motion in the animation clip, the next loop starts where the previous one ends without having to add offsets as in UE.

    Building a sequence of root motion animation clips in Unity.

    Animation retargeting

    With Unity’s Humanoid Rig, you can pretty easily set up your rigged character asset so that it can use any humanoid animations in the project (which also have to be set up to use the Humanoid Rig), even if the naming of the bones is different between the rigs.

    In Unreal Engine, instead of having animations be interchangeable through one “basic humanoid” rig defined by the engine, you can retarget specific animation clips to work with a different rig. As a downside, it results in a lot of files, but as an upside, the same retargeting can be done for non-humanoid rigs as well.

    The process of retargeting seems to have many more steps (and pitfalls for a beginner like me) in UE. That’s something that seems to be pretty recurring across different features in the engine.

    Retargeting animations in UE

    From UE 5.4 onwards, there are two ways to do this: a newer, more automated method, and the older one using IK Rig and IK Retargeter assets.

    The new way didn’t work for me, probably due to my character rig being too different from the ones UE recognizes. Using the old way, I got retargeting to work after some trial and error (caused by having exported the character at the wrong scale at first), with just one small problem left unsolved: the feet and hands (not part of the retargeted bone chains) somehow seem to be scaled slightly bigger by the animations.

    Retargeting animations of the smaller character to be used by the bigger one in UE, using an IK Rig and IK Retargeter asset for each.

    Retargeting animations in Unity

    In Unity, it’s as easy as setting up both character rigs to be Humanoid rigs, so they can use all Humanoid animations in the project.

    Control Rig (UE)

    Unreal Engine offers the option of adding a control rig with easy handles for animating the character in-engine with IK.

    UE 5.4 introduced something called “Modular Control Rig”, which in the optimal case allows you to add the control rig quickly by drag and drop. In my case, I had to solve some scale problems before I got it to work (and on some exports, the bones twisted in weird ways so it seems to be quite picky about the rig you use). Anyway, even if the setup work takes time, being able to animate a character in engine can definitely come in handy.

    Below are timelapses of animating a backflip for both the UE mannequin and the seagull character. For the seagull, I used an old way of attaching it to the existing control rig of the UE mannequin, not the new Modular Control Rig (since I didn’t know about it at the time).

    I couldn’t animate the bird’s toes since they’re extra bones added to the human skeleton. That would’ve probably required delving into the blueprint (or might’ve been easy to do with the Modular Control Rig).

    Animating with a control rig in UE, using both the UE mannequin and a custom character model.

    Things I like about the UE Control Rig:

    • The IK/FK switch in each limb is a checkbox, and when you switch it on or off in the middle of animating, the pose of the limb stays the same.
    • There’s an IK toggle for the spine, which allows you to animate the whole spine by rotating just the chest bone.
    • Similarly, the head also has an IK toggle, which allows you to move the head without stretching the neck or having to rotate it separately.
    • The IK hint objects of the limbs mostly stay where you need them to be (but their positions can be adjusted and animated at will).

    Trying the same with Unity’s Animation Rigging package

    The Animation Rigging package contains the basic constraints you’d use to set up a control rig in another program, like 2-bone IK and ones that allow you to copy transforms from another bone.

    At first I didn’t write about it, since I thought the constraints only work in Play mode. However, after someone on Reddit reminded me of its existence, I looked into it more and read that they don’t require Play mode if you have the Animation panel open and Preview toggled on.

    I set up a control rig to test it, adding IK for arms and legs with their control bones. However, when I moved the pelvis bone, it quickly snapped back to its original position in the next frame.

    It might be something wrong with my setup, but I also couldn’t find any YouTube tutorials about making a whole animation for a 3D character from scratch using the constraints, which would indicate that the package isn’t meant for making a full control rig. I think it’s mostly for adding small tweaks to existing animations, and for procedural animations controlled from code.
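
    For context, this is roughly what wiring one of the package’s constraints looks like when done in code instead of the inspector. A hedged sketch of a two-bone IK setup (all bone and object references are placeholders):

    ```csharp
    using UnityEngine;
    using UnityEngine.Animations.Rigging;

    // Builds a two-bone IK constraint for one limb at runtime.
    // Assumes a RigBuilder component next to the Animator.
    public class ArmIKSetup : MonoBehaviour
    {
        public Transform root, mid, tip; // upper arm, forearm, hand bones
        public Transform target, hint;   // control objects you animate

        void Awake()
        {
            var rig = new GameObject("ArmRig").AddComponent<Rig>();
            rig.transform.SetParent(transform, false);

            var ik = rig.gameObject.AddComponent<TwoBoneIKConstraint>();
            ik.data.root = root;
            ik.data.mid = mid;
            ik.data.tip = tip;
            ik.data.target = target;
            ik.data.hint = hint;

            var builder = GetComponent<RigBuilder>();
            builder.layers.Add(new RigLayer(rig));
            builder.Build(); // rebuild the animation graph with the new rig
        }
    }
    ```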

    4. Lighting

    Direct lighting involves less technical complexity, so the differences between the engines mainly have to do with the alternative ways of handling Global Illumination, which also takes indirect lighting (light bouncing off lit surfaces) into account.

    Both engines have these options:

    • Baking lightmaps (which helps GPU performance at the cost of disk space, memory and the time it takes to bake them)
    • Screen Space Global Illumination / SSGI (a post processing effect that doesn’t take into account the light bounced from objects out of view)

    What sets Unreal Engine ahead is its Lumen lighting and reflection system, which can provide very realistic looking indirect lighting in real time without being overly resource intensive.

    Unity also has an option to bake what the engine calls “realtime lightmaps” (using a middleware feature called Enlighten): tiny lightmaps that let the engine calculate indirect light in real time. This video by UGuruz is great at showing what it does. The geometry covered by the lightmaps has to be static. The feature was apparently introduced to Unity back in 2015, and Unity has stated that the current Unity 6 is the last version to support Enlighten. Having only learned about the feature recently, I really like the option to alter lighting in real time with GI (which even Ray Tracing doesn’t yet seem to provide very well in Unity), so I’ll keep my fingers crossed that there’ll be a replacement in version 7.

    I made a simple test scene (using models from this asset store pack) to test different lighting methods in both engines.

    Unity:

    • SSGI = Screen Space Global Illumination
    • SSAO = Screen Space Ambient Occlusion
    • SSR = Screen Space Reflections

    Note that with Path Tracing, rendering a frame takes minutes, so it’s unsuitable for any real-time use in either engine.

    Someone on Reddit pointed out that I didn’t take into account the effect of Shadow Filtering Quality, which would’ve softened the real-time shadows around the lamps. Here’s a comparison between Medium (which I had on in the screenshots above) and High:

    This video by Sakura Rabbit shows a superb looking lighting setup using Adaptive Probe Volume, SSAO and SSGI (probably Ray Traced since the lighting gets updated with a bit of a delay).

    UE:

    I wasn’t able to bake lightmaps in UE, despite spending a few hours trying to problem solve it.

    A video of Nvidia’s “Zorah” demo using Lumen.

    Testing realtime global illumination

    I wanted to test how Unity’s “Enlighten” (“realtime lightmaps”) would compare to Lumen in a cavern-like environment with a lot of indirect light. The way Lumen illuminates the surfaces with bounced light looks more even, but I don’t think the difference is night and day. Lumen also has other upsides, though: the light source can cast soft shadows, and the geometry receiving the global illumination can be movable, since nothing is baked.

    In the video below, I also included Unity’s Screen Space Global Illumination with Ray Tracing. As you can see, it takes quite a long time to adjust to lighting changes.

    Light Types

    UE                 Unity
    -----------------  -----------------
    Directional Light  Directional Light
    Point Light        Point Light
    Spot Light         Spot Light
    Rect Light         Area Light
    Sky Light          -

    The absence of a Sky Light in Unity is the main difference. In Unity, ambient lighting is added from the Environment tab of the Lighting window, by adding a Volume Profile to its slot to act as the source of the skylight, and generating the lighting. The intensity can then be adjusted from a different place (the Volume component). For years I didn’t know this and stumbled in the dark trying to achieve the appearance of ambient lighting with just the settings in the Volume component and reflection probes.

    Comparing a nighttime and daytime lighting setup

    Real-time volumetric lighting in this kind of a night-time scenario without much indirect light looks pretty similar in both engines.

    I tried to get the Unity version to look as similar as possible to the UE one, so the lighting is a bit different from the video at the top of the post.

    In the daylight screenshots below, the main difference is how much more bounced lighting you can see on the feathers in shadow with UE’s Lumen.

    Screenshots in daytime real time lighting in both engines. The only thing baked is a reflection probe in the Unity screenshot.

    I suppose there are much better setups to show Lumen in all its glory than this scene. However, below is a video of the global illumination switched on, helping you see what Lumen does for the lighting in this case.

    Switching on Lumen’s global illumination.

    I tried using baked lighting and an Adaptive Probe Volume in Unity, which add some indirect light to the walls, the ground and the character.

    Testing a video texture as a light source

    This teaser video of the MegaLights feature of UE shows video textures radiating light, and a bunch of YouTube tutorials released in its wake have shown how to pull that off, so I had to try it out too.

    Sure enough, applying a video as a texture in a Rect Light works without much hassle with Lumen, without even having to switch on hardware ray tracing.

    In Unity’s HDRP, it also seems possible to link a Render Texture as the “Cookie” (a mask) of an Area Light and play a video on it through the Video Player component. Apparently it would require Path Tracing for the light to affect the volumetric fog, though.

    A guard watching the trailer of Gust of Wind in both engines.
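
    For reference, the Unity side of this takes only a few lines of script. A hedged sketch (depending on the HDRP version, the area light cookie may be exposed through HDAdditionalLightData rather than Light.cookie):

    ```csharp
    using UnityEngine;
    using UnityEngine.Video;

    // Play a video into a RenderTexture and use it as the cookie of an
    // HDRP Area Light, so the light carries the video's colors.
    public class VideoLight : MonoBehaviour
    {
        public VideoClip clip;
        public Light areaLight;            // a Light set to the Area/Rect type
        public RenderTexture videoTexture; // e.g. 512x512

        void Start()
        {
            var player = gameObject.AddComponent<VideoPlayer>();
            player.clip = clip;
            player.isLooping = true;
            player.renderMode = VideoRenderMode.RenderTexture;
            player.targetTexture = videoTexture;
            player.Play();

            areaLight.cookie = videoTexture;
        }
    }
    ```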

    5. Testing reflections

    For some reason, Lumen seems to have added some brightly lit spots in the reflection. However, comparing the screen space reflections, in UE they seem to work better with the volumetric fog (even though I had volumetric fog enabled for reflections in Unity as well).

    6. Post processing effects

    Without delving deeper into them, both engines have pretty much the same post processing options available. In UE, there’s a list of effects as expandable foldouts, while in Unity, you’d add what you need in your Volume component as Overrides.

    The color grading options of UE seem to have a vast amount of controls, even though Unity also has more settings than I’d know what to do with, not knowing much about color grading beyond the basics. If you know exactly what you’re after, I suppose it might be easier to get there in UE.

    Though I’ve expanded only one, UE seems to have an infinite number of color wheels one can adjust.

    7. Playing particle effects from timeline

    Particles made with UE’s Niagara system and Unity’s VFX Graph (the newer particle system) both show up when previewing the cinematic in the Timeline. With Unity’s old particle system, however, you’d have to enter Play mode to see the particles.

    8. Rendering the sequence

    In UE, you can render with an export window that’s included out of the box.

    In Unity, you’d have to install the free Recorder package from the Package Manager and add a Recorder track on the Timeline, but after that, it works the same way.

    Once you’ve set everything up, rendering is easy and takes about the same amount of time in both engines. For this cinematic in UHD resolution, it took me about half an hour.
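
    As a side note, the Recorder can also be driven from editor scripting instead of a Recorder track, which is handy for repeatable renders. A hedged sketch using the Recorder package’s editor API (the frame range and resolution are just examples):

    ```csharp
    using UnityEditor.Recorder;
    using UnityEditor.Recorder.Input;
    using UnityEngine;

    // Editor-only (place in an Editor folder): start a movie render
    // without a Recorder track on the Timeline.
    public static class BatchRender
    {
        public static void RenderUhd()
        {
            var settings = ScriptableObject.CreateInstance<RecorderControllerSettings>();

            var movie = ScriptableObject.CreateInstance<MovieRecorderSettings>();
            movie.Enabled = true;
            movie.ImageInputSettings = new GameViewInputSettings
            {
                OutputWidth = 3840,
                OutputHeight = 2160
            };

            settings.AddRecorderSettings(movie);
            settings.SetRecordModeToFrameInterval(0, 1800); // frames to render

            var controller = new RecorderController(settings);
            controller.PrepareRecording();
            controller.StartRecording();
        }
    }
    ```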

    Rendering transparency

    In both engines, it’s also possible to render on a transparent background. Unfortunately, with Unity’s Recorder package, it seemed to be up to chance whether it worked or not. On most renders, either the sky or the view of a previous camera filled the background. Changing the range of frames to render seemed to randomly yield different results.

    In both engines, getting the background transparent required hiding some effects that affected the foreground as well, so if I were to use these renders for something, I’d take only the alpha from the transparent render, and the colors from a normal one.

    9. Working with assets

    Though I have a bit of a bias from having much more experience with Unity, I think importing models, changing materials and just navigating in the program is much faster and more convenient in Unity.

    Imported 3D assets

    UE requires assets to be set up a certain way for the import (and especially reimport) to happen tidily. It seems to be very particular about how you should work in the 3D software you export your models from. Unity has its peculiarities too, but in general it seems a lot less picky about the FBX files you feed it.

    Since most of this is information that most people don’t need, below are some foldouts covering different topics of the workflow between Blender and the game engines.

    Avoiding scale and rotation pitfalls for FBX (Blender -> Engine)

    When exporting from Blender (which just happens to be what I’m familiar with), most of the settings can be handled by saving the export settings to a preset, but there are some annoying steps you have to take for each asset that you export for each engine, to make sure the model is imported in the right scale (1, 1, 1) and rotation (0, 0, 0).

    For Unity, when exported with 0 rotations from Blender, an X-rotation of -90 is added to all the objects at the root level in the FBX hierarchy. I’ve always fixed it by adding a rotation offset in Blender to reverse it, but this tutorial shows that alternatively, there’s a checkbox you can tick in the import settings in Unity to fix it.

    These steps (from this tutorial) seem to work for UE:

    • Set units to metric and Unit Scale to 0.01
    • Scale the model 100 times bigger, and apply scale

    Importing characters and animations

    The way I usually work in Blender is to make many animation clips in the same scene, then export the file with all the animations included. Using the FBX format for that kind of export results in a list of unnamed animations when imported into Unreal Engine.

    FBX file exported “wrong” for UE (with many animations contained in the same file). Renaming the animation clips would be another mistake, since after that, you can’t reimport them.

    As a side note, with GLB format, it seems like the animation clips appear with the correct names in UE as well, even if there are several in the same file (and can be reimported after modifications in the source file).

    According to this video, the correct workflow to get characters from Blender to UE is this:

    1. Export each animation as a separate FBX, and the character rig itself as a separate FBX as well.
    2. Have those FBX files somewhere outside the UE project and import by drag and dropping to the Content Browser.

    Exported and imported correctly in UE, the animation clips appear with the correct names.

    For Unity, you can include as many animation clips in the same FBX file as you want. They can be easily organized and renamed in the import settings.

    In Unity, animation clips in the same FBX file are listed in the import settings.

    Importing environment assets

    In Unreal Engine, an FBX containing many objects can be imported either as combined or with the meshes separated. In the latter case, they’d show as individual files in your Content Browser, without the object hierarchy you had in the FBX.

    However, there’s a plugin named Datasmith, which can be used to import a 3D scene with the hierarchy intact. I followed this tutorial to do that with the hall environment. The imported scene is a group of objects you can just drag and drop to a scene. Beware though, changing it in one scene will change it for all scenes it’s placed in.

    With Unity, the object hierarchy of imported FBX files is preserved without having to use external plugins.

    Morph targets (AKA Shape Keys or BlendShapes)

    Morph targets are a way to save many shapes of the model (without adding or removing vertices), which can be switched on and off smoothly with sliders.

    In Unity, after simply ticking the Import BlendShapes checkbox in the import settings of your character model, they appear as sliders in the Skinned Mesh Renderer. The adjustments you make in the model in the scene, or prefab, will be in effect in the game or cinematic as well (assuming you don’t have animations overriding them).

    BlendShape sliders are very simple to adjust in Unity.
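
    The same weights can also be set from script, for example to drive a shape outside the Timeline. A minimal sketch (the shape name is hypothetical):

    ```csharp
    using UnityEngine;

    // Drive a BlendShape weight on a SkinnedMeshRenderer from code.
    public class BlendShapeDriver : MonoBehaviour
    {
        void Start()
        {
            var smr = GetComponent<SkinnedMeshRenderer>();
            int index = smr.sharedMesh.GetBlendShapeIndex("Smile"); // hypothetical shape
            if (index >= 0)
                smr.SetBlendShapeWeight(index, 100f); // weights run 0-100
        }
    }
    ```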

    In UE, you can only preview the effect of the BlendShapes with sliders by opening the skeletal mesh. To have them “stick” in the game or cinematic as well, you’d have to add them as nodes in a blueprint, as explained in this tutorial.

    Previewing morph targets in the “skeletal mesh” asset in UE. The changes made there aren’t applied so that they’d show up in the cinematic.

    The node graphs needed in UE to get the values of two morph targets set in play mode and editor.

    Browsing and making changes

    In my experience, browsing assets and making changes to them is much easier in Unity, thanks to the Inspector panel and the preview window. The latter shows an image of the prefab, model or material, and animations can be played in it. If you select many assets, it divides into a tile grid showing them all. The Inspector panel shows the settings of the selected asset, and in most cases, you can make changes there without further hassle.

    In Unreal Engine, if you want to make changes to a material, for example, you’d have to double-click it open. That doesn’t sound like a big hurdle, but if you’re starting a project, have just imported a bunch of assets and maybe need to make the same change to many materials, it definitely feels like there’s some friction preventing me from working as quickly as I could.

    Ease of problem solving

    When I get stuck using Unity, the help is usually a quick Google search away. The Unity community is huge and very active on forums. Pretty much everything has already been asked and answered before.

    In UE, settings affecting the same thing can be found in many places, and whatever you’re trying to do, there can be a setting somewhere else preventing it. In many cases, finding the solution to something simple I was trying to do took hours.

    10. Conclusion

    From the standpoint of visual quality alone, UE is pretty clearly ahead, thanks to Lumen enabling good-looking real-time lighting with pretty good performance, and to better-looking depth of field and reflections. It also has other nice features I didn’t get into, like Nanite and MegaLights allowing a huge amount of mesh detail and light sources in a scene without much performance cost, more control over volumetric clouds, MetaHuman, and no doubt a lot that I don’t know about.

    Reading some materials and watching videos when making this blog post, I gathered that there seems to be a consensus that UE is better with animations. I’m not so sure about that myself, at least when it comes to the use cases outside gameplay. If you do the painstaking work of setting things up correctly in UE, you can work more within the engine and rely less on external animation software. However, Unity has some perks that save a lot of time (especially with retargeting animations).

    I think from the standpoint of ease and convenience of working in the engine in general, Unity takes the points in my book. There seems to be less “friction” slowing down the work, and more freedom to set things up the way you want. UE seems to box you into a very specific way of setting things up for them to work, as with character rigs.

    Looking at the cinematic tools alone, I’d say in UE it’s a bit more convenient to animate things though, and it allows you to do more without external plugins.

  • A Look Back on Creating Gust of Wind Intro Cinematic


    Background

    I made this intro cinematic for Gust of Wind, a hobby project I’ve been working on with Unity, in early 2023. The game was nearing its early access launch, and with some lengthy dialogs in the first meters of the story, I felt like it needed something to quickly show the player what they can expect the gameplay to be about.

    This post is a mixed bag of notions of my workflow, things I learned, things I probably did wrong, and things I was able to improve when coming back to it a couple of years later.

    Here’s the final cinematic, with music by Olli Oja.

    Storyboard

    I started by painting a storyboard. The main gist of it is that the character we follow was sent on a sabotage mission. He got caught while carrying it out, which is why the main character of the game was sent in his stead on a voyage that the story of the game is set on.

    Setting

    I had recently watched the Michael Mann film Thief from 1981 and really liked the atmosphere of its opening sequence with a combination of early morning lighting and wet streets.

    I aimed for something similar for the lighting, adapted to the setting of the game, a post-apocalyptic world left behind by an Industrial Age civilization.

    For the environment, I chose a long, dilapidated factory hall, since it gives plenty of room for both closer and more distant shots without being very laborious to model. Some barrels, an overhead crane and the train of compressed-air carts provide some hiding places for the sneaky character.

    The factory hall needed to be modeled and textured for the cinematic. However, it didn’t require that many pieces. Here are the beams and metal sheet variations I used to piece it together. The textures aren’t great, but good enough to do their job in hazy lighting.

    Lighting

    For the original version of the cinematic, I used real time lights since dynamic shadows were important and baking lighting is quite time-consuming.

    Volumetric fog combined with a directional light radiating at a low angle contributed nicely to a mood of early morning.

    The lighting consists of:

    • Directional light
    • The firelight of the barrel
    • 11 point lights placed just to fake indirect lighting, or to add some clarity to shots that would’ve otherwise been too murky.

    Since the shadows, particles and post effects made the scene quite heavy, I later ended up pre-rendering the cutscene by screen recording it at half the normal playback speed and speeding it back up to normal in video editing (which provided a better frame rate than playing it at normal speed).

    However, fiddling around with the scene now, almost two years later, I tried baking the indirect lighting to see what it looks like. The video at the top of this post has that baking applied, with some subtle global illumination on the shadows (while the direct lighting is still handled real time).

    Below are some views from the version of the cinematic the early access launched with, compared to new ones with some subtle changes I made now:

    • Baked indirect lighting (it’s hard to distinguish, but you can maybe see some hint of bounced light on the barrels and the walls of the hall in shadow)
    • Slightly better quality in volumetric light (less dithering)
    • The shadows on characters are slightly brighter as well, probably due to having Screen Space Global Illumination disabled in the original version.

    Rain Effects

    For the rain, to save some performance and to get refraction to the droplets, I used this technique instead of having the rain completely consist of particle effects. The position of the rain cylinder is animated in the cinematic so that it’s always centered to the camera currently in use.

    The ripples on the ground are produced by a regular particle effect with a flat box emitter. It would’ve been possible to make a mesh of the needed part of the terrain and use that as the emitter, but in this case the ground is pretty much completely flat (apart from the puddles, which are just dents on the plane added with parallax).

    Working Neatly in Unity’s Timeline

    Arranging things neatly in nested timelines not only saves time, but helps keep the visual result of the sequence tidy as well.

    If you animate everything on a single timeline, there’s likely to be some annoying flickering when the camera angle changes and perhaps a light or a character is meant to get enabled or disabled precisely at the same time. Getting those timings synced is a pain when there’s a big number of tracks.

    Unfortunately, I didn’t know you could nest timelines when working on the cinematic. Only when working on a video in UE5 later the same year did I notice how handy it is to be able to group cameras and lights into separate shots, and I wondered why you couldn’t do the same in Unity.

    But as I later found out, you can, by making your shots as separate timelines, and nesting them inside your “master” timeline. Here’s what it could look like:

    Based on a bit of experimentation, this method seems like a much handier way to animate a sequence with several shots. You can also have animated elements with continuous motion in the master timeline, and when you’re modifying one of the nested timelines, that animation shows up with correct timing.

    For comparison, here’s my timeline from the cinematic; it looks pretty overwhelming.

    Adding Character Animations

    Most of the character animations are ones used in the game as well, a few of them having some minor alterations for the versions used in this cinematic.

    However, I had to make one new animation for opening the bolt with a wrench. I added a rotating bone for the bolt the wrench is supposed to open and a helper object linked to the bolt bone (so that it rotates with it). The hand IK is constrained to the position and rotation of that helper object when the wrench is turning the nut. Without constraining it this way, it would’ve been impossible to get the wrench positioned just right.

    This is the shot in question. In the gif below, the rotating bone of the nut and the hand target bone linked to it are highlighted:

    As a handy detail about character animations, the Timeline tool lets you add an “Override Track” on an animation track. This would be a separate animation masked to a certain part of the character. A case in point is the enemy character carrying a weapon while walking. The carrying pose is a separate animation clip, which I added on an override track. That override track has an Avatar Mask applied, which defines which body parts the animation affects (in this case, both arms).

    There are plenty of nifty things in Timeline, like how easy it is to add transitions between clips, and triggering functions from the timeline (which I haven’t gotten into). However, I think these were the main notions I had for this small post mortem of the cinematic. Thanks for reading!

  • Exploring the New 3D Features of After Effects


    (Originally posted May 31st 2024)

    For as long as I’ve used it, After Effects has been a program mixing 2D compositions with the option to position layers in 3D space, apply lighting and move a camera in that 3D space.

    Adobe has recently introduced new 3D features to the After Effects Beta. For someone like me, who sometimes animates paintings with parallax and is no stranger to 3D modeling, finding out their extent by testing them in a simple scene seemed like an interesting topic to delve into.

    With the new tools, it’s possible to set up a 3D scene like the video below, directly in AE without plugins. However, there are no modeling tools beyond extruding and beveling shapes and text, so the 3D models would have to be brought from another piece of software.

    Short summary of my notions:

    • The new renderer, called Advanced 3D Renderer, seemed very fast at rendering a 3D composition
    • Importing 3D models with PBR materials and bone animations is possible (though some missing features, like shadows for most lights, definitely reflect that it’s still in Beta)
    • An Environment Light with an HDRI map can be used to light your scene, and it can cast nice-looking shadows
    • Extruding and beveling text and shape layers, and curved footage, which were previously available in the Cinema 4D renderer, are also supported in the Advanced 3D Renderer.

    Limitations to keep in mind:

    • Features that don’t work yet in Beta: shadows (except for Environment Lights), motion blur (except if you add Pixel Motion Blur on an Adjustment Layer), depth of field (except if you add it as a 3D Channel Effect, which has its drawbacks), as well as effects on 3D layers and precomps of 3D scenes
    • The ability to do 3D modeling or modify the geometry of imported models isn’t in the pipeline, according to Adobe’s FAQ. So this won’t be a full substitute for other 3D packages, but for something like adding a 3D object to a composited sequence, the features offer a faster workflow.

    In this post, I’ll go through my experiments with the new features.

    1. Animated butterfly

    Exporting bone animation

    This is the bone animation with which I exported the butterfly from Blender.

    After Effects can currently use 3D files in OBJ, GLTF and GLB formats (GLTF and GLB being the ASCII and binary versions of the same format). OBJ doesn’t include skinning or animations, so I used GLB. Importing the model into After Effects was as easy as dragging and dropping it into the Composition window and pressing OK in this prompt.

    If the model has several animations, you can pick the one you want to use from the list shown below:

    As a nifty detail, you can also animate Time Remapping of the imported animation, the same way you would with footage (which is how I animated the butterfly to flap its wings at different speeds throughout the video, using the single looping animation clip).

    Trying to import a Shape Key animation

    Just for the sake of experimentation, I tried to export the animation as a Shape Key* animation. A 3D model can have several Shape Keys, which are different arrangements of the same vertices. You can turn them on and off with a slider to adjust and animate the form of the model. They’re often used for facial expressions or features, for example.

    *same as BlendShape or Morph Target in other 3D software

    In After Effects, the Shape Key animation appeared as an animation clip, but the model didn’t move (although, checking by importing back into Blender, it had exported correctly).

    2. Grass / placing objects

    I made a very simple grass asset by using a transparent grass texture I painted in Photoshop, on a plane.

    For the ground below the grass patches, I used a noise image as a displacement map for a subdivided plane in Blender.

    Positioning the grass patches on a ground plane in After Effects was quite slow, since moving the view is as difficult as it’s always been with the camera tools. In hindsight, it would’ve definitely been handier to combine the grass patches with the ground plane in Blender before exporting.

    With the grass patches placed in After Effects, I noticed that the objects too far from the center of the world coordinates didn’t cast shadows:

    However, I fixed this by linking all 3D layers (that weren’t linked to something else) to a single 3D null layer (placed at the center of the world in 1920, 1080, 0 since my comp was in UHD resolution). That’s a good way to do all-encompassing scaling and positioning changes in AE, even if there’s position or scale animation on some of the layers. Here’s the result, with the grass patches further away as well now casting shadows:

    3. Beveled Text and Shapes

    Using the Advanced 3D Renderer, you can extrude a text layer without having to deal with the slowness of the Cinema 4D renderer. As previously with the Cinema 4D renderer, you can extrude text and shape layers with three types of bevels. For the material, you can adjust the values shown below, but it can only be a single color, not textured.

    According to the list of enabled features in Composition Settings, there’s something called “Material overrides on text/shape bevels and sides”. I didn’t find out what it means. Apparently you can add a stroke on a beveled object and make it a different color, but the stroke doesn’t have separate material settings, and there seems to be some z-fighting between the stroke mesh and the color fill mesh:

    4. PBR Materials

    Below is the butterfly imported into After Effects, side by side with the Albedo and Opacity maps. Full and partial transparency and double-sided materials seem to work fine. For the textures, the FAQ mentions only PNG and JPG being supported; I suppose that’s when using OBJ as the format. The albedo texture below (with the opacity map in its Alpha channel) is a TGA texture, but when embedded in a GLB file, it worked fine in After Effects.

    According to Adobe’s FAQ, the properties available for the PBR materials are those included in something called Adobe Standard Materials (ASM). Here are the details on what they are; quite a range of different properties.

    I tried out these common PBR material properties in my test scene:

    • Albedo (colors)
    • Normal
    • Metallic
    • Roughness
    • Opacity (partial as well)
    • Emission

    To show a wider range of maps of the PBR workflow in use, I made a UV mapped version of the 3D text in Blender. After cleaning up the geometry and mapping the letters, I imported the model in Substance Painter and applied different material presets to the different letters. Below you can see the effect of roughness and metallic maps, and how the normal map affects the grains of the wood for example.

    5. Environment Light

    Environment Lights use HDRI (High Dynamic Range Image) maps for image-based lighting. An HDRI means an image with high bit depth, but in the context of 3D graphics, it also means a 360° image of an environment. This article by Mark Segasby of lightmap.co.uk describes it in detail.

    You can add a light, set it as an environment light, and apply any HDRI image (placed in your composition as a hidden layer). You can also tick on shadows for it, which can provide nice-looking occlusion that resembles Ray Tracing. My example scene isn’t the best setup to show it in, but the shadow below the dragon in this video by SternFX gives a better idea of what it can look like.

    Here you can see the effect of different HDRIs in my example scene. All the HDRI maps are from Poly Haven.

    You can also make a 3D plane which only accepts shadows but is otherwise transparent, as described here.

    3D Channel Effects

    The effects in the 3D Channel section have existed before but I’ve never really noticed them, so testing them with the new features was a great excuse to try them out.

    The workflow with these effects is making a precomp of your 3D scene, and then applying the effect to the precomp.

    The fog effect lets you mask the fog with a separate black and white texture or precomp (which goes into “Gradient Layer” slot).

    Here are three of the other 3D Channel effects in use (the remaining effects are Cryptomatte, EXtractoR, ID Matte and Identifier, which I didn’t test out this time).

    The frame shown below has Fog and Depth Of Field applied.

    However, you can see the edges of the screen being a bit messy, and that there’s some blur around the wings of the butterfly. Using the Depth of Field option in the camera instead (which is currently unavailable in the beta) wouldn’t cause these problems.

    Also, as mentioned in the Composition Settings, effects on collapsed 3D precomp layers currently don’t work (I also noticed this when trying to render the composition with the Fog and DoF effects through Media Encoder: the fog was invisible and the DoF was blurred everywhere).

    Testing with more complex meshes

    I tried importing meshes that together amount to 1 million tris. This had no noticeable impact on the time it takes to update the frame in full resolution in preview (it was about 5 seconds both before and after).

    I also added a chrome sphere by setting its material’s Metallic value to full and Roughness to 0. As you can see, it reflects the environment light but not the objects around it (since there’s no Ray Tracing).

    Note that the blue sky in the background is not the HDRI map but just a solid with a gradient. The CC Environment effect appeared way too dark, so I found no way to use the HDRI as a visible background.

    Applying effects to a 3D object

    As Adobe’s FAQ mentions, it’s currently not possible to apply effects to 3D layers directly, but you can add an adjustment layer on top of everything and select the 3D layer as its Track Matte layer. I tried recoloring the butterfly this way.

    However, if there are other objects in front of the object you want to apply the effect to, you’d have to mask them out somehow.

    Conclusion

    Here are the new 3D features in a nutshell (introduced in 2023-2024):

    • Advanced 3D Renderer, which renders the 3D content very fast. Rendering my scene in UHD resolution without particles and adjustment layers took only 6 minutes on my system.
    • Image-based environment lights and shadows (HDRI Environments)
    • Models with PBR Materials and possibility to use bone animations
    • Curved footage layers and extruding and beveling Shape- and Text Layers now work in the Advanced 3D Renderer as well (not just Cinema 4D)

    3D formats that can be imported (according to the FAQ, they’re working on adding more):

    • OBJ: Contains only the static 3D mesh, but can reference .MTL files that contain materials and textures. Animations, skinning and Shape Keys can’t be included.
    • GLTF: can include meshes, bones, animations, Shape Keys, materials (PBR) and textures
    • GLB: binary version of GLTF (GLTF being ASCII), supports the same features

    Still not included in the current Beta:

    In addition to the points listed above, here are some problems I bumped into:

    • In my test scene, only objects close enough to the world center cast shadows from the Environment Light
    • If you have an Adjustment layer between two 3D layers, those two 3D layers don’t clip with each other
    • Shape Key animations don’t get imported

    Overall impression:

    These 3D features don’t seem like an all-encompassing replacement for animating more complex sequences in other 3D programs, but for simpler compositions involving 3D models (like flying and rotating coins, or a character running across a text title), being able to do it without plugins and other programs will smooth the workflow nicely. The HDRI environments casting shadows also seem like a lightweight solution for achieving nice lighting, although the missing shadows for regular light types are currently a big drawback of the Beta version.

  • Fan Trailer – Doom (2016)


    (Originally posted in September 2020)

    To continue my streak of fan trailers, I wanted to do something faster and more energetic this time, edited to a metal soundtrack. Since I had Doom (2016) in my Steam library, that ended up as the subject. The game has moments with a really good flow of movement in the heat of battle, and I wanted to try to reflect that in the editing of my fan trailer.

    After playing through the game and capturing the best parts with Nvidia Shadowplay, I started the process by extracting the dialogue audio from the game and finding the clips with some dialogue lines that might be suitable for a trailer. From those, I pieced together a voice over that served as a basis for the structure.

    Next I went through the game’s soundtrack by Mick Gordon, in search of a clip that would structurally and mood-wise work well in a trailer. I ended up using a part of the track “Flesh & Metal”, which had a good variety of calm and intense moments and escalation. It provided a great basis to sync moments of impact in the heated battle footage to.

    After planning the structure with voice over and music, I took the most interesting looking moments in the gameplay clips and placed them on the timeline. I captured new gameplay clips where necessary to fill out the escalation of the beginning with ominous pentagrams, hell-landscapes and other things that more or less fit the voice over.

    For the latter, action-focused half of the trailer, I matched moments of impact in the footage to impacts in the music. I tried to use as many match cuts as possible, with continuity of motion in the edit – the end of one clip has the same direction of movement as the beginning of the next. This worked out better in some cuts than others.

    Once everything else was finished, I adjusted the gameplay audio with volume keyframes and balanced the audio tracks with compression so that all parts (music, voice over and gameplay) can be heard.

  • Fan Trailer – Grow Home

    Fan Trailer – Grow Home

    (Originally posted in September 2020)

    I thought I’d make another fan trailer to keep myself occupied in quiet times between freelance work. This time the subject turned out to be Grow Home by Ubisoft Reflections.

    It’s been one of my favorite games since I first played it through a few years ago. There are many aspects of it that I really like, from the style of the graphics and the sense of huge scale to the way the character moves (especially when climbing, with the pad triggers “gripping” the cliff face with the corresponding hand, while the main character Bud is otherwise totally at the mercy of physics). I also like the idea of growing something huge whose form the player has control over.

    Here’s how the fan trailer turned out. For music, I used the game’s menu music by Lewis Griffin.

  • Fan Trailer – Bioshock Infinite

    Fan Trailer – Bioshock Infinite

    (Originally posted in September 2020)

    I’ve had Bioshock Infinite in my Steam library for a long while and thought it might be a good subject for a fan trailer. Up to this point, something about the cartoony style had convinced me that the game wasn’t my cup of tea, but I ended up being happy I gave it a whirl. I really liked how the story unraveled, and the visuals, even still in 2020, are consistently great.

    Here’s my take on a fan trailer:

    I captured the gameplay with Nvidia Shadowplay, which seems to me by far the handiest way: saving 15-second clips with a shortcut after something interesting happens makes the footage much easier to manage than capturing continuously.

    After the credits had rolled, I listened through all of both Elizabeth’s and Booker’s lines to find suitable comments for a trailer voice over. I also listened through the game’s soundtrack by Garry Schyman and, after realizing it had everything I needed for a trailer, combined suitable clips into a rough audio structure.

    Rummaging around in the multitude of video clips, I took the best ones onto my timeline and assembled them into the skeleton of a trailer. When the editing felt good to me, I polished the audio, turning up some of the gameplay sound and adding some sound effects to heighten the drama of certain moments.

    Assembling the clips and editing this trailer took me about 24 hours (not counting the playing time, for which Steam clocked 15 hours).

  • Process of a Painted Cinematic

    Process of a Painted Cinematic

    (Originally posted in January, 2020)

    I made this video as a cutscene for a game project I’m planning, but also to get a portfolio sample that shows my process. All my videos consisting entirely of paintings so far have been hobby projects and I haven’t worried too much about the hours, but for this one I kept track of them and tried to work as efficiently as possible.

    Script and Thumbnails

    In this cutscene, I wanted to show the player’s ship crew attacking an enemy ship at night, given the element of surprise by a thick fog. To add an extra twist, I wanted the crew to attack dramatically from above, something like the ”Pole Cats” in Mad Max: Fury Road.

    In the ship design that I had drawn earlier, there’s a derrick crane (something that has been used at smaller scale on steamships, fishing boats and sailing ships to load cargo into the hull), modified to also serve as a sort of siege ladder.

    Based on this rough idea of what should happen in the cinematic, I thought about the clearest way to tell the story in an interesting way with as few scenes as possible, without resorting to narration. The result was the first draft of the script with thumbnails, shown below.

    Scene 1 (8 s): Close-up of gauges and pipes glistening in moonlight. Particles of foam or frost are floating about. Strong depth of field effect; there’s some blurry movement in the background.
    Scene 2 (8 s): Camera focuses on the background. A man operates a crank attached to the mast; cogs of the mechanism are turning in the background.
    Scene 3 (8 s): Camera is now on the other side of the mast, still about waist-high. The mast is turning so that the crane will face to the right (two men have climbed onto the mast). Camera moves backwards and reveals a group of men looking grimly to the right, makeshift weapons in hand.
    Scene 4 (4 s): Fade to the men atop the crane. Camera zooms into the face grille of one man’s oven helmet; maybe drops of sweat seeping through the holes.
    Scene 5 (2 s): A quick, blurry camera move to the captain giving a hand signal.
    Scene 6 (2 s): A man kicks a lever on the winch.
    Scene 7 (2 s): Camera is between the hulls of the two ships (the enemy ship shown now for the first time). The beam of the crane is dropping in free fall.
    Scene 8 (2 s): Camera is behind the men on the crane as the beam slams into the railing of the other ship and locks in place.
    Scene 9 (8 s): The guy with the face grille stands on the deck of the enemy ship. Camera starts with a close-up on him and zooms out evenly. Another guy jumps on deck, and a few others run across the bridge now formed by the beam. As the camera gets further away, it reveals the legs of some barbarians (tattoos, fur leggings, cuts, bruises, etc.).
    (Each scene was accompanied by a thumbnail sketch in the original draft.)

    First mockup video and finding stock music

    When I had planned out what should happen in each scene visually, I started working in After Effects, importing the thumbnails and animating rough camera movement, just to get an idea of how the lengths of the shots and the general flow between them would work when put together.

    I also started to focus on finding suitable music. What I originally had in mind was something more classical, but after numerous hours of searching I ended up using this metal track, because it has a strong mood of anticipation of what’s going to happen next, and clear points of ”change” (namely the drop at 25–27 s) where you can place something interesting happening in the video.

    ”Metal Cinematic Trailer” by OttoMusic

    After making the decision, I cut the mockup video to fit the structure of the music (also adding a few short scenes to make the pacing reflect the build-up). Here’s what I had at the end of day 4. Although I did end up changing the viewing angles of a couple of shots afterwards, this pretty much served as the skeleton of the video until the finish line.

    Scene 3

    When I had the structure of the video laid out, I started working on one of the scenes. I wanted to start with something that wasn’t too easy or too laborious, so that it would give me some kind of an idea of how much time the rest of the scenes would take on average. Since scene 1 would be a whole lot of work because of its longer length (requiring more things happening) and the 3D elements I’d have to add, I started with scene 3.

    Here’s the progress of the painting from sketch to what I had at the end of day 7. I did some rough color experiments as well, but ended up deciding it’s best to go with black and white, both for efficiency and because, with the scene set at night, there wouldn’t be much color anyway. Later I made some changes to the armor design and lighting.

    Scene 4

    In this scene, for the lighting and the look of water drops on skin, I referenced the ”tears in rain” scene in Blade Runner. The lighting was very suitable, since that scene is set at early dawn, which fits my setting, and it’s something I could get pretty close to with just color adjustments on a black-and-white painting.
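
    To give an idea of what that amounts to, here’s a purely illustrative Python/Pillow sketch of tinting a black-and-white frame into a dawn-like duotone; the real grading was done inside the compositing software, and the file name and colors here are made up:

    ```python
    # Illustrative duotone tint: map a grayscale frame's shadows to a cold
    # blue and its highlights to a pale warm tone. File name and hex values
    # are made up; Pillow ("pip install Pillow") is used just for the demo.
    from PIL import Image, ImageOps

    frame = Image.open("scene4_frame.png").convert("L")   # force grayscale

    tinted = ImageOps.colorize(
        frame,
        black="#101a30",   # shadows: cold night blue
        mid="#5a6b8c",     # midtones: desaturated blue-gray
        white="#f2e3c8",   # highlights: pale warm dawn light
    )
    tinted.save("scene4_frame_tinted.png")
    ```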

    Below are progress shots of the painting side, as well as the animated scene at the point where I had it at the end of day 9. It acted as a model for color adjustments to use on the rest of the scenes as well.

    Scene 1

    Now that I had a clear idea about the painting and coloring style I was going to use, I moved on to the laborious scene 1.

    In an attempt to make the viewer curious, I used a strong depth of field effect, revealing the background little by little. The turning mechanism of the mast isn’t realistic, but I wanted to have something showing gears in movement, as technology from the turn of the 20th century is an important part of the game setting.

    To get the gears to turn in 3D space, I had to make them as 3D models. I used Blender to model them and the Element 3D plugin for After Effects to place them as convincingly as I could in relation to the painted 2D layers. There’s a bit of a 90s CG feel to the textured 3D objects, which I could’ve avoided by spending time painting actual mapped textures for them, but I tried to cut my losses in terms of time spent on a very small thing. Later, in the polish phase, I fixed some mistakes and painted a more interesting texture on the floor.
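
    The gear motion itself comes down to simple math: meshing gears turn at speeds inversely proportional to their tooth counts, and neighbors rotate in opposite directions. Here’s a minimal Blender Python (bpy) sketch of keyframing that principle; the object names and tooth counts are hypothetical, and in the actual video the rotation was animated in Element 3D rather than Blender:

    ```python
    # Minimal bpy sketch: keyframe two meshing gears so the smaller one spins
    # proportionally faster and in the opposite direction. Names and tooth
    # counts are made up for the example.
    import math
    import bpy

    gears = [("GearBig", 24), ("GearSmall", 12)]   # (object name, tooth count)
    base_turns = 1.0    # full revolutions of the first gear over the shot
    frame_end = 120

    for i, (name, teeth) in enumerate(gears):
        obj = bpy.data.objects[name]
        # Gear ratio: fewer teeth -> faster spin; (-1)**i alternates direction
        turns = base_turns * (gears[0][1] / teeth) * (-1) ** i

        obj.rotation_euler.z = 0.0
        obj.keyframe_insert(data_path="rotation_euler", index=2, frame=1)
        obj.rotation_euler.z = turns * 2 * math.pi
        obj.keyframe_insert(data_path="rotation_euler", index=2, frame=frame_end)
    ```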

    Scene 11

    Most of the rest of the scenes had nothing really special about their process, but in scene 11 I tried something I hadn’t done before. Instead of using 3D models of the figures and mast with the camera rotating around them, I figured it would be much less time-consuming, and look more interesting, if I painted the frames by hand. Despite the low frame rate (I didn’t want to spend ages on this), I’m pretty happy with how it turned out.

    Finishing touches

    Alongside doing the scenes, I kept a list of smaller changes I needed to make at the end to fix consistency between scenes (for example the placement of the figures, their outfits and weapons) and to add detail in places that looked too empty. Here you can get a picture of how those changes added some life to the scenes.

    Here’s a breakdown of how long each phase in the project ended up taking:

    • Script and thumbnails: 2 days
    • First video mockup and finding stock music: 2 days
    • Scene 1: 5 days (the 3D elements took time as well), https://youtu.be/1FUKHOkjmF0
    • Scene 2: 3 days, https://youtu.be/q8H6wm0bOpY
    • Scene 3: 2 days, https://youtu.be/BLgHQxRco3k
    • Scene 4: 2 days, https://youtu.be/zuYMzFzoPb4
    • Scenes 5 and 13: 1 day, https://youtu.be/jdhE6H4ltK4 and https://youtu.be/gQ1_yfVvuP8
    • Scene 6: 1 day, https://youtu.be/E3SIEYYYwPM
    • Scenes 7 and 10 (using elements from scene 1): 1 day, https://youtu.be/VmvcqgjipSU
    • Scenes 8 and 12: 2 days, https://youtu.be/UEDfRpJRKfo and https://youtu.be/8zsfKtd6J10
    • Scene 9: 1 day, https://youtu.be/_wAKlIl5xc8
    • Scene 11: 2 days, https://youtu.be/b4PeEmt4rZs
    • Scene 14: 3 days, https://youtu.be/agJ93n3Spt0
    • Last tweaks: 1 day

    All in all: 28 days, at 8 hours per day on average. Started on 19.11.2019, finished on 30.12.2019.