
Collab Project Review

The collaboration project did not run smoothly overall, but it was very valuable for learning.

Here I will summarize some of the problems we encountered during the collaboration.

Video Specs

Through communication with another team, I found that our team did not emphasize using the same high-quality intermediate codec for the final edit. The other group used Apple ProRes 422 for intermediate outputs during production and H.264 only for the final movie.

At the same time, some team members used the wrong frame rate and resolution in their project settings, resulting in a loss of detail.


Most importantly, we did not use the high-quality original footage for compositing, which caused a serious decline in video quality and made rotoscoping, tracking and compositing more difficult.

Edited footage as original input for compositing

Live shooting

Another serious mistake was that we did not shoot against a green screen, which made rotoscoping difficult in post-production. We also did not consider the lighting of the scene thoroughly during the shoot, which caused avoidable problems that then had to be fixed in post.

For example, we did not block a hard light that should have been kept out of the scene, which produced unwanted lighting effects and strongly affected the characters' faces.

Coherence in design

Although we prepared storyboards, a script and mood boards in pre-production, and scouted the various shooting locations, the actual shoot did not go exactly as planned. Looking back at the preparation, we seemed to have done a lot but had not actually made thorough arrangements. We also did not develop a coherent style during post-production: the shots produced by each compositor were stylistically independent of one another.

Lack of quality management, unclear roles and responsibilities

At the beginning we assigned everyone's tasks but did not clearly divide the work, and we did not review it at each stage, so handovers did not go smoothly. For example, one member of our group produced an incorrect clean plate, but her progress was not checked, so the compositor had to redo it; in another case, the work done by the roto artists and the 3D tracker did not meet the required standard, delaying progress. It is inconvenient to fix roto data supplied by others because we cannot see their roto keyframes in our own Nuke script, so I had to check every frame to find which keyframe was causing the problem; it might have taken less time to redo the roto from scratch. I therefore think the roto artists should check the quality of their work and fix any issues before uploading it, but the coordinator considered that part of the compositor's job.

To my understanding, good collaboration means a smooth workflow, but everyone in our group just sent messages in the chat and no one checked the quality at each stage. This led us to waste a lot of time on repeated work, especially as the deadline approached, when we had to deal with work that should have been finished at the beginning. It also led to conflicts and disputes within the team.

In the end we reached an agreement and finished the work. Disagreements are inevitable in teamwork, because everyone is trying to achieve a better result in their own way. Learning how to be more effective and professional through this collab project is truly valuable for the future.

Personal Project Review

Through this personal project I spent too much time stuck in the pre-design phase, and realized how much design knowledge I lack. In the future, I will be more inclined to complete a full project by collaborating with someone more experienced in pre-production.

However, I also learned a lot that I did not know before, such as color and painting. At the same time, I tried some features in Maya, such as dynamic curves, blend shapes, rigging and skinning. It is important to understand the connections between Maya's nodes and keep them organized. I also tried some new Maya features using the example scenes, and I will explore more next term.

For non-photorealistic rendering, I learned how lighting and shading behave and some methods of achieving non-photorealistic rendering in Maya. But this is just the beginning. Next, I will learn more of the fundamentals of non-photorealistic rendering, and study and compare examples of non-photorealistic rendering in different 3D software and engines, such as Unreal Engine, Blender and Houdini.

NPR settings

Art direction

Based on my previous research, I decided to use a more cartoonish, hand-painted style for the project. I want to keep the roughness of the canvas low but visible, and to include outlines.

I will not use too many irregular watercolor gradient effects, especially on characters. I will use three types of shading: highlight, halftone and shadow, but with the boundaries between them blurred. The result will look more like alcohol-based markers than traditional watercolor.
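As a rough illustration of this three-band idea (not MNPRX's actual implementation), the sketch below quantizes a Lambert term N·L into shadow, halftone and highlight weights and uses a smoothstep to soften the boundaries. The threshold and softness values are arbitrary examples.

```python
# Toy three-band toon shading: band N.L into shadow / halftone / highlight
# weights, with smoothstep used to blur the boundaries between the bands.

def smoothstep(edge0, edge1, x):
    """Hermite interpolation from 0 to 1 across [edge0, edge1]."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def toon_bands(n_dot_l, shadow_edge=0.25, half_edge=0.75, softness=0.1):
    """Return (shadow, halftone, highlight) weights for N.L in [0, 1].

    The edges and softness are example values, not MNPRX defaults.
    """
    to_half = smoothstep(shadow_edge - softness, shadow_edge + softness, n_dot_l)
    to_high = smoothstep(half_edge - softness, half_edge + softness, n_dot_l)
    shadow = 1.0 - to_half
    halftone = to_half * (1.0 - to_high)
    highlight = to_half * to_high          # weights always sum to 1
    return shadow, halftone, highlight

# A point at N.L = 0.5 falls entirely in the halftone band.
print(toon_bands(0.5))   # -> (0.0, 1.0, 0.0)
```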

ref
ref

Hence I chose the Frayed stylization and adjusted some of its attributes.

https://artineering.io/styles/frayed/#style-attributes

Canvas

There are many canvas textures to choose from. To prevent the shower-door effect, canvas advection can be enabled with VFX control in the SFX material settings. Since I used the minimum canvas scale, the shower-door effect was negligible even without canvas advection.

Shadows

Ambient Occlusion

Ambient Occlusion (AO) darkens the image in areas that are hard to reach for the ambient light due to the local shape of the geometry (e.g. concavities, crevices, holes). Note that this effect depends only on the geometry (and the viewpoint, to a lesser extent), and not on the lights present in the scene.

With the Watercolor and Frayed styles, the AO term modulates the pigment density, resulting in darker colors in occluded areas. In general it mainly affects the overall contrast of the scene.

Default lighting

NPR material settings

For this cake I just enabled highlight and specularity to make it look better.

Painterly shading

The light color and shade color can be set per material, so it is better to use separate materials for different parts. I tend to choose purple as the shade color for yellow objects; using black as the shade color can look a little boring.

Outlines

Manual setting

Outlines can be created manually by duplicating the mesh, making it slightly larger, and reversing its normals (single-sided).
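A rough maya.cmds sketch of this "inverted hull" idea is below. The mesh name, thickness and shader names are placeholders, and pushing the vertices along their normals via polyMoveVertex's local Z translation is one common way to make the shell larger.

```python
import maya.cmds as cmds

def make_outline_shell(mesh, thickness=0.02):
    """Build a simple inverted-hull outline for `mesh` (example helper)."""
    # Duplicate the render mesh to use as the outline shell.
    outline = cmds.duplicate(mesh, name=mesh + "_outline")[0]

    # Push every vertex outwards; the local Z of a vertex follows its normal.
    cmds.polyMoveVertex(outline, localTranslateZ=thickness)

    # Reverse the normals so only the inside of the enlarged shell is visible.
    cmds.polyNormal(outline, normalMode=0, constructionHistory=False)

    # Make the shell single-sided so its front faces are culled.
    shape = cmds.listRelatives(outline, shapes=True, fullPath=True)[0]
    cmds.setAttr(shape + ".doubleSided", 0)

    # Assign a flat black surface shader so the shell reads as a line.
    shader = cmds.shadingNode("surfaceShader", asShader=True,
                              name=mesh + "_outlineShader")
    cmds.setAttr(shader + ".outColor", 0, 0, 0, type="double3")
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=shader + "SG")
    cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader", force=True)
    cmds.sets(outline, edit=True, forceElement=sg)
    return outline

# Example usage on an assumed mesh name:
# make_outline_shell("characterBody_geo", thickness=0.03)
```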

They can also be created directly in the Maya Software renderer by assigning outlines under the Toon menu; the attributes can be adjusted to achieve different results.

https://download.autodesk.com/us/maya/2009help/index.html?url=Toon__Assign_Outline.htm,topicNumber=d0e555447

However, with the MNPRX plugin, outlines can be created automatically, restricted to certain areas, and darkened or lightened using the paint tools. It is better to paint the vertices before skinning: if the paint tools are used after skinning is complete, the vertex color node is retained in the history, which slows down real-time rendering. Keying the paint effects also slows rendering.

Hair simulation in Maya

For the character's hair animation, I tried the nHair system in Maya to automatically generate the movements and collisions. It can be used for many things, not just hair.

https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2019/ENU/Maya-CharEffEnvBuild/files/GUID-70A85F9C-9FC0-49FD-A535-76D379D2C9BC-htm.html

https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2019/ENU/Maya-CharEffEnvBuild/files/GUID-420D738B-B779-4D37-92F5-313F1B0454E9-htm.html#GUID-420D738B-B779-4D37-92F5-313F1B0454E9

https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2019/ENU/Maya-CharEffEnvBuild/files/GUID-39BDA542-9C26-48E3-B7A1-FC3983FFBECE-htm.html#GUID-39BDA542-9C26-48E3-B7A1-FC3983FFBECE

There are two types of curves here: input and output. Input curves simply define the original shape; output curves carry the dynamics.

First I rigged the hair and parented it to the head joint, then drew an EP curve along the skeleton.

Then delete the history and bind the skin in hierarchy mode. Paint the weights if necessary, and hide the mesh.

Make the curve dynamic under nHair; this creates the follicle and output curve. Keep the follicle and its corresponding output curve under the same hierarchy, then create the IK handle for the hair joint chain.

In the follicleShape, set Point Lock to Base.

To play the simulation, select nSolver > Interactive Playback.

To stabilize the dynamic curve's behavior, adjust the Bend Resistance value in the Dynamic Properties section of the hairSystemShape node in the Attribute Editor.

To make the hair collide with any mesh, just create a passive collider for that mesh under nCloth.

To keep the hair shape from changing too much, I also increased Start Curve Attract.
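A rough maya.cmds sketch of the wiring described in the steps above is below. The node names are only assumptions (rename them to match the scene), the spline IK handle driven by the output curve is my reading of the "create the IK" step, and makeCollideNCloth is, as far as I know, the MEL command behind the Create Passive Collider menu item.

```python
import maya.cmds as cmds
import maya.mel as mel

# Assumed node names created by "Make Selected Curve Dynamic".
output_curve = "hair_outputCurve"       # dynamic output curve (assumed name)
follicle     = "follicleShape1"
hair_system  = "hairSystemShape1"
root_jnt, tip_jnt = "hair_jnt1", "hair_jnt4"   # hair joint chain (assumed)

# Drive the skinned hair joints with a spline IK handle that follows the
# simulated output curve.
ik = cmds.ikHandle(startJoint=root_jnt, endEffector=tip_jnt,
                   solver="ikSplineSolver", curve=output_curve,
                   createCurve=False, parentCurve=False)[0]

# Pin the curve root to its follicle so the hair stays attached to the head.
cmds.setAttr(follicle + ".pointLock", 1)            # 1 = Base

# Stabilize the simulation and keep it near the groomed shape (example values).
cmds.setAttr(hair_system + ".bendResistance", 5)
cmds.setAttr(hair_system + ".startCurveAttract", 0.2)

# Let the hair collide with the body mesh (assumed name).
cmds.select("body_geo")
mel.eval("makeCollideNCloth")
```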

An older hair dynamics tutorial:

http://k9port.blogspot.com/2013/11/dynamic-hair-tutorial-part-1-setting-it.html

nHair system introduction:

https://www.linkedin.com/learning/maya-nhair/controlling-collisions?u=57077561

Scene 5 CG Design and Compositing

Design

Since our story belongs to the magic and suspense genre, and following the mood board we made earlier, I referenced some concept art from science-fiction games and roughly composited it with the original scenes. Because of the lighting problems in the original footage (strong hard light and outdoor light), I eventually decided to change the shooting room into a semi-open scene to weaken them.

Setting the scene on the ground would have been more restrictive: the style of the room would constrain the composited background more, and the shooting angle limits the field of view, so I finally decided to make the shooting room float in the air.

I tried HDRIs that I found on the Internet as the background, but the style was not ideal. Because specular reflections in the scene can be ignored, I decided to use photographs of buildings as the distant view.

Background Image

For the last shot, referring to the script, I designed it as the two characters walking towards a secret room.

Add a CG light source

Since the strong light source was not removed during shooting, the roto result was not satisfactory. After discussion, a teammate suggested that adding a light source would help. The choice of light source needs to make sense in this environment. It is better to use a static glowing CG prop, because a moving light source would affect the shadows in the original scene, and given the complexity of the original light, shadow and architecture, rendering shadows for a moving light source would take too long. At the same time, the stationary light source also needs to fit the layout of the scene so that it does not feel out of place.

The CG light source is used to cover artifacts that could not be solved with rotoscoping in both shot 8 and shot 9. When trying static lights, I found they could not be placed in sensible positions, so I finally chose to use a sci-fi robot.

Because rendering a proper shadow would take too long, I created a fake one in Nuke from the original robot sequence, using Transform nodes together with grading and blurring of the alpha channel, and keyframing at certain frames to match the robot's movement.
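A rough Nuke Python sketch of this fake-shadow chain is below. "RobotRender" and "Plate" are assumed node names for the robot CG and the live-action plate, and the skew, keyframe and mix values are only examples.

```python
import nuke

robot = nuke.toNode("RobotRender")   # assumed Read node of the robot CG
plate = nuke.toNode("Plate")         # assumed Read node of the live plate

# Kill the robot's color but keep its alpha: a black silhouette to use as the shadow.
silhouette = nuke.nodes.Grade(inputs=[robot], channels="rgb")
silhouette["white"].setValue(0)

# Skew/offset the silhouette onto the ground and keyframe it to follow the robot.
xform = nuke.nodes.Transform(inputs=[silhouette])
xform["skewX"].setValue(-0.4)
tr = xform["translate"]
tr.setAnimated()
tr.setValueAt(120, 1001, 0)   # x at frame 1001 (example keys)
tr.setValueAt(-35, 1001, 1)   # y at frame 1001
tr.setValueAt(160, 1030, 0)
tr.setValueAt(-35, 1030, 1)

# Soften the shadow edge.
soft = nuke.nodes.Blur(inputs=[xform], channels="rgba", size=15)

# Lay the soft black shape over the plate at reduced opacity.
shadow = nuke.nodes.Merge2(inputs=[plate, soft], operation="over", mix=0.6)
```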

CG robot light shadow

I also used a Flare node at the end to match the lighting effect from the CG light.

3D Tracking

The tracking information provided by others was unusable: all the points looked correct in the 2D viewport but made no sense in the 3D viewport. I tried many times, adding Grade nodes and adjusting the CameraTracker settings, but it did not help. However, the camera rotation path looked fine, so I finally placed cards and a camera by hand to match the scene.

shot 9 – 3D tracking

Animation and Rendering

The following CG assets were downloaded from Sketchfab:

Door: https://sketchfab.com/3d-models/rr-door-aed85da0529f419d89dd724cdd883fb5

Floating stones: https://sketchfab.com/3d-models/european-and-american-game-scencemagic-portal-31182bdc05dd40ddbc7f284297e8f225

Sci-fi camera: https://sketchfab.com/3d-models/sci-fi-camera-drone-3dbf07385d1b471a96dce6de6cba97d6

I imported the cards and camera from Nuke into Maya so that I could position the assets in the scene, and I animated them (or adjusted their existing animations) to match the movement in the shot.

I used a spherical light source with an image background to illuminate the close-up objects in Maya.

To improve rendering speed, I set meshes as blockers to mask out the parts of the CG that would be invisible after compositing. I also used area lights and blockers to create matching shadow effects for the indoor CG objects.

shot8
shot9

After testing Arnold GPU rendering against CPU rendering with the same settings, I found that although the GPU is several times faster, in this scene it produces fireflies. After some experimentation, enabling adaptive sampling with the max samples set to 10 turned out to be more efficient in practice; with the default max of 20, the GPU speed was almost the same as CPU rendering.
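For reference, these render settings can also be set by script. The attribute names below are the MtoA attributes on defaultArnoldRenderOptions as I remember them; they should be checked against the Render Settings UI of the installed MtoA version, and the threshold is just an example value.

```python
import maya.cmds as cmds

# Enable Arnold adaptive sampling and cap the max AA samples at 10, as described above.
opts = "defaultArnoldRenderOptions"
cmds.setAttr(opts + ".enableAdaptiveSampling", 1)   # turn adaptive sampling on
cmds.setAttr(opts + ".AASamplesMax", 10)            # max samples (default was 20 here)
cmds.setAttr(opts + ".AAAdaptiveThreshold", 0.05)   # noise threshold (example value)
```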

Compositing

Using the character roto data from my group mates, I started compositing shots 8 and 9.

Image background

Because the background is a still image, it works for shot 8 but needs to rotate as the camera rotates in shot 9. So I projected the mirrored image onto a cylinder and re-rendered it with a ScanlineRender node.
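A rough Nuke Python sketch of this setup is below. "BackgroundStill" (assumed to be the already-mirrored still) and "TrackedCamera" are assumed node names, and the cylinder dimensions are placeholders.

```python
import nuke

bg_image = nuke.toNode("BackgroundStill")   # assumed Read node of the mirrored still
camera   = nuke.toNode("TrackedCamera")     # assumed camera from the 3D track

# Texture a Cylinder with the still and make it large enough to surround the set.
cyl = nuke.nodes.Cylinder(inputs=[bg_image])
cyl["radius"].setValue(2000)    # placeholder size
cyl["height"].setValue(1500)    # placeholder size

scene = nuke.nodes.Scene(inputs=[cyl])

# Re-render the cylinder through the tracked camera so the background
# rotates correctly with the shot; inputs: 0 = bg, 1 = scene, 2 = camera.
render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)
render.setInput(2, camera)
```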

Roto on the archs

For shot 8 it was easy because the camera does not move, so I simply rotoed out the shape.

For shot 9 it was difficult to make the roto stick stably, so I tried rotoing on a single frame and projecting the shapes through the camera onto the cards, with the help of Shuffle nodes because the alpha channel cannot be rendered by ScanlineRender directly. However, it never fitted perfectly; there was always a shift, and no matter how I adjusted the cards it could not be solved. I tried Transform and CornerPin nodes to compensate, but that only made things more complicated.

In this case I only used the projections for certain frames, and for the frames before frame 105 I rotoed frame by frame. It was difficult to get a perfect result because of blurred pixels on the low-contrast edges.

Portal

The portal element we initially used had some problems matching the scene, so I replaced it with a more transparent one.

The new portal is a downloaded Doctor Strange-style element; I graded and transformed it to match the scene.

Merge

Because the CG assets did not match the arch perfectly, I needed to use RotoPaint to fix some flaws.

Character creation

Design

For this project I wanted to design a cute character, so I used a chibi body shape (two heads tall).

Color palette

For the color of this character, I wanted to keep the saturation and value (brightness) at a low level. I chose purple, pink and yellow, which give a peaceful, tender and innocent feeling.

Modelling and Texturing

Since I couldn't find suitable resources (most had incorrect body proportions), I modelled the character by referring to 2D three-view drawings.

The lighting under the default viewport settings looks terrible; the viewport needs to be set to use the scene lights and shadows to see the intended result. Depth map shadows also need to be enabled on the lights for the shadows to show up.
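A small maya.cmds sketch of these viewport and light settings is below; the light name is an assumption.

```python
import maya.cmds as cmds

# Show the scene lights and their shadows in the active viewport.
panel = cmds.getPanel(withFocus=True)
if cmds.getPanel(typeOf=panel) == "modelPanel":
    cmds.modelEditor(panel, edit=True, displayLights="all", shadows=True)

# Enable depth map shadows on a light so its shadows actually appear
# ("spotLightShape1" is an assumed light shape name).
cmds.setAttr("spotLightShape1.useDepthMapShadows", 1)
```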

I unwrapped the UVs for the face and painted it in Sai2 (a digital painting software).

Rigging and skinning

I used Mixamo to rig the character so that I could use Mixamo's animations directly. However, the uploaded character needs to be clean and free of strange shapes; because my character is very small, the result was not as expected and I had to repaint the weights in Maya to fix the problems.

Once skinning is complete, it is best to export the weights for future use, or keep a copy in another scene to copy them from.
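One way to do this is to export the skinCluster weights to an XML file with the deformerWeights command; the deformer name below is an assumption and the temp directory is just an example location.

```python
import maya.cmds as cmds

weights_dir = cmds.internalVar(userTmpDir=True)   # example export location

# Export the finished skin weights ("body_skinCluster" is an assumed name).
cmds.deformerWeights("body_weights.xml", export=True,
                     deformer="body_skinCluster", path=weights_dir)

# Later, after rebinding the same mesh to the same joints, import them back.
cmds.deformerWeights("body_weights.xml", im=True,
                     deformer="body_skinCluster", path=weights_dir)
```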

Blendshape and face animation

I used blend shapes here to create the facial animation; they can also be used in a game engine.

When making the eye animation, I grouped (Ctrl+G) the eyes, eyelashes and face to create the blend shape.

When making the mouth animation, I grouped the face, teeth and tongue to create the blend shapes.

Objects can be moved out of the group (or into another group) after the current morph deformation is finished.

However, the base mesh (the group and its hierarchy) is fixed for each blend shape.

When creating the blend shape, I set the origin to world, because local ignores transformations and I needed the translation information here.

The blend shape targets can be rebuilt, so the target meshes can be recreated in case they are needed again.

The deformation order should be pre-deformation (before skinning), because the mesh already has its skin bound.
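A minimal maya.cmds sketch of this setup is below, shown on single meshes for simplicity (the mesh and target names are assumptions); origin="world" keeps the translation information mentioned above, and frontOfChain=True inserts the blendShape before the skinCluster, which corresponds to the pre-deformation order.

```python
import maya.cmds as cmds

# Assumed names: a sculpted target mesh and the skinned base mesh.
target = "face_smile"
base = "face_geo"

blend = cmds.blendShape(target, base,
                        name="faceShapes",
                        origin="world",       # keep the targets' transforms
                        frontOfChain=True)[0]  # evaluate before the skinCluster

# Key the target weight (0..1) for the face animation; the weight attribute
# is aliased to the target's name.
cmds.setKeyframe(blend + ".face_smile", time=1, value=0)
cmds.setKeyframe(blend + ".face_smile", time=12, value=1)
```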

I drew some images with an alpha channel to create the 2D appearances here.

It is better to keep two meshes with the same number of vertices (such as the two eyelashes) separate when making blend shapes, because otherwise their targets sometimes get swapped.

To animate the eyes, I rigged a single joint to control both eyes; the joint should sit at the intersection of their perpendicular bisectors so that the eyes rotate within the correct region. I also limited the joint's translation.
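A small maya.cmds sketch of limiting the eye joint is below, with an assumed joint name and example limit values.

```python
import maya.cmds as cmds

eye_joint = "eyes_jnt"   # assumed joint controlling both eyes

# Limit rotation to roughly the range where the pupils stay inside the eyelids
# (example values).
cmds.transformLimits(eye_joint,
                     rotationX=(-15, 15), enableRotationX=(True, True),
                     rotationY=(-25, 25), enableRotationY=(True, True))

# Freeze translation entirely by limiting it to zero on all axes.
cmds.transformLimits(eye_joint,
                     translationX=(0, 0), enableTranslationX=(True, True),
                     translationY=(0, 0), enableTranslationY=(True, True),
                     translationZ=(0, 0), enableTranslationZ=(True, True))
```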

face animation test

Non-photorealistic rendering research

Shading

Blender Tutorial - Physically Based Rendering (PBR) and the Principled Node  - YouTube
PBR is photorealistic rendering

In traditional painting, there are six shaded regions.

Outlines are one of the most important features of cel painting, and shading is simplified to two types: halftone and shadow. Hence the boundary between light and shadow is another key feature of cel painting. Nowadays, highlights are often added in cel animation as special effects.

Animation Cel Painting by Christopher DeStefano - YouTube

In digital (modern) painting, artists combine different shading techniques to create diverse art styles.

Onmyoji: 2018 design drafts revealed, such a beautiful Miketsu, what a pity
Genshin Impact: Xiao arrives in version 1.3, and players worldwide are happy to pay for him

For non-photorealistic rendering, how the lighting reads is the most important feature, and it is inseparable from the topology of the 3D mesh.


Another thing worth noting is that 2D animation suggests omitted topology details (such as highlights on hair) through light and shadow, whereas the highlights rendered by a toon shader expose the real structure of the 3D topology. In real-time toon shading, artists often fix the highlights into the material by texturing. This solves the problem to some extent, but it looks strange when the lighting in the scene changes.

Painting media

Some details should be reduced or altered depending on strokes and techniques.

Watercolor

Joseph Zbukvic
https://www.bilibili.com/video/BV1C4411S78E


Watercolor gradients are mostly used in the creation of shadows.

Gouache (opaque watercolor)

Gouache is similar to watercolor in that it can be re-wetted and dries to a matte finish, and the paint can become infused into its paper support. It is similar to acrylic or oil paints in that it is normally used in an opaque painting style and it can form a superficial layer.

Watermedia Mystery: Really, What is Gouache? - Creative Catalyst Productions

Opaque watercolor looks thick (strong coverage), whereas watercolor is more transparent (fluid).

My Neighbor Totoro (1988). 50th Street Films, courtesy Everett Collection

Watercolor is more suitable for illustrations or still scenes (backgrounds), while opaque watercolor is more suitable for animated characters and foreground elements. The two are usually mixed, with watercolor used for the background and opaque watercolor for the details.


Alcohol-based markers

Alcohol markers are different because the ink is permanent. However, you can still layer the colors to create striking overlays. For example, gray overlays will dull colors down while yellow will brighten them up.

https://www.bilibili.com/video/BV1mJ411k7qT?from=search&seid=12035595433229581903

To me, the marker style is reminiscent of childhood, while traditional watercolors look more romantic.

Crayon

https://blendernpr.org/crayon-charcoal-bge-test-train-face-boy/

Cel shading (toon shading)

Cel shading, as a 3D animation and rendering technique, has been used regularly by Ghibli since the production of Princess Mononoke (1997), the first feature to benefit from the distribution deal, to achieve complex effects that would be impossible using traditional pen-and-ink methods. Those CG shots were blended seamlessly with the hand-drawn images in production.

Some paintings in other styles that usually appear as still frames can also be turned into animation, but the cost is high; for example, Loving Vincent, a fully painted feature film, took six years to produce.

Motion design and frame rate

I think one of the problems with non-photorealistic rendering is that the animation can look more fluid and realistic than it should, especially when it is used for limited animation. Fully animated films are animated at 24 frames per second, whereas limited animation uses 8-12 drawings per second, which limits the fluidity of the motion.

For viewers used to watching limited 2D animation, toon-shaded 3D animation can feel strange because the characters' movement is too realistic, which makes the motion of the non-photorealistic shadows look too realistic as well. The human eye is very sensitive, and such realistic movement reveals the underlying 3D topology.
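As a toy illustration of the difference, the snippet below quantizes the playback frame so that a 24 fps render only updates its pose every two frames, i.e. roughly 12 poses per second; in practice this kind of frame hold could be applied with a time warp or a stepped-key bake, which is not shown here.

```python
# Mimic "animating on twos/threes": map a playback frame to the frame whose
# pose should actually be shown.
def held_frame(frame, step=2):
    """Return the held frame for `frame` when updating every `step` frames."""
    return frame - (frame % step)

# A 24 fps shot held on twos effectively shows 12 drawings per second.
print([held_frame(f, 2) for f in range(1, 9)])   # [0, 2, 2, 4, 4, 6, 6, 8]
```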

2D hair fabric motion design
3D
3D