Convert RGB Image to Grayscale
A commonly used luma approximation (the rounded ITU-R BT.601 weights) is:
Y = 0.3*R + 0.59*G + 0.11*B
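The weighted sum above can be checked in plain Python (a minimal sketch; the function name is my own):

```python
def rgb_to_gray(r, g, b):
    """Approximate luma of an RGB triple (rounded BT.601 weights)."""
    return 0.3 * r + 0.59 * g + 0.11 * b

# The weights sum to 1.0, so pure white keeps its full intensity,
# and green dominates the perceived brightness.
print(rgb_to_gray(255, 255, 255))  # ~255.0
print(rgb_to_gray(0, 255, 0))      # ~150.45
```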
RGB & LAB
https://rawpedia.rawtherapee.com/RGB_and_Lab
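Since the linked page covers the RGB and Lab relationship, here is a pure-Python sketch of the standard sRGB (D65) to CIELAB conversion; the constants are the commonly published sRGB/CIE values, and the function names are my own:

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    # 1. Undo the sRGB gamma curve (linearize)
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)

    # 2. Linear RGB -> CIE XYZ (sRGB/D65 matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # 3. XYZ -> Lab, normalized by the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

L runs from 0 (black) to 100 (white); a and b sit near 0 for neutral grays, which is why Lab is convenient for adjusting lightness without shifting hue.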
Color Temperature

rubber hose animation
1. Continue to explore non-photorealistic rendering for stylized VFX
2. Generate real-time visual effects in a stylised 3D environment based on the tonal cycle of music
In music visualization, the most common design is motion graphics, which tends to be abstract. For this reason, I want to explore a visual-effect design that can interact with characters while music plays, matching the tone and atmosphere based on the tonal cycle of the music (with the help of facial capture or motion capture).
Collab full movie
I designed the CG environment for the last scene and composited the last two shots. The roto on two characters was done by my teammates and then fixed by me.
Overall, the collaboration project did not run smoothly, but it was very valuable for learning. I will summarize here some of the problems we encountered during the collaboration.
Through communication with another team, I found that our team did not insist on everyone using the same lossless video codec during the collaboration for the final edit. Another group used Apple ProRes 422 for intermediate outputs and H.264 only for the final movie.
At the same time, some team members used the wrong frame rate and resolution when setting up their project files, resulting in a loss of detail quality.
Most importantly, we did not use the high-quality original footage for compositing, which caused a serious drop in video quality and made rotoscoping, tracking and compositing more difficult.
Another serious mistake was not shooting against a green screen, which made rotoscoping difficult in post-production. We also did not consider the lighting of the scenes thoroughly when shooting, which caused avoidable problems that had to be fixed in post-production.
For example, we did not block a hard light that needed to be removed from the scene, which produced unwanted light effects and greatly affected the characters' faces.
Although we prepared storyboards, a script and mood boards in pre-production, and scouted the various shooting locations, the actual shoot did not go exactly as planned. Looking back at the preliminary preparations, we seemed to have done a lot but in fact had not made thorough arrangements. We also failed to develop a coherent style during post-production: the styles of the footage produced by each compositor were independent of each other.
At the beginning, we assigned everyone's tasks but did not clearly divide the work, and we did not review each stage, so handovers did not go smoothly. For example, one member of our group produced the wrong clean plate, and because her progress was never checked, the compositor had to redo the work; in another case, the work done by the roto artists and the 3D tracker did not meet the required standard, delaying progress. It is inconvenient to fix roto data handed over by others, because we cannot see the roto keyframes in our own Nuke scripts; I had to check every frame to find which keyframe caused the problem, and it might have taken less time to roto from scratch. I therefore think the roto artists should check the quality of their work and fix any issues before uploading it; the coordinator, however, considered that part of the compositor's job.
In my understanding, good collaboration means a smooth workflow, but everyone in our group just exchanged messages in the chat and no one checked quality at each stage. This led us to waste a lot of time on repetitive work; especially as the deadline approached, it took time to deal with work that should have been completed at the beginning. It also led to conflicts and disputes within the team.
In the end we reached agreement and finished the work. Disagreements are inevitable in teamwork, because everyone strives for better results in their own way. Learning how to be more effective and professional through this collab project is truly valuable, now and in the future.
In this personal project I spent far too long stuck in the pre-design phase, which made me realize my lack of design knowledge. In the future, I will be more inclined to complete a project by collaborating with someone more experienced in pre-production.
However, I also learned a lot that I did not know before, such as color and painting. I also tried several features in Maya, such as dynamic curves, blend shapes, rigging and skinning. It is important to understand the connections between Maya's nodes and keep them organized. I tried some new features using the example scenes, and I will explore more next term.
For non-photorealistic rendering, I learned about the behavior of lighting and shading and some methods for achieving non-photorealistic rendering in Maya. But this is just the beginning: next I will study more of the fundamentals of non-photorealistic rendering, and compare examples of creating it in different 3D software and engines, such as Unreal Engine, Blender and Houdini.
Based on previous research, I decided to use a more cartoonish and hand-painted style for the project. I want to keep the roughness of the canvas at a low level but visible, and include outlines.
I will not use too many irregular watercolor gradient effects, especially on characters. I will use three types of shading: highlight, halftone and shadow, but I will blur the boundaries between them. The result will look more like alcohol-based markers than traditional watercolor.
Hence I chose to use the Frayed stylization and amended some attributes here.
https://artineering.io/styles/frayed/#style-attributes
There are many canvas textures to choose from. To prevent the shower-door effect, we can enable canvas advection with VFX control in the SFX material settings. Since I used the minimum canvas scale, the shower-door effect could be ignored without canvas advection.
Ambient Occlusion (AO) darkens the image in areas that are hard to reach for the ambient light due to the local shape of the geometry (e.g. concavities, crevices, holes). Note that this effect depends only on the geometry (and the viewpoint, to a lesser extent), and not on the lights present in the scene.
With the Watercolor and Frayed styles, the AO term modulates the pigment density, resulting in darker colors in occluded areas. Overall, it mainly affects the contrast of the whole scene.
For this cake I just enabled highlight and specularity to make it look better.
The light color and shade color can be changed per material, so it is better to use separate materials for different parts. I tend to choose purple as the shade color for yellow objects; choosing black as the shade color can look a little boring.
Outlines can be created by using a slightly larger duplicate mesh with reversed normals (single-sided).
They can also be created directly in the Maya Software renderer by assigning outlines under the Toon menu; attributes can be adjusted to achieve different results.
However, with the MNPRX plugin, outlines can be created automatically, customised for certain areas, and darkened or lightened using the paint tools. It is better to paint vertices before skinning: once skinning is completed, using the paint tools leaves a vertex color node in the history, resulting in slower real-time rendering. Keying paint effects will also slow rendering.
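The inverted-hull method above can be sketched with `maya.cmds` (this only runs inside Maya; the offset value and shader name are my own assumptions, and scaling from the pivot is an approximation of pushing vertices along their normals):

```python
import maya.cmds as cmds

def make_outline(mesh, thickness=1.03):
    """Inverted-hull outline: duplicate, enlarge, reverse normals."""
    hull = cmds.duplicate(mesh, name=mesh + '_outline')[0]
    cmds.scale(thickness, thickness, thickness, hull)  # push the shell outward
    cmds.polyNormal(hull, normalMode=0)                # reverse the normals

    # Single-sided so only the back faces (the outline) render
    shape = cmds.listRelatives(hull, shapes=True)[0]
    cmds.setAttr(shape + '.doubleSided', 0)

    # A flat black material for the outline shell (name is illustrative)
    shader = cmds.shadingNode('lambert', asShader=True, name='outlineMtl')
    cmds.setAttr(shader + '.color', 0, 0, 0, type='double3')
    cmds.select(hull)
    cmds.hyperShade(assign=shader)
    return hull
```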
For the character's hair animation, I used Maya's nHair system to automatically generate the movement and collisions. It can be used in many applications, not just hair.
There are two types of curves here: input and output. Input curves simply define the original shape; output curves carry the dynamics.
First I rigged the hair and parented it to the head joint, then drew an EP curve along the skeleton.
Next, delete the history and bind the skin in hierarchy mode. Paint the weights if necessary, then hide the mesh.
Make the curve dynamic under nHair; the follicles and output curves will be created. Put each follicle and its corresponding output curve under the same hierarchy, then create the IK bone.
In the follicleShape, set Point Lock to Base.
To play the simulation, select nSolver > Interactive Playback.
To stabilize the dynamic curve’s behavior, adjust the Bend Resistance attribute value in the Dynamic Properties section of the hairSystemShape of the Attribute Editor.
To make any mesh collide with the hair, just create a passive collider under nCloth.
To keep the hair shape from changing too much, I set Start Curve Attract here to help.
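The setup steps above can be sketched as a script (this only runs inside Maya; the node names such as hairCurve1, follicle1 and bodyMesh are illustrative assumptions, since the actual names depend on the scene):

```python
import maya.cmds as cmds
import maya.mel as mel

# 1. Select the EP curve drawn along the hair skeleton and make it dynamic
#    (equivalent to nHair > Make Selected Curves Dynamic).
cmds.select('hairCurve1')
mel.eval('makeCurvesDynamic 2 { "1", "0", "1", "1", "0" }')

# 2. Lock the follicle at its base so the hair root stays attached.
cmds.setAttr('follicle1.pointLock', 1)  # 1 = Base

# 3. Stabilize the motion and preserve the groomed shape.
cmds.setAttr('hairSystemShape1.bendResistance', 5.0)     # stiffer curve
cmds.setAttr('hairSystemShape1.startCurveAttract', 0.2)  # pull toward the start curve

# 4. Let the hair collide with the body mesh
#    (equivalent to nCloth > Create Passive Collider).
cmds.select('bodyMesh')
mel.eval('makeCollideNCloth')
```

The attribute values here are starting points; playing the simulation interactively (nSolver > Interactive Playback, as noted above) is the quickest way to tune them.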
Old-version hair simulation dynamics tutorial:
http://k9port.blogspot.com/2013/11/dynamic-hair-tutorial-part-1-setting-it.html
nHair system introduction:
https://www.linkedin.com/learning/maya-nhair/controlling-collisions?u=57077561