Game character modeling tutorial: using Maya and XGen for character production

Hello everyone!

 

Today I'm sharing a breakdown of character rendering techniques.

The breakdown comes from 3D artist Coss Mousikides.

Introduction

I am originally from Athens, Greece, and have been a character artist in the game industry since 2013. The first step of my career was graduating in Digital Film and 3D Animation Technology from Staffordshire University in the UK.

I have mainly worked in AAA studios such as EA DICE and Microsoft's Rare (Battlefield V, Star Wars Battlefront). I have also freelanced for many years on other titles that are currently undisclosed.

 

Project Objectives

Nowadays it is very common to find portfolio pieces that prove an artist's technical ability. I find that plain T-posed meshes and close-up portrait shots lack storytelling, and I wanted to see what I could do to push against this trend.

I wanted to show not only the characters themselves, but also the environment they inhabit. Pose, expression, composition, and camera angle are good storytelling tools, and I find these elements are often ignored by 3D artists.

There were also technical areas I personally wanted to test, such as hair generation and liquid/cloth simulation. These are particularly challenging because each is a discipline of its own and requires trial and error to look convincing.

Character making

For characters, I usually try to work from large to small and from simple to complex.

For this piece, for example, I first sculpted both characters in a neutral, expressionless T-pose. All the work was done in ZBrush, starting from base meshes created in Maya.

 

Once I was satisfied with the proportions and basic forms, I sculpted the facial expressions and poses accordingly.

Since no scanning was involved, I relied heavily on photos of similar real-life situations (the grief of losing a loved one, intense sadness, etc.). This is especially important for sculpting facial expressions, because we are hard-wired to read the emotional nuances of human faces: minor adjustments (such as the tightness of the jaw muscles or the angle of the eyebrows) can make or break believability.

Hair

All the hair was created with XGen in Maya, which is an amazing tool.

I personally prefer to place the guides manually rather than painting them, because it allows for finer control. The woman's full hair consists of 5 subsystems (called descriptions in XGen).
Each description has its own stack of modifiers to clump, randomize length, and add low- and high-frequency noise to the strands. Almost every description of medium and long hair has these modifiers, just with different settings.

 

Each guide was shaped by hand. The right-click menu's "Copy/Paste Guide Shape" function was used on successive guides to avoid shaping each new spline from scratch.

I find that I often rely on black-and-white clump masks to control the number and position of clumps. A noise expression plugged into the clump parameter works well here.
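For readers unfamiliar with what a noise-driven clump mask actually does, here is a minimal standalone Python sketch of the idea (this is not XGen code, and the function names are hypothetical): a smooth pseudo-random field over UV space, thresholded into a black-and-white mask that switches clumps on and off.

```python
import math

def hash01(ix: int, iy: int) -> float:
    """Deterministic pseudo-random value in [0, 1) for a lattice point."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(u: float, v: float, freq: float = 8.0) -> float:
    """Smoothly interpolated lattice noise over UV space, in [0, 1]."""
    x, y = u * freq, v * freq
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # smoothstep fade for smooth interpolation between lattice values
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    a, b = hash01(ix, iy), hash01(ix + 1, iy)
    c, d = hash01(ix, iy + 1), hash01(ix + 1, iy + 1)
    top = a + (b - a) * sx
    bot = c + (d - c) * sx
    return top + (bot - top) * sy

def clump_mask(u: float, v: float, threshold: float = 0.5) -> float:
    """1.0 where clumps are allowed, 0.0 where they are suppressed."""
    return 1.0 if value_noise(u, v) > threshold else 0.0
```

Lowering the threshold lets more clumps through; raising the noise frequency breaks the hair into smaller, busier groups — the same two dials you end up turning in the actual expression.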

A trick I had to figure out the hard way, but which might save other XGen users some time: to control stray hairs more effectively, it is better to put each stray-hair direction in a separate description instead of trying to paint a region mask. I found that region masks cannot provide detailed enough control in this case.

For the eyebrows, I first textured the head and then worked out where the brows should sit to fit the expression. Finally, I placed the guides one by one in XGen, using the previously projected brow texture as a reference. To make each eyebrow hair appear exactly where it is placed, the "Generate Primitives" option in XGen's "Primitives" tab is set to "At Guide Locations".

For shading the child's hair, I used XGen's default hairPhysicalShader with slight color adjustments.

To give the female character's hair more variation, I used Anders Langlands' alHair shader and controlled the variation of hair melanin through ramps. Arvid Schneider provides a great tutorial on how to achieve this effect (see below).

Skin

The skin setup is relatively basic: mainly a color map and a displacement map exported from ZBrush. Anders Langlands' alSurface shader is used for all skin parts, with alRemapColor and multiplyDivide nodes applied to the color texture to derive the epidermal, dermal, and subdermal layers.

In that order, they feed into depth levels 1, 2, and 3 of the shader's SSS section. A remapValue node adjusts the contrast of the roughness map so it reads better under the lighting. To further break up the skin's specular highlights, a tileable skin normal map is added with its Repeat UV set to 23.
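The roughness-contrast step boils down to the linear remap that Maya's remapValue node performs. Here is that math sketched in plain Python (assuming a linear ramp and a clamped input range; the numbers are illustrative, not the author's actual settings):

```python
def remap_value(x: float, input_min: float = 0.0, input_max: float = 1.0,
                output_min: float = 0.0, output_max: float = 1.0) -> float:
    """Linear remap, like Maya's remapValue node with a linear ramp:
    maps [input_min, input_max] onto [output_min, output_max]."""
    t = (x - input_min) / (input_max - input_min)
    t = max(0.0, min(1.0, t))  # clamp values outside the input range
    return output_min + t * (output_max - output_min)

# Increase roughness contrast by tightening the input range:
# values below 0.2 become fully smooth, values above 0.8 fully rough.
rough_in = [0.1, 0.3, 0.5, 0.7, 0.9]
rough_out = [remap_value(r, 0.2, 0.8) for r in rough_in]
```

Tightening `input_min`/`input_max` around the map's midtones is what pushes the highlights apart and makes the specular breakup read under light.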

 

SSS tends to wash out many of the subtle color differences in pores and freckles. To counter this, the original color map is desaturated, high-passed, and blended back over the color map in soft-light mode. This is done in Photoshop.
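Those Photoshop steps can be sketched numerically. The soft-light formula below is the classic textbook formulation (Photoshop's exact curve may differ slightly), and the pixel values are made up for illustration:

```python
import math

def desaturate(rgb):
    """Luminance-weighted grayscale (Rec. 709 weights)."""
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (y, y, y)

def high_pass(value: float, blurred: float, mid: float = 0.5) -> float:
    """Photoshop-style high pass: original minus blur, re-centered on mid gray."""
    return min(1.0, max(0.0, value - blurred + mid))

def soft_light(base: float, blend: float) -> float:
    """Classic soft-light blend of one channel; 0.5 gray leaves the base unchanged."""
    if blend <= 0.5:
        return base - (1.0 - 2.0 * blend) * base * (1.0 - base)
    return base + (2.0 * blend - 1.0) * (math.sqrt(base) - base)

# One pixel of the color map, its desaturated copy, and a blurred value:
pixel = (0.55, 0.35, 0.30)
gray = desaturate(pixel)[0]
detail = high_pass(gray, blurred=0.50)          # recovered pore/freckle detail
boosted = tuple(soft_light(c, detail) for c in pixel)
```

Because mid-gray is neutral in soft-light mode, the blend only darkens or brightens where the high pass found local detail — exactly the pore and freckle contrast that SSS washed out.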

 

To enhance the translucency of the lips and ears, I used a variant of the final render. This can be done in Photoshop: render the final image in 32-bit, then convert it down to 16-bit using the "Local Adaptation" method at its default settings.

This produces an over-contrasted image with vivid colors, which can later be selectively masked onto the final render to enhance the translucency effect. Incidentally, the same approach can be used to make skin details "pop".

 

Clothing

All the clothing bases were made in Marvelous Designer, then thickened in Maya and detailed in ZBrush.

Initially, each garment was constructed on the character's T-pose mesh.

The final posed mesh was then imported into MD to simulate the cloth into its final position.

This is done via File -> Import -> OBJ -> Load as Morph Target.

MD pattern screen

Import as morph target settings

Lighting

I have the unorthodox habit of initially testing lighting in the Maya viewport to get a feel for the scene.

With AA and shadows enabled, the viewport quality is good enough to gauge which type of lighting suits a specific camera angle.

I rendered the final shots with Arnold (MtoA 1.3). The lighting setup is a mix of HDRI, area lights, and aiPhotometricLight. In addition, a text mesh with Arnold's mesh_light enabled simulates the red neon sign seen next to the female character.

The HDRI provides the night-time base lighting for the rest of the setup and serves as the background. A small area light fakes the background color bouncing onto the sidewalk and lights the walls. The most interesting part was trying different .ies profiles to achieve a believable streetlight effect, used as the key light on the characters.

An .ies profile is a file used by Arnold's aiPhotometricLight to simulate real-world light distribution. The photometric data is accurate because it is based on the lamp manufacturer's actual specifications, and it is an easy way to get realistic spread and falloff.
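Conceptually, an .ies profile is a table of candela (intensity) values sampled at angles, which the renderer interpolates for each light direction. Below is a toy Python sketch of that idea — the candela table is invented for illustration, this is not a real IES parser, and a real profile also carries horizontal angles and metadata:

```python
import bisect

# Toy photometric data: candela at vertical angles (degrees), loosely
# shaped like a downward-pointing street light (not from a real file).
ANGLES  = [0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0]
CANDELA = [1200.0, 1150.0, 900.0, 500.0, 150.0, 20.0, 0.0]

def intensity(angle_deg: float) -> float:
    """Linearly interpolate candela for a vertical angle, clamped to the table."""
    a = max(ANGLES[0], min(ANGLES[-1], angle_deg))
    i = bisect.bisect_right(ANGLES, a)
    if i >= len(ANGLES):
        return CANDELA[-1]
    a0, a1 = ANGLES[i - 1], ANGLES[i]
    c0, c1 = CANDELA[i - 1], CANDELA[i]
    t = (a - a0) / (a1 - a0)
    return c0 + t * (c1 - c0)
```

The sharp drop between 45 and 75 degrees is what gives a streetlight its characteristic pool of light with a soft edge — the falloff the author gets for free by loading a manufacturer's profile instead of hand-tuning a spotlight cone.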

There are many .ies databases online, but I got mine from here. The official MtoA documentation provides some .ies examples and shows how to use them in a scene.

Rendering

The main reason for choosing Arnold over other renderers for this project is that, in production, its output seems more photoreal and its native integration with Maya is stable.

In the end, I think it comes down to personal preference for the visual output and ease of use; these vary greatly between artists and depend on the style each artist pursues.

Origin blog.csdn.net/shooopw/article/details/108664968