1.15 Learning Unity game development from 0--game UI

In the last chapter we left one task unfinished: letting the mouse control the crosshair. A crosshair is essentially an image that is always shown on the screen. Of course, you could build it with a 3D object: when we explained rendering concepts earlier, we mentioned that the screen corresponds to the camera's near clip plane, so as long as you keep a 3D object on the near clip plane, it will always appear to be on the screen.

But that means placing objects inside a possibly tiny region (the near clip plane distance is often 0.1 or smaller), which is very inconvenient, and 3D objects themselves have no special support for what UI needs, so you would be reinventing the wheel. To make building in-game UI convenient, Unity provides a built-in solution: UGUI.

UGUI

UGUI is really just a name. In essence, Unity provides a set of special components whose rendering path is not Mesh Renderer but one specialized for content drawn on the screen. If you have done traditional desktop UI development, frameworks like Qt, WPF, and MFC offer many controls such as Button and Label to build interfaces quickly; naturally, UGUI provides similar things.

Now let's create some UI content. Just as with ordinary 3D objects, right-click in the Hierarchy and choose UI -> Image, and suddenly several things appear:

Let's go through them one by one:

  • Canvas object:

I call it an object here because this GameObject contains a component that is also called Canvas. To distinguish the two, we call the GameObject in the Hierarchy the Canvas object. Let's look at the components this object is automatically created with:

The RectTransform component actually inherits from the Transform component. Unity replaces the original Transform panel display with RectTransform's. You could say Unity applies some hidden conventions by default; after all, editing raw 3D coordinate data for UI is meaningless here.

The parameters in RectTransform are similar to the anchor, alignment, pixel position, and size of traditional desktop UI. The reason they can't be changed here is that there is no parent with a Canvas component above the Canvas object, so the Canvas itself represents the full screen. If you resize the Game window, the values change accordingly. On a child element they can be edited, as we will see later.

The Canvas component plays a role similar to Mesh Renderer: it carries the rendering of the UI content of all child elements under the object. To render 3D objects we must supply things like a Mesh and a Material; here Unity hides those details, and we only specify the Render Mode, the display target to render to, and so on. For now we only need to pay attention to Render Mode. The default is Screen Space - Overlay, which in plain terms means Unity uses its own hidden logic to render your UI content directly onto the screen, independent of the 3D scene. What that logic is, and how to use the other Render Modes, we will cover in detail in the advanced tutorial; for now just know that this component carries the UI display.

Canvas Scaler and Graphic Raycaster can be ignored for now; we'll cover them when we need them.

  • EventSystem: this object holds the logic the UI system uses to handle input events. It has two components, but we don't need to study them now; just know that the UI needs this object to respond to clicks and other operations.

  • Image

Finally, the Image: the element we will use to draw the crosshair this time.

You can see that the values of its RectTransform component are editable; we can change them to adjust the position, size, and layout.

The Canvas Renderer component tells the Canvas component in the parent that there is a UI element here that needs rendering.

The Image component provides the content to render. You can intuitively see that Source Image is where we assign the image to display. Color is a tint overlaid on the image; the default is white, which is why even before we assign an image, the element shows up as a white block. Let's ignore the other parameters for now.

Of the components created by default, Canvas and EventSystem are actually prerequisites for displaying an Image. Unity checks whether these two exist in your scene and creates them if they don't. If you now create another Image inside the Canvas object, only the new Image appears; no extra Canvas or EventSystem is created.

Now let's look at what appears in the scene:

A huge white block has appeared in our Scene, and a white block is also shown in the lower-left corner of the Game window; this is our Image being displayed.

So how do we edit it? Scroll the mouse wheel to zoom the Scene view out:

You can see the area the UI occupies (the white wireframe) and where our white Image sits, while our Cube and Wall are now too far away to see clearly. This is a default behavior of Unity: when the Render Mode is Screen Space - Overlay, a huge "screen" is generated at the coordinate origin to carry the UI content. This only affects the Scene window; the Game window still honestly displays the UI on the screen.

Knowing this, we can see that editing UI is no different from editing 3D scene objects: we can also drag the axis arrows to move our Image. Of course, the 3D view uses perspective projection; since we want to do flat layout work, there is a 2D button in the toolbar at the top of the Scene window which switches to 2D mode when pressed and makes adjusting UI much easier.

Making the crosshair image

Of course, what we need is a crosshair, not a white block, so we need a crosshair image; just search for one:

http://static.fotor.com.cn/assets/stickers/freelancer_ls_20180125_26/ba9b50fb-1efd-4854-928b-0b40ae26e36f_medium_thumb.jpg

This link is a jpg image; save it and put it anywhere under the project's Assets folder.

I renamed it to aim. Select it, then specify the purpose of this image in the Inspector panel:

Select Sprite (2D and UI); this is the type required by the Image component's Source Image. Don't forget to click Apply to make the change take effect. Afterwards you can see that the image preview correctly shows a transparent background.

Then the next step is to assign this image to the Image component by dragging and dropping:

If the drag-and-drop assignment fails, check whether the previous step was done correctly; if the type is wrong, the assignment is not allowed.

After the assignment succeeds, you can see our crosshair image displayed in the Game window on the left.

If you find it not conspicuous enough, try another image, or change the Color parameter of the Image component to red so the tint is overlaid on it and it stands out more.
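The same tint can also be set from code. Here is a minimal sketch (my own illustration, not one of the tutorial's scripts), assuming it is attached to the same GameObject as the Image:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative only: tints the Image on this GameObject red at startup.
public class CrosshairTint : MonoBehaviour
{
    private void Start()
    {
        // The Color property is multiplied with the sprite's pixels, so white
        // leaves the sprite unchanged while red tints every pixel toward red.
        GetComponent<Image>().color = Color.red;
    }
}
```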

Dynamically modifying the position of a UI element

As mentioned above, we only need to modify Pos X or Pos Y in the RectTransform to move the crosshair. What we want is to adjust the crosshair position dynamically at runtime, following the mouse.

Since the adjustment is driven by logic, we naturally create a new component, place it on the Image, and adjust it through code. I quickly had GPT write one for me:

using UnityEngine;

public class AimController : MonoBehaviour
{
    private RectTransform rectTransform;

    private void Start()
    {
        rectTransform = GetComponent<RectTransform>();
    }

    private void Update()
    {
        rectTransform.anchoredPosition = Input.mousePosition;
    }
}

What this means: although RectTransform inherits from Transform, we don't use position directly, presumably because that is a coordinate in 3D space, not a coordinate in our UI system. Instead we use anchoredPosition, which is a Vector2 and happens to match our screen X and Y, so we assign the mouse coordinates to it directly.

Run it and see?

Strange: why is there a fixed offset between the crosshair and the mouse? Let's look at the description of anchoredPosition:

https://docs.unity3d.com/ScriptReference/RectTransform-anchoredPosition.html

The position of the pivot of this RectTransform relative to the anchor reference point.

Translated: this coordinate is the position of this component's pivot relative to its anchor point.

In plain terms:

The pivot is the so-called center point. Whether moving or rotating, the Image is always computed relative to this point. Since the image itself has a shape and size, you can think of the pivot as the adjustable center of that shape.

Anchors sound mysterious, but they are really a way of laying out relative to the parent element. Clicking this control lets you intuitively choose among several layout presets:

For detailed definitions, you can read the official documentation:

https://docs.unity3d.com/Packages/[email protected]/manual/UIBasicLayout.html

As you can see, the default layout is centered relative to the parent, but Input.mousePosition uses the lower-left corner as its coordinate origin. So we are effectively applying the mouse's offset from the lower-left corner starting at the center of the screen, which keeps the mouse always half a screen away from the crosshair.

So we can simply select the lower-left alignment to change the Anchor:
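The same anchor change can also be made from code. As a hedged sketch (my own illustration, not one of the tutorial's scripts), assuming it sits on the Image, the values below mirror the lower-left preset:

```csharp
using UnityEngine;

// Illustrative only: anchors this RectTransform to the parent's lower-left
// corner so anchoredPosition shares Input.mousePosition's origin.
public class AnchorToBottomLeft : MonoBehaviour
{
    private void Awake()
    {
        RectTransform rt = GetComponent<RectTransform>();
        rt.anchorMin = Vector2.zero; // lower-left of the parent
        rt.anchorMax = Vector2.zero; // same point, so the anchor is a single corner
        rt.pivot = new Vector2(0.5f, 0.5f); // keep the pivot at the image center
    }
}
```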

After the modification, run the game again and see whether the crosshair now follows the mouse.

Firing bullets

Well, now that we have the crosshair, we only need to fire the bullet. If we aim with the crosshair and fire, the crosshair should mark where the shot is expected to land: the crosshair covers the object being aimed at. Put plainly, the extension of the line from your eye through the crosshair should intersect the target.

The position of the eyes is actually the position of the camera, which is easy to understand.

But the crosshair is on the screen? No problem: as we said before, the screen is the near clip plane, and any point on the screen can be converted to a 3D-space coordinate.

This common algorithm is encapsulated by Unity for us:

https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html

We need to pass in the screen coordinates x and y, but what is the z? It is the distance from the camera (a point on the screen actually corresponds to countless points in the 3D world, distinguished by their distance from the camera). Here we use the Near value we saw on the Camera component before.
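To make the role of z concrete, here is a small sketch (my own illustration): the same pixel with two different z values maps to two different world points, both lying on the ray from the camera through that pixel:

```csharp
using UnityEngine;

// Illustrative only: shows how the z component of ScreenToWorldPoint's input
// picks a point at a given distance along the same view ray.
public class ScreenPointDemo : MonoBehaviour
{
    public Camera cam; // assign the Main Camera in the Inspector

    private void Start()
    {
        // The center pixel of the screen, at two different depths.
        float cx = Screen.width / 2f;
        float cy = Screen.height / 2f;
        Vector3 nearPoint = cam.ScreenToWorldPoint(new Vector3(cx, cy, cam.nearClipPlane));
        Vector3 farPoint = cam.ScreenToWorldPoint(new Vector3(cx, cy, 10f));
        // Both points lie on one ray from the camera; only the distance differs.
        Debug.Log($"near: {nearPoint}, far: {farPoint}");
    }
}
```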

And the Camera is the Main Camera we have been using, so let's modify the script:

In AimController we compute the direction from the camera to the crosshair and pass it to our earlier fire-control code. When we need to reference other functional modules, besides passing instances directly as in traditional code design, if the class is a MonoBehaviour we can expose it as a public member and assign it by drag-and-drop in the editor (though this isn't very programmer-friendly).

using UnityEngine;

public class AimController : MonoBehaviour
{
    public Camera mainCamera;
    public FireController fireController;

    private RectTransform rectTransform;

    private void Start()
    {
        rectTransform = GetComponent<RectTransform>();
    }

    private void Update()
    {
        rectTransform.anchoredPosition = Input.mousePosition;
        // Get the 3D-space position of the current mouse position on the
        // screen (which is also the crosshair position)
        Vector3 aimWorldPosition = mainCamera.ScreenToWorldPoint(new Vector3(Input.mousePosition.x,
            Input.mousePosition.y, mainCamera.nearClipPlane));
        // Subtracting the two coordinates gives the direction vector
        Vector3 fireDirection = aimWorldPosition - mainCamera.transform.position;
        // Normalize it and pass it to the fire control script
        fireController.SetDirection(fireDirection.normalized);
    }
}

Note the new public member added so we can assign, in the editor, whose SetDirection gets called: drag the FireController object we created earlier onto the AimController on the Image. Importantly, the drag works not because the object is named FireController, but because that GameObject has a component of the FireController class. This also means we can assign not just the GameObject but the component itself directly, which also works and is the more recommended approach.

Ok, the next step is to rewrite FireController. Before, we simply created the bullet without providing a firing direction. Now that we receive the direction from the aiming code, we set the bullet's flight direction when it is created:

using UnityEngine;

public class FireController : MonoBehaviour
{
    private bool isMouseDown = false;
    private float lastFireTime = 0f;
    private Vector3 fireDirection;
    public float fireInterval = 0.1f;
    public AddVelocity bullet;

    void Update()
    {
        if (Input.GetButton("Fire1"))
        {
            if (!isMouseDown)
            {
                isMouseDown = true;
                lastFireTime = Time.time;
                Fire();
            }
            else if (Time.time - lastFireTime > fireInterval)
            {
                lastFireTime = Time.time;
                Fire();
            }
        }
        else
        {
            isMouseDown = false;
        }
    }

    void Fire()
    {
        // Per-shot firing logic goes here

        // Create a new bullet by cloning the bullet template each time
        AddVelocity newBullet = Object.Instantiate(bullet);
        newBullet.SetDirection(fireDirection);
    }

    public void SetDirection(Vector3 direction)
    {
        fireDirection = direction;
    }
}

We added the SetDirection implementation, storing the value in a member variable; we then need to pass this direction to each newly created bullet, so the bullet also needs an interface. Previously we created the bullet as a plain GameObject; here, to save effort, we directly use the only component class we wrote for it, AddVelocity. Note that the bullet member changed from GameObject to AddVelocity: Instantiate can take the AddVelocity type directly, and its return value is AddVelocity. This is equivalent to instantiating the GameObject and then calling GetComponent to obtain the AddVelocity component, but more elegant and efficient (each dynamic GetComponent call is relatively slow).
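A hedged side-by-side sketch of the two equivalent styles (my own illustration; the names match the tutorial's classes):

```csharp
using UnityEngine;

// Illustrative only: compares the two ways of cloning the bullet template.
public class InstantiateStyles : MonoBehaviour
{
    public AddVelocity bulletTemplate;

    private void Demo()
    {
        // Style 1: instantiate the GameObject, then look up the component.
        GameObject go = Instantiate(bulletTemplate.gameObject);
        AddVelocity a = go.GetComponent<AddVelocity>();

        // Style 2: instantiate via the component reference; Unity clones the
        // whole GameObject and returns the matching component in one call.
        AddVelocity b = Instantiate(bulletTemplate);
    }
}
```

Both clone the entire GameObject with all its components; style 2 just skips the extra lookup.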

The last thing to do is implement SetDirection in AddVelocity to give the bullet a flight direction:

using UnityEngine;

public class AddVelocity : MonoBehaviour
{
    public float speed; // initial speed magnitude
    public float lifeTime = 5.0f;
    private float lifeStartTime;
    private Vector3 fireDirection;

    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb != null)
        {
            rb.velocity = fireDirection * speed;
        }

        lifeStartTime = Time.time;
    }

    void Update()
    {
        if (Time.time - lifeStartTime > lifeTime)
        {
            Destroy(gameObject);
        }
    }

    public void SetDirection(Vector3 direction)
    {
        fireDirection = direction;
    }
}

Besides storing the direction, we also changed the initial velocity from a Vector3 to a float speed, because we no longer need to specify the velocity direction manually; instead we adjust only the magnitude of the initial velocity, a coefficient to multiply by, so the final initial velocity is fireDirection * speed.

Because the member changed, we also need to set the Bullet prefab's speed to a more comfortable value, such as 10.

Note that because we changed the type of the bullet member in FireController, the earlier drag-and-drop assignment was bound to the GameObject type, so after the code change that assignment is invalidated and Unity clears it for us. Don't forget to drag the Bullet prefab onto FireController's bullet member again.

Okay, let's run the game and see?

Good grief: the crosshair and the bullet don't line up, and the bullet's direction hasn't changed. So where is the problem? Remember what we said in the last chapter: a newly created GameObject sits at the origin by default (a Prefab carries its own position information, which is a separate matter). So no matter how we change the direction, the shot actually starts from the origin, not from our eyes at all. We need to change the launch starting point, and we currently have two options:

  1. Look up the camera's current position and hard-code it into the bullet-creation code. That clearly defeats future maintainability: the moment the camera moves, it breaks.
  2. Pass in the camera's position information.

Obviously the second is better, so naturally we add a new firing point to FireController:

using UnityEngine;

public class FireController : MonoBehaviour
{
    private bool isMouseDown = false;
    private float lastFireTime = 0f;
    private Vector3 fireDirection;
    public float fireInterval = 0.1f;
    public AddVelocity bullet;
    public Transform fireBeginPosition;

    void Update()
    {
        if (Input.GetButton("Fire1"))
        {
            if (!isMouseDown)
            {
                isMouseDown = true;
                lastFireTime = Time.time;
                Fire();
            }
            else if (Time.time - lastFireTime > fireInterval)
            {
                lastFireTime = Time.time;
                Fire();
            }
        }
        else
        {
            isMouseDown = false;
        }
    }

    void Fire()
    {
        // Per-shot firing logic goes here

        // Create a new bullet by cloning the bullet template each time
        AddVelocity newBullet = Object.Instantiate(bullet);
        newBullet.transform.position = fireBeginPosition.position;
        newBullet.SetDirection(fireDirection);
    }

    public void SetDirection(Vector3 direction)
    {
        fireDirection = direction;
    }
}

We added a fireBeginPosition member, of type Transform. As we have said all along, the Transform component stores an object's position information; we don't need to assign a camera when all we want is the position.

Once we have it, we assign newBullet.transform.position = fireBeginPosition.position; when the bullet is created, to change its starting position.

After changing the code, drag the Main Camera onto FireController's fireBeginPosition member in the editor.

Then let's run the game and see?


It looks less like a bullet being fired and more like a trebuchet, but at least the aiming works.

thinking questions

  1. As mentioned above, firing the bullet from the eyes best matches the intuition of hitting where you aim, but in reality a bullet comes out of the barrel, and the barrel is definitely not at the eye position. Can we make the bullet fire from the barrel, and if so, what should happen when the eye can aim at the target but the barrel is blocked by a wall?
  2. We eventually fired the bullet in the expected direction, but it was a bit slow. If you increase the bullet's speed parameter so that it flies straighter, you'll find the bullet passes through the wall. Why is there no collision or bounce? How do you solve it?

next chapter

In this chapter we covered in detail how to get started with UI production in Unity, finished the whole aim-and-shoot loop along the way, and straightened out how multiple pieces of logic in the scene communicate and pass information.

In the next chapter we will add character movement. Without involving more complex topics such as skeletal animation, we will implement a simple third-person camera and character movement control, working toward a third-person shooter demo.


Origin blog.csdn.net/z175269158/article/details/130194010