ParallelFor jobs

When scheduling jobs, there can only be one job doing one task. In a game, however, it is common to want to perform the same operation on many objects. There is a separate job type, IJobParallelFor, to handle this.

Note: "ParallelFor" is the collective term in Unity for any struct that implements the IJobParallelFor interface.

A ParallelFor job uses a NativeArray of data as its data source. ParallelFor jobs run across multiple cores: there is one job per core, each processing part of the workload. IJobParallelFor behaves like IJob, but instead of a single Execute method, it invokes Execute once per item in the data source. The Execute method takes an integer parameter. This index is used to access and operate on a single element of the data source within the job's implementation.

Example ParallelFor job definition:

struct IncrementByDeltaTimeJob: IJobParallelFor
{
    public NativeArray<float> values;
    public float deltaTime;

    public void Execute (int index)
    {
        float temp = values[index];
        temp += deltaTime;
        values[index] = temp;
    }
}

Scheduling ParallelFor jobs

When scheduling a ParallelFor job, you must specify the length of the NativeArray data source that you are splitting. If there are several NativeArray fields in the struct, the Unity C# Job System cannot know which one you mean to use as the data source. The length also tells the C# Job System how many Execute methods to expect.
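As a sketch of the call described above, using the IncrementByDeltaTimeJob struct defined earlier (this depends on the Unity.Jobs and Unity.Collections packages; the array length of 1000 and the batch count of 64 are arbitrary illustrative values):

```csharp
// Sketch: scheduling a ParallelFor job. The first argument to Schedule
// is the length of the data source (how many Execute calls to make);
// the second is the batch count. Values here are illustrative.
NativeArray<float> values = new NativeArray<float>(1000, Allocator.TempJob);

var job = new IncrementByDeltaTimeJob
{
    values = values,
    deltaTime = Time.deltaTime
};

JobHandle handle = job.Schedule(values.Length, 64);
handle.Complete();
values.Dispose();
```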


Behind the scenes, ParallelFor job scheduling is more complicated. When scheduling a ParallelFor job, the C# Job System divides the work into batches to distribute between cores. Each batch contains a subset of the Execute calls. The C# Job System then schedules up to one job per CPU core in Unity's native job system, and passes each of those native jobs some batches to complete.

A ParallelFor job dividing batches between cores


When a native job completes its batches before the others, it steals remaining batches from the other native jobs. It only steals half of a native job's remaining batches, to ensure cache locality.

To optimize the process, you need to specify a batch count. The batch count controls how many jobs you get, and how fine-grained the redistribution of work between threads is. A low batch count, such as 1, distributes the work more evenly between threads. It does come with some overhead, so sometimes it is better to increase the batch count. Starting at 1 and increasing the batch count until the performance gains become negligible is a valid strategy.
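The tuning strategy above amounts to re-running the same job with increasing batch counts and profiling each run. A minimal sketch, again using the IncrementByDeltaTimeJob struct defined earlier (the deltaTime of 0.016f is a placeholder, and the timing/profiling code is omitted):

```csharp
// Sketch: the same job scheduled with different batch counts.
NativeArray<float> values = new NativeArray<float>(10000, Allocator.TempJob);
var job = new IncrementByDeltaTimeJob { values = values, deltaTime = 0.016f };

// Batch count 1: work is redistributed per element (most even, most overhead).
job.Schedule(values.Length, 1).Complete();

// Batch count 32: each batch covers 32 elements (less scheduling overhead).
job.Schedule(values.Length, 32).Complete();

values.Dispose();
```

In practice you would time each configuration (for example with Unity's Profiler) and keep the smallest batch count beyond which there is no measurable gain.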

Example of scheduling a ParallelFor job

Job code:

// Job adding two floating point values together
public struct MyParallelJob : IJobParallelFor
{
    [ReadOnly]
    public NativeArray<float> a;
    [ReadOnly]
    public NativeArray<float> b;
    public NativeArray<float> result;

    public void Execute(int i)
    {
        result[i] = a[i] + b[i];
    }
}

Main thread code:

NativeArray<float> a = new NativeArray<float>(2, Allocator.TempJob);
NativeArray<float> b = new NativeArray<float>(2, Allocator.TempJob);
NativeArray<float> result = new NativeArray<float>(2, Allocator.TempJob);

a[0] = 1.1f;
b[0] = 2.2f;
a[1] = 3.3f;
b[1] = 4.4f;

MyParallelJob jobData = new MyParallelJob();
jobData.a = a;  
jobData.b = b;
jobData.result = result;

// Schedule the job with one Execute per index in the results array and only 1 item per processing batch
JobHandle handle = jobData.Schedule(result.Length, 1);

// Wait for the job to complete
handle.Complete();

// Free the memory allocated by the arrays
a.Dispose();
b.Dispose();
result.Dispose();

Origin www.cnblogs.com/longsl/p/11314543.html