Understanding some interfaces of DeepXDE

The first is the PDE class.

Args:
        geometry: Instance of ``Geometry``.
        pde: A global PDE or a list of PDEs. ``None`` if no global PDE.
        bcs: A boundary condition or a list of boundary conditions. Use ``[]`` if no
            boundary condition.
        num_domain (int): The number of training points sampled inside the domain.
        num_boundary (int): The number of training points sampled on the boundary.
        train_distribution (string): The distribution to sample training points. One of
            the following: "uniform" (equispaced grid), "pseudo" (pseudorandom), "LHS"
            (Latin hypercube sampling), "Halton" (Halton sequence), "Hammersley"
            (Hammersley sequence), or "Sobol" (Sobol sequence).
        anchors: A Numpy array of training points, in addition to the `num_domain` and
            `num_boundary` sampled points.
        exclusions: A Numpy array of points to be excluded for training.
        solution: The reference solution.
        num_test: The number of points sampled inside the domain for testing PDE loss.
            The testing points for BCs/ICs are the same set of points used for training.
            If ``None``, then the training points will be used for testing.
        auxiliary_var_function: A function that inputs `train_x` or `test_x` and outputs
            auxiliary variables.
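For intuition, "Halton" and the other low-discrepancy options spread points more evenly than plain pseudorandom sampling. A minimal pure-Python sketch of a 1D Halton sequence (the van der Corput sequence in base 2), written independently of DeepXDE's implementation:

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence: reverse the base-b digits of n."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

# The first few terms fill [0, 1) evenly: 0.5, 0.25, 0.75, 0.125, ...
points = [van_der_corput(i) for i in range(1, 8)]
```

Successive terms keep subdividing the largest remaining gap, which is why such sequences cover the domain more uniformly than `random.random()` for the same number of points.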

    Warning:
        The testing points include points inside the domain and points on the boundary,
        and they may not have the same density, and thus the entire testing points may
        not be uniformly distributed. As a result, if you have a reference solution
        (`solution`) and would like to compute a metric such as

        .. code-block:: python

            Model.compile(metrics=["l2 relative error"])

        then the metric may not be very accurate. To better compute a metric, you can
        sample the points manually, and then use ``Model.predict()`` to predict the
        solution on these points and compute the metric:

        .. code-block:: python

            x = geom.uniform_points(num, boundary=True)
            y_true = ...
            y_pred = model.predict(x)
            error = dde.metrics.l2_relative_error(y_true, y_pred)
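As an aside, the metric itself is simple; a pure-Python sketch of what `dde.metrics.l2_relative_error` computes (assuming flat lists of numbers, no NumPy):

```python
import math

def l2_relative_error(y_true, y_pred):
    """||y_true - y_pred||_2 / ||y_true||_2 over flat lists of numbers."""
    diff = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)))
    norm = math.sqrt(sum(t ** 2 for t in y_true))
    return diff / norm

# Example: 10% error on every component gives a 0.1 relative error.
err = l2_relative_error([1.0, 2.0], [1.1, 2.2])
```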

    Attributes:
        train_x_all: A Numpy array of points for PDE training. `train_x_all` is
            unordered, and does not have duplication. If there is PDE, then
            `train_x_all` is used as the training points of PDE.
        train_x_bc: A Numpy array of the training points for BCs. `train_x_bc` is
            constructed from `train_x_all` at the first step of training, by default it
            won't be updated when `train_x_all` changes. To update `train_x_bc`, set it
            to `None` and call `bc_points`, and then update the loss function by
            ``model.compile()``.
        num_bcs (list): `num_bcs[i]` is the number of points for `bcs[i]`.
        train_x: A Numpy array of the points fed into the network for training.
            `train_x` is ordered from BC points (`train_x_bc`) to PDE points
            (`train_x_all`), and may have duplicate points.
        train_aux_vars: Auxiliary variables that associate with `train_x`.
        test_x: A Numpy array of the points fed into the network for testing, ordered
            from BCs to PDE. The BC points are exactly the same points in `train_x_bc`.
        test_aux_vars: Auxiliary variables that associate with `test_x`.
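The ordering described above (BC points first, then all PDE points, with possible duplicates) can be mimicked in plain Python; `num_bcs` then gives the offsets used to slice out each BC's rows. The names below mirror the attributes but are stand-ins, not DeepXDE internals:

```python
# Hypothetical stand-ins for the attributes described above.
train_x_all = [[0.0], [0.25], [0.5], [0.75], [1.0]]  # unordered, no duplicates
train_x_bc = [[0.0], [1.0]]                          # boundary points for one BC
num_bcs = [len(train_x_bc)]

# train_x is BC points followed by all PDE points -> duplicates are possible.
train_x = train_x_bc + train_x_all

# Slice for bcs[0]: rows [0, num_bcs[0]).
bc0_rows = train_x[0:num_bcs[0]]
```

Note that the boundary points `[0.0]` and `[1.0]` appear twice in `train_x`, once in the BC block and once among the PDE points.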


We will also use the add_anchors method later.
(figure omitted) As can be seen there, add_anchors only keeps adding data inside the domain; the BC data does not change.

Similarly:
(figure omitted) Replacing the points with replace_with_anchors also leaves the BC data unchanged.
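This behavior can be mimicked in a few lines of plain Python (a toy model of the bookkeeping, not DeepXDE's code; the class and attribute names are illustrative):

```python
class ToyPDEData:
    """Toy mimic of the anchor bookkeeping described above."""

    def __init__(self, domain_points, bc_points):
        self.train_x_all = list(domain_points)
        self.train_x_bc = list(bc_points)

    def add_anchors(self, anchors):
        # Only the domain/PDE points grow; BC points are untouched.
        self.train_x_all += list(anchors)

    def replace_with_anchors(self, anchors):
        # Domain points are replaced wholesale; BC points are untouched.
        self.train_x_all = list(anchors)

data = ToyPDEData([0.3, 0.6], bc_points=[0.0, 1.0])
data.add_anchors([0.5])           # train_x_all -> [0.3, 0.6, 0.5]
data.replace_with_anchors([0.1])  # train_x_all -> [0.1]
```

In both operations `train_x_bc` stays exactly as it was, which is the point the figures were making.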

(figure omitted) Judging from the comments shown there, the data is transformed before being used.

Next, the Model.compile interface:

Configure the training model.

Args:
    optimizer: String name of an optimizer, or a backend optimizer class instance.
    lr (float): The learning rate. For L-BFGS, use
        ``dde.optimizers.set_LBFGS_options`` to set the hyperparameters.
    loss: If the same loss is used for all errors, then `loss` is a string name of a
        loss function or a loss function. If different errors use different losses,
        then `loss` is a list whose size is equal to the number of errors.
    metrics: List of metrics to be evaluated by the model during training.
    decay (tuple): Name and parameters of the decay applied to the initial learning
        rate. One of the following options:

        - Backend TensorFlow 1.x:
            - `inverse_time_decay`: ("inverse time", decay_steps, decay_rate)
            - `cosine_decay`: ("cosine", decay_steps, alpha)
        - Backend TensorFlow 2.x:
            - `InverseTimeDecay`: ("inverse time", decay_steps, decay_rate)
            - `CosineDecay`: ("cosine", decay_steps, alpha)
        - Backend PyTorch:
            - `StepLR`: ("step", step_size, gamma)
        - Backend PaddlePaddle:
            - `InverseTimeDecay`: ("inverse time", gamma)
    loss_weights: A list specifying scalar coefficients (Python floats) to weight the
        loss contributions. The loss value that will be minimized by the model will
        then be the weighted sum of all individual losses, weighted by the
        `loss_weights` coefficients.
    external_trainable_variables: A trainable ``dde.Variable`` object or a list of
        trainable ``dde.Variable`` objects. The unknown parameters in the physical
        systems that need to be recovered. If the backend is tensorflow.compat.v1,
        `external_trainable_variables` is ignored, and all trainable ``dde.Variable``
        objects are automatically collected.
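The ("inverse time", decay_steps, decay_rate) option corresponds to the standard inverse-time schedule; a pure-Python sketch of the usual formula (not copied from DeepXDE's source):

```python
def inverse_time_decay(lr0, step, decay_steps, decay_rate):
    """Learning rate at a given step: lr0 / (1 + decay_rate * step / decay_steps)."""
    return lr0 / (1 + decay_rate * step / decay_steps)

# With decay_rate=1, the learning rate halves after decay_steps steps.
lr = inverse_time_decay(1e-3, step=1000, decay_steps=1000, decay_rate=1.0)
```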

A useful callback subclass is PDEPointResampler, which resamples the training points.
(figure omitted) Its first argument is the resampling period; the following arguments control whether the PDE points and the BC points are resampled.
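A hedged sketch of how such a period-based callback works (a toy mimic, not DeepXDE's implementation; the class and method names are illustrative):

```python
import random

class ToyPDEPointResampler:
    """Resample PDE training points every `period` epochs (toy version)."""

    def __init__(self, period=100):
        self.period = period
        self.epochs_since_last = 0

    def on_epoch_end(self, data):
        self.epochs_since_last += 1
        if self.epochs_since_last < self.period:
            return False
        self.epochs_since_last = 0
        # Resample only the interior (PDE) points; BC points stay fixed.
        data["pde_points"] = [random.random() for _ in data["pde_points"]]
        return True

data = {"pde_points": [0.1, 0.2, 0.3], "bc_points": [0.0, 1.0]}
cb = ToyPDEPointResampler(period=2)
resampled = [cb.on_epoch_end(data) for _ in range(4)]  # [False, True, False, True]
```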


About deepxde's bc
bc description

First-type boundary conditions give the value of the unknown function on the boundary; this is DirichletBC.
(figure omitted)

Second-type boundary conditions give the derivative of the unknown function along the outward normal of the boundary; this is NeumannBC.
(figure omitted)

A boundary can also be expressed as a set of points with known values, via PointSetBC:
(figure omitted)

Compare the output (that associates with `points`) with `values` (target data).
If more than one component is provided via a list, the resulting loss will be the additive loss of the provided components.

Args:
    points: An array of points where the corresponding target values are known and used for training.
    values: A scalar or a 2D-array of values that gives the exact solution of the problem.
    component: Integer or a list of integers. The output components satisfying this BC.
        List of integers only supported for the backend PyTorch.
    batch_size: The number of points per minibatch, or `None` to return all points.
        This is only supported for the backend PyTorch.
    shuffle: Randomize the order on each pass through the data when batching.
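The error a point-set BC contributes is simply output minus target on the selected component; a toy sketch with plain lists (DeepXDE works on backend tensors, but the arithmetic is the same):

```python
def point_set_error(outputs, values, component=0):
    """Residual of a point-set BC: outputs[:, component] - values."""
    return [row[component] - v for row, v in zip(outputs, values)]

# Network outputs at 3 known points; targets for component 0.
outputs = [[1.0, 9.0], [2.5, 9.0], [4.0, 9.0]]
values = [1.0, 2.0, 3.0]
residual = point_set_error(outputs, values)  # [0.0, 0.5, 1.0]
```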


Several useful issues
https://github.com/lululxvi/deepxde/issues/161
Is it possible to use DeepXDE to define complex geometries? For example, instead of using the Geometry module, could I read in a set of points that represent the surface of an airfoil?

Currently DeepXDE cannot use a set of points to build geometry.
1. One way to define complex geometries is to use CSG.
2. You can also define your geometry class by inheriting the base class Geometry.
3. Geometry is mainly used to generate training points. You could just define a "fake" geometry, such as a cube, and not actually use it. Specifically, set num_domain=0 and num_boundary=0, build all the points manually, and pass them in via the `anchors` argument of https://deepxde.readthedocs.io/en/latest/modules/deepxde.data.html#deepxde.data.pde.PDE.


https://github.com/lululxvi/deepxde/issues/64
Can we add training points at a specific location or direction on the domain instead of adding them uniformly or randomly across the domain?
For example:

In the case of a square-shaped domain, could you take training points only on the straight line y = x, or only inside a circular region within the square geometry?
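This is exactly what the `anchors` argument is for: build the special points by hand and pass them in. A pure-Python sketch (in practice you would convert the result to a NumPy array before passing it to PDE; the function names are illustrative):

```python
def line_points(n):
    """n equispaced points on the line y = x inside the unit square."""
    return [(i / (n - 1), i / (n - 1)) for i in range(n)]

def disk_points(cx, cy, r, grid):
    """Grid points of the unit square that fall inside a given circle."""
    pts = []
    for i in range(grid):
        for j in range(grid):
            x, y = i / (grid - 1), j / (grid - 1)
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                pts.append((x, y))
    return pts

# Anchor points concentrated on y = x plus a small disk around the center.
anchors = line_points(5) + disk_points(0.5, 0.5, 0.2, 11)
```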

Origin blog.csdn.net/pjm616/article/details/131014692