
Paddle 2.0beta New API List

XiaoguangHu edited this page Sep 9, 2020 · 2 revisions

The table below lists the APIs added in Paddle 2.0-beta compared with Paddle 1.8.

New API in Paddle 2.0-beta | Description | PR
paddle.set_default_dtype(d) New: sets the default data type used when creating Tensors and Parameters #26006
paddle.get_default_dtype() New: returns the default data type used when creating Tensors and Parameters #26006
paddle.numel(x, name=None) New: returns the number of elements in a Tensor #26562
paddle.to_tensor(data, dtype=None, place=None, stop_gradient=True) 1. API renamed from to_variable to to_tensor
2. Input data may be a scalar, list, tuple, Tensor, or ComplexTensor
3. Return type changed from core.VarBase to paddle.Tensor
4. New place parameter to select the device: CPU, GPU, or pinned memory
5. New stop_gradient parameter to control whether gradients are computed; by default gradients are not computed
6. New dtype parameter to specify the Tensor type; for non-numpy floating-point input the default comes from paddle.get_default_dtype
#26357
paddle.chunk(x, chunks, dim=0) New chunk API; returns the sub-Tensors the input is split into #26314
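The chunk semantics can be sketched with numpy (paddle itself is not assumed to be installed here); numpy.array_split splits an array into a given number of pieces along one axis, though the exact uneven-split behavior may differ from paddle's:

```python
import numpy as np

# Hedged numpy sketch of paddle.chunk semantics: split x into `chunks`
# sub-arrays along the given dimension.
def chunk(x, chunks, dim=0):
    return np.array_split(x, chunks, axis=dim)

x = np.arange(12).reshape(3, 4)
parts = chunk(x, 2, dim=1)  # two sub-arrays of shape (3, 2)
```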
paddle.masked_select(x, mask, name=None) New masked_select op; selects elements of the input Tensor according to the boolean mask Tensor and returns a 1-D Tensor #26374
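The masked_select behavior corresponds to plain boolean indexing, which can be sketched in numpy (a sketch of the semantics, not paddle's implementation):

```python
import numpy as np

# Hedged numpy sketch of paddle.masked_select semantics: pick the elements
# of x where the boolean mask is True; boolean indexing returns a 1-D array.
def masked_select(x, mask):
    return x[mask]

x = np.array([[1, 2], [3, 4]])
mask = np.array([[True, False], [False, True]])
out = masked_select(x, mask)  # 1-D result: [1, 4]
```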
paddle.manual_seed(seed) New paddle.manual_seed(SEED) API; sets the global seed used by random-number operations in Paddle, for reproducibility. Supports CPU and GPU devices. #26495 #26013 #26786
paddle.bernoulli(x, name=None) New bernoulli API; creates a 0/1 binary random Tensor drawn from a Bernoulli distribution #26511
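The Bernoulli sampling semantics can be sketched in numpy (a hedged sketch, not paddle's implementation): each element of the input is a probability, and the output holds an independent 0/1 draw per element.

```python
import numpy as np

# Hedged numpy sketch of paddle.bernoulli semantics: given per-element
# probabilities p in [0, 1], draw an independent 0/1 sample for each.
def bernoulli(p, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # rng.random() is uniform on [0, 1), so p=0 always yields 0 and p=1 always 1
    return (rng.random(p.shape) < p).astype(p.dtype)

sample = bernoulli(np.array([0.0, 1.0, 0.5]))
```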
paddle.normal(mean=0.0, std=1.0, shape=None, name=None) 1. API renamed from gaussian_random to normal; creates a Tensor drawn from a normal distribution
2. mean and std may be specified per element
3. The output Tensor's shape can be inferred from mean, std, or shape
#26367
paddle.std(x, axis=None, unbiased=True, keepdim=False, name=None) New: computes the standard deviation of x along axis #26446
paddle.var(x, axis=None, keepdim=False, unbiased=True, name=None) New: computes the variance of x along axis #26446
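The `unbiased` flag of both APIs maps onto numpy's `ddof` parameter; a quick numpy sketch of the two conventions:

```python
import numpy as np

# unbiased=True divides the sum of squared deviations by N-1 (Bessel's
# correction, numpy's ddof=1); unbiased=False divides by N (ddof=0).
x = np.array([1.0, 2.0, 3.0, 4.0])
var_unbiased = x.var(ddof=1)   # sum((x - mean)^2) / (N - 1)
var_biased = x.var(ddof=0)     # sum((x - mean)^2) / N
std_unbiased = x.std(ddof=1)   # sqrt of the unbiased variance
```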
paddle.isinf(x, name=None) New paddle.tensor.isinf API; returns a boolean Tensor indicating whether each element of the input is inf #26344
paddle.isnan(x, name=None) New paddle.tensor.isnan API; returns a boolean Tensor indicating whether each element of the input is NaN #26344
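The same elementwise tests exist in numpy, which illustrates the expected boolean output:

```python
import numpy as np

# Elementwise boolean tests for infinities and NaNs, mirroring the
# isinf/isnan semantics described above.
x = np.array([1.0, float('inf'), float('nan'), -float('inf')])
inf_mask = np.isinf(x)   # True for +inf and -inf
nan_mask = np.isnan(x)   # True only for NaN
```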
paddle.sort(x, axis=-1, descending=False, name=None) New sort API; returns only the sorted values, without the corresponding index information #25514
paddle.topk(x, k, axis=None, largest=True, sorted=True, name=None) Adds the sorted and largest attributes on top of the fluid version #26494
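The largest/sorted semantics of topk can be sketched in numpy (a hedged 1-D sketch, not paddle's implementation): argpartition selects the k extreme entries, and a final sort orders them when sorted=True.

```python
import numpy as np

# Hedged numpy sketch of topk semantics on a 1-D array.
def topk(x, k, largest=True, sort_result=True):
    # argpartition places the k largest (or smallest) entries at one end
    idx = np.argpartition(x, -k)[-k:] if largest else np.argpartition(x, k)[:k]
    if sort_result:
        order = np.argsort(x[idx])
        if largest:
            order = order[::-1]  # descending for the largest entries
        idx = idx[order]
    return x[idx], idx

values, indices = topk(np.array([1, 9, 3, 7]), k=2)  # values [9, 7], indices [1, 3]
```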
paddle.meshgrid(*tensors, name=None) Signature changed from meshgrid(input, name=None) to meshgrid(*args, **kwargs); lists are supported as input. #25319
paddle.tril(x, diagonal=0, name=None) Signature changed from tril(input, diagonal=0, name=None) to tril(x, diagonal=0, name=None) #25529
paddle.triu(x, diagonal=0, name=None) Signature changed from triu(input, diagonal=0, name=None) to triu(x, diagonal=0, name=None) #25529
paddle.bmm(x, y, name=None) Documentation updated #25529
paddle.cholesky(x, upper=False, name=None) Adds an error message for singular-matrix decomposition; adds the name parameter #25860
paddle.inverse(x, name=None) Adds an error message for singular-matrix decomposition; input renamed to x #25860
paddle.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCL', name=None)
paddle.nn.Conv1d.forward(x)
New Conv1d for convolution over 1-D sequence features. #26350
paddle.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCL', name=None)
paddle.nn.ConvTranspose1d.forward(x, output_size=None)
New ConvTranspose1d for transposed convolution over 1-D sequence features. #26356
paddle.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCL', name=None)
paddle.nn.MaxPool1d.forward(x)
1. New API named paddle.nn.MaxPool1d;
2. A Python class implementing 1-D max pooling
#26331
paddle.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCHW', name=None)
paddle.nn.MaxPool2d.forward(x)
1. New API named paddle.nn.MaxPool2d, split out from the old pool2d API;
2. Implements 2-D max pooling;
3. Compared with the old pool2d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding;
4. Adds the option to return the max-pooling indices, via the return_indices parameter
#26331
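The return_indices behavior can be sketched in one dimension with numpy (a hedged sketch under simplifying assumptions: stride equals kernel_size and the input length divides evenly): each window contributes its maximum and the flat index of that maximum, which an unpooling step could later use.

```python
import numpy as np

# Hedged 1-D sketch of max pooling with return_indices.
def max_pool1d_with_indices(x, kernel_size):
    n = len(x) // kernel_size
    windows = x[:n * kernel_size].reshape(n, kernel_size)
    local = windows.argmax(axis=1)                 # argmax within each window
    indices = local + np.arange(n) * kernel_size   # convert to flat input indices
    return windows.max(axis=1), indices

vals, idx = max_pool1d_with_indices(np.array([1, 5, 2, 8, 3, 3]), 2)
# vals -> [5, 8, 3], idx -> [1, 3, 4]
```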
paddle.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCDHW', name=None)
paddle.nn.MaxPool3d.forward(x)
1. New API named paddle.nn.MaxPool3d, split out from the old pool3d API;
2. Implements 3-D max pooling;
3. Compared with the old pool3d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding;
4. Adds the option to return the max-pooling indices, via the return_indices parameter
#26331
paddle.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCL', name=None)
paddle.nn.AvgPool1d.forward(x)
1. New API named paddle.nn.AvgPool1d;
2. A Python class implementing 1-D average pooling
#26331
paddle.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCHW', name=None)
paddle.nn.AvgPool2d.forward(x)
1. New API named paddle.nn.AvgPool2d, split out from the old pool2d API;
2. Implements 2-D average pooling;
3. Compared with the old pool2d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding; renames exclusive to count_include_pad, whose meaning is the opposite of exclusive
#26331
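The count_include_pad flag only matters for windows that overlap the padding; a minimal sketch of the two averaging conventions (a hedged illustration, not paddle's implementation):

```python
# Hedged sketch of count_include_pad: with padding, each window's average
# divides either by the full kernel size (count_include_pad=True) or only
# by the number of real, non-padded elements (False).
def avg_window(window_values, n_real, kernel_size, count_include_pad):
    denom = kernel_size if count_include_pad else n_real
    return sum(window_values) / denom

# An edge window sees one padded zero and two real values, 3 and 5:
with_pad = avg_window([0, 3, 5], n_real=2, kernel_size=3, count_include_pad=True)     # 8/3
without_pad = avg_window([0, 3, 5], n_real=2, kernel_size=3, count_include_pad=False) # 8/2 = 4.0
```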
paddle.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCDHW', name=None)
paddle.nn.AvgPool3d.forward(x)
1. New API named paddle.nn.AvgPool3d, split out from the old pool3d API;
2. Implements 3-D average pooling;
3. Compared with the old pool3d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding; renames exclusive to count_include_pad, whose meaning is the opposite of exclusive
#26331
paddle.nn.AdaptiveMaxPool1d(output_size, return_indices=False, data_format='NCL', name=None)
paddle.nn.AdaptiveMaxPool1d.forward(x)
New paddle.nn.AdaptiveMaxPool1d API; returns the adaptive 1-D max pooling of the input. #26483
paddle.nn.AdaptiveMaxPool2d(output_size, return_indices=False, data_format='NCHW', name=None)
paddle.nn.AdaptiveMaxPool2d.forward(x)
New paddle.nn.AdaptiveMaxPool2d API; returns the adaptive 2-D max pooling of the input. #26483
paddle.nn.AdaptiveMaxPool3d(output_size, return_indices=False, data_format='NCDHW', name=None)
paddle.nn.AdaptiveMaxPool3d.forward(x)
New paddle.nn.AdaptiveMaxPool3d API; returns the adaptive 3-D max pooling of the input. #26483
paddle.nn.AdaptiveAvgPool1d(output_size, data_format='NCL', name=None)
paddle.nn.AdaptiveAvgPool1d.forward(x)
New paddle.nn.AdaptiveAvgPool1d API; returns the adaptive 1-D average pooling of the input. #26331
paddle.nn.AdaptiveAvgPool2d(output_size, data_format='NCHW', name=None)
paddle.nn.AdaptiveAvgPool2d.forward(x)
New paddle.nn.AdaptiveAvgPool2d API; returns the adaptive 2-D average pooling of the input. #26369
paddle.nn.AdaptiveAvgPool3d(output_size, data_format='NCDHW', name=None)
paddle.nn.AdaptiveAvgPool3d.forward(x)
New paddle.nn.AdaptiveAvgPool3d API; returns the adaptive 3-D average pooling of the input. #26369
paddle.nn.ReflectionPad1d(padding, data_format='NCL', name=None)
paddle.nn.ReflectionPad1d.forward(x)
New: pads a 3-D input Tensor in reflection mode #26106
paddle.nn.ReflectionPad2d(padding, data_format='NCHW', name=None)
paddle.nn.ReflectionPad2d.forward(x)
New: pads a 4-D input Tensor in reflection mode #26106
paddle.nn.ReplicationPad1d(padding, data_format='NCL', name=None)
paddle.nn.ReplicationPad1d.forward(x)
New: pads a 3-D input Tensor in replication mode #26106
paddle.nn.ReplicationPad2d(padding, data_format='NCHW', name=None)
paddle.nn.ReplicationPad2d.forward(x)
New: pads a 4-D input Tensor in replication mode #26106
paddle.nn.ReplicationPad3d(padding, data_format='NCDHW', name=None)
paddle.nn.ReplicationPad3d.forward(x)
New: pads a 5-D input Tensor in replication mode #26106
paddle.nn.ZeroPad2d(padding, data_format='NCHW', name=None)
paddle.nn.ZeroPad2d.forward(x)
New: zero-pads a 4-D input Tensor #26106
paddle.nn.ConstantPad1d(padding, value, data_format='NCL', name=None)
paddle.nn.ConstantPad1d.forward(x)
New: pads a 3-D input Tensor with a constant value #26106
paddle.nn.ConstantPad2d(padding, value, data_format='NCHW', name=None)
paddle.nn.ConstantPad2d.forward(x)
New: pads a 4-D input Tensor with a constant value #26106
paddle.nn.ConstantPad3d(padding, value, data_format='NCDHW', name=None)
paddle.nn.ConstantPad3d.forward(x)
New: pads a 5-D input Tensor with a constant value #26106
paddle.nn.ELU(alpha=1.0, name=None)
paddle.nn.ELU.forward(x)
New class computing the ELU activation #26304
paddle.nn.Hardshrink(threshold=0.5, name=None)
paddle.nn.Hardshrink.forward(x)
New class computing the hardshrink activation #26198
paddle.nn.Hardtanh(min=-1.0, max=1.0, name=None)
paddle.nn.Hardtanh.forward(x)
New class computing the hardtanh activation #26431
paddle.nn.MultiHeadAttention(embed_dim, num_heads, dropout=0., kdim=None, vdim=None, need_weights=False, weight_attr=None, bias_attr=None) New: implements multi-head attention #26418
paddle.nn.MultiHeadAttention.forward(self, query, key, value, attn_mask=None, cache=None) New: implements multi-head attention #26418
paddle.nn.ReLU6(name=None)
paddle.nn.ReLU6.forward(x)
New class computing the ReLU6 activation #26376
paddle.nn.Sigmoid(name=None)
paddle.nn.Sigmoid.forward(x)
New Sigmoid layer #26171
paddle.nn.Tanh(name=None)
paddle.nn.Tanh.forward(x)
New class computing the Tanh activation #26357
paddle.nn.TanhShrink(name=None)
paddle.nn.TanhShrink.forward(x)
New class computing the TanhShrink activation #26376
paddle.nn.Softmax(axis=-1, name=None)
paddle.nn.Softmax.forward(x)
New class computing the Softmax activation #26431
paddle.nn.BatchNorm1d(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCL', name=None)
paddle.nn.BatchNorm1d.forward(x)
New: BatchNorm for [N, C] or [N, C, L] input #26465
paddle.nn.BatchNorm2d(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCHW', name=None)
paddle.nn.BatchNorm2d.forward(x)
New: BatchNorm for [N, C, H, W] input #26465
paddle.nn.BatchNorm3d(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCDHW', name=None)
paddle.nn.BatchNorm3d.forward(x)
New: BatchNorm for [N, C, D, H, W] input #26465
paddle.nn.SyncBatchNorm(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCHW', name=None)
paddle.nn.SyncBatchNorm.forward(x)
paddle.nn.SyncBatchNorm.convert_sync_batchnorm(Layer)
New: synchronized BatchNorm for [N, C] or [N, C, L] input #26032
paddle.nn.SyncBatchNorm(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCHW', name=None)
paddle.nn.SyncBatchNorm.forward(x)
paddle.nn.SyncBatchNorm.convert_sync_batchnorm(Layer)
New: synchronized BatchNorm for [N, C, H, W] input #26688
paddle.nn.InstanceNorm1d(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCL', name=None)
paddle.nn.InstanceNorm1d.forward(x)
New: InstanceNorm for [N, C, L] input #26465
paddle.nn.InstanceNorm2d(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCHW', name=None)
paddle.nn.InstanceNorm2d.forward(x)
1. Removes the act, dtype, shift, and begin_norm_axis parameters 2. weight_attr and bias_attr control the affine parameters; setting them to False disables scale/shift #26465
paddle.nn.InstanceNorm3d(num_features, epsilon=1e-05, momentum=0.9, track_running_stats=True, weight_attr=None, bias_attr=None, data_format='NCDHW', name=None)
paddle.nn.InstanceNorm3d.forward(x)
New: InstanceNorm for [N, C, D, H, W] input #26465
paddle.nn.SimpleRNN(input_size, hidden_size, num_layers=1, activation='tanh', direction='forward', dropout=0., time_major=False, weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None)
paddle.nn.SimpleRNN.forward(self, inputs, initial_states=None, sequence_length=None)
New paddle.nn.SimpleRNN #26588
paddle.nn.LSTM(input_size, hidden_size, num_layers=1, direction='forward', dropout=0., time_major=False, weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None)
paddle.nn.LSTM.forward(self, inputs, initial_states=None, sequence_length=None)
New paddle.nn.LSTM #26588
paddle.nn.GRU(input_size, hidden_size, num_layers=1, direction='forward', dropout=0., time_major=False, weight_ih_attr=None, weight_hh_attr=None, bias_ih_attr=None, bias_hh_attr=None, name=None)
paddle.nn.GRU.forward(self, inputs, initial_states=None, sequence_length=None)
New paddle.nn.GRU #26588
paddle.nn.Transformer(embed_dim, num_heads, dropout=0., kdim=None, vdim=None, need_weights=False, weight_attr=None, bias_attr=None) New paddle.nn.Transformer #26418
paddle.nn.Transformer.forward(self, query, key, value, attn_mask=None, cache=None) New paddle.nn.Transformer.forward #26418
paddle.nn.TransformerEncoder(self, encoder_layer, num_layers, norm=None) New paddle.nn.TransformerEncoder #26418
paddle.nn.TransformerEncoder().forward(self, src, src_mask=None) New paddle.nn.TransformerEncoder.forward #26418
paddle.nn.TransformerDecoder(decoder_layer, num_layers, norm=None) New paddle.nn.TransformerDecoder #26418
paddle.nn.TransformerDecoder().forward(self, tgt, memory, tgt_mask=None, memory_mask=None, cache=None) New paddle.nn.TransformerDecoder.forward #26418
paddle.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward, dropout=0.1, activation='relu', attn_dropout=None, act_dropout=None, normalize_before=False, weight_attr=None, bias_attr=None) New paddle.nn.TransformerEncoderLayer #26418
paddle.nn.TransformerEncoderLayer().forward(self, src, src_mask=None) New paddle.nn.TransformerEncoderLayer.forward #26418
paddle.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward, dropout=0.1, activation='relu', attn_dropout=None, act_dropout=None, normalize_before=False, weight_attr=None, bias_attr=None) New paddle.nn.TransformerDecoderLayer #26418
paddle.nn.TransformerDecoderLayer().forward(self, tgt, memory, tgt_mask=None, memory_mask=None, cache=None) New paddle.nn.TransformerDecoderLayer.forward #26418
paddle.nn.Dropout2D(p=0.5, name=None) New: paddle.nn.Dropout2d(p=0.5, data_format='NCHW', name=None), the Dropout2D layer. #26111
paddle.nn.Dropout3D(p=0.5, name=None) New: paddle.nn.Dropout3d(p=0.5, data_format='NCDHW', name=None), the Dropout3D layer. #26111
paddle.nn.AlphaDropout(p=0.5, name=None) New: paddle.nn.AlphaDropout(p=0.5, name=None); an AlphaDropout implementation that keeps the output distribution consistent with the input. #26365
paddle.nn.CosineSimilarity(axis=1, eps=1e-08, name=None)
paddle.nn.CosineSimilarity.forward(self, x, y)
1. Computes the cosine similarity of two Tensors
2. Supports computing along a specified axis
3. Supports a custom epsilon value
4. Supports broadcasting along the computed axis
#26106
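The cosine-similarity computation can be sketched in numpy (a hedged sketch of the semantics; the exact placement of eps in paddle's formula may differ):

```python
import numpy as np

# Hedged numpy sketch of CosineSimilarity: dot(x, y) divided by the
# product of the norms along the given axis, with eps guarding against
# division by zero.
def cosine_similarity(x, y, axis=1, eps=1e-8):
    dot = (x * y).sum(axis=axis)
    norms = np.linalg.norm(x, axis=axis) * np.linalg.norm(y, axis=axis)
    return dot / np.maximum(norms, eps)

x = np.array([[1.0, 0.0], [1.0, 1.0]])
y = np.array([[1.0, 0.0], [1.0, 0.0]])
sim = cosine_similarity(x, y)  # [1.0, 1/sqrt(2)]
```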
paddle.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False, name=None)
paddle.nn.PairwiseDistance.forward(self, x, y)
New: computes the pairwise distances between the vectors in two Tensors. #26033
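A numpy sketch of the pairwise-distance semantics (hedged: the role of eps here, added to the difference for numerical stability, is an assumption modeled on similar APIs):

```python
import numpy as np

# Hedged numpy sketch of PairwiseDistance: the p-norm of (x - y) along
# the last axis, with a small eps for numerical stability.
def pairwise_distance(x, y, p=2.0, eps=1e-6):
    return np.linalg.norm(x - y + eps, ord=p, axis=-1)

x = np.array([[0.0, 0.0], [1.0, 2.0]])
y = np.array([[3.0, 4.0], [1.0, 2.0]])
d = pairwise_distance(x, y)  # approximately [5.0, ~0.0]
```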
paddle.nn.loss.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean', name=None)
paddle.nn.loss.CrossEntropyLoss.forward(self, input, label)
New paddle.nn.CrossEntropyLoss class #26478
paddle.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False, name=None)
paddle.nn.CTCLoss.forward(self, input, label, input_length, label_length)
New: paddle.nn.CTCLoss(blank=0, reduction='mean'); forward(self, input, label, input_length, label_length) #26384
paddle.nn.KLDivLoss(reduction='mean', name=None)
paddle.nn.KLDivLoss.forward(self, input, label)
New class-style API computing the KL-divergence loss #25977
paddle.nn.BCEWithLogitsLoss(weight=None, reduction='mean', pos_weight=None, name=None) paddle.nn.BCEWithLogitsLoss.forward(self, input, label) New: computes the binary cross-entropy loss between logits and labels. #26468
paddle.nn.MarginRankingLoss(margin=0.0, reduction='mean', name=None)
paddle.nn.MarginRankingLoss.forward(self, input1, input2, label)
New MarginRankingLoss layer, corresponding to margin_ranking_loss under functional #26266
paddle.nn.SmoothL1Loss(reduction='mean', name=None)
paddle.nn.SmoothL1Loss.forward(self, input, label)
New: computes the smooth L1 loss #26398
paddle.nn.PixelShuffle(upscale_factor, name=None)
paddle.nn.PixelShuffle.forward(x)
New paddle.nn.vision.PixelShuffle class #26071
paddle.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None, name=None)
paddle.nn.Upsample.forward(x)
New: dynamic-graph implementation of Upsample #26520
paddle.nn.UpsamplingNearest2d(size=None, scale_factor=None, name=None)
paddle.nn.UpsamplingNearest2d.forward(x)
New: dynamic-graph implementation of UpsamplingNearest2d #26520
paddle.nn.UpsamplingBilinear2d(size=None, scale_factor=None, name=None)
paddle.nn.UpsamplingBilinear2d.forward(x)
New: dynamic-graph implementation of UpsamplingBilinear2d #26520
paddle.nn.utils.weight_norm(layer, name='weight', dim=0) New: weight_norm #26131
paddle.nn.utils.remove_weight_norm(layer, name='weight') New: remove_weight_norm #26131
paddle.nn.Flatten(start_axis=1, stop_axis=-1, name=None) New Flatten API; flattens a Tensor over a given contiguous range of axes #25393
paddle.nn.functional.conv1d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, data_format='NCL', name=None) New conv1d for convolution over 1-D sequence features. #26350
paddle.nn.functional.conv_transpose1d(x, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1, data_format='NCL', name=None) New conv_transpose1d for transposed convolution over 1-D sequence features. #26356
paddle.nn.functional.avg_pool1d(x, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCL', name=None) 1. New API named paddle.nn.functional.avg_pool1d;
2. Implements 1-D average pooling
#26331
paddle.nn.functional.avg_pool2d(x, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCHW', name=None) 1. New API named paddle.nn.functional.avg_pool2d, split out from the old pool2d API;
2. Implements 2-D average pooling;
3. Compared with the old pool2d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding; renames exclusive to count_include_pad, whose meaning is the opposite of exclusive
#26332
paddle.nn.functional.avg_pool3d(x, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None, data_format='NCDHW', name=None) 1. New API named paddle.nn.functional.avg_pool3d, split out from the old pool3d API;
2. Implements 3-D average pooling;
3. Compared with the old pool3d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding; renames exclusive to count_include_pad, whose meaning is the opposite of exclusive
#26333
paddle.nn.functional.max_pool1d(x, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCL', name=None) 1. New API named paddle.nn.functional.max_pool1d;
2. Implements 1-D max pooling
#26334
paddle.nn.functional.max_pool2d(x, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCHW', name=None) 1. New API named paddle.nn.functional.max_pool2d, split out from the old pool2d API;
2. Implements 2-D max pooling;
3. Compared with the old pool2d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding;
4. Adds the option to return the max-pooling indices, via the return_indices parameter
#26335
paddle.nn.functional.max_pool3d(x, kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False, data_format='NCDHW', name=None) 1. New API named paddle.nn.functional.max_pool3d, split out from the old pool3d API;
2. Implements 3-D max pooling;
3. Compared with the old pool3d: removes the pool_type, global_pooling, and use_cudnn parameters; renames pool_size, pool_stride, pool_padding to kernel_size, stride, padding;
4. Adds the option to return the max-pooling indices, via the return_indices parameter
#26336
paddle.nn.functional.adaptive_max_pool1d(x, output_size, return_indices=False, data_format='NCL', name=None) New paddle.nn.functional.adaptive_max_pool1d functional API; returns the adaptive 1-D max pooling of the input. #26483
paddle.nn.functional.adaptive_max_pool2d(x, output_size, return_indices=False, data_format='NCHW', name=None) New paddle.nn.functional.adaptive_max_pool2d functional API; returns the adaptive 2-D max pooling of the input. #26483
paddle.nn.functional.adaptive_max_pool3d(x, output_size, return_indices=False, data_format='NCDHW', name=None) New paddle.nn.functional.adaptive_max_pool3d functional API; returns the adaptive 3-D max pooling of the input. #26483
paddle.nn.functional.adaptive_avg_pool1d(x, output_size, data_format='NCL', name=None) New paddle.nn.functional.adaptive_avg_pool1d functional API; returns the adaptive 1-D average pooling of the input. #26331
paddle.nn.functional.adaptive_avg_pool2d(x, output_size, data_format='NCHW', name=None) New paddle.nn.functional.adaptive_avg_pool2d functional API; returns the adaptive 2-D average pooling of the input. #26369
paddle.nn.functional.adaptive_avg_pool3d(x, output_size, data_format='NCDHW', name=None) New paddle.nn.functional.adaptive_avg_pool3d functional API; returns the adaptive 3-D average pooling of the input. #26369
paddle.nn.functional.normalize(x, p=2, dim=1, epsilon=1e-12, name=None) New: normalizes the input Tensor along the specified dimension using the Lp norm. #26269
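The Lp normalization can be sketched in numpy (a hedged sketch of the semantics, not paddle's implementation):

```python
import numpy as np

# Hedged numpy sketch of functional.normalize: divide x by its Lp norm
# along the given axis, clamping the norm below by epsilon to avoid
# division by zero.
def normalize(x, p=2, axis=1, epsilon=1e-12):
    norm = np.linalg.norm(x, ord=p, axis=axis, keepdims=True)
    return x / np.maximum(norm, epsilon)

x = np.array([[3.0, 4.0]])
out = normalize(x)  # [[0.6, 0.8]], a unit-L2-norm row
```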
paddle.nn.functional.linear(x, weight, bias=None, name=None) New: functional linear; applies a linear transformation to the input. #26480
paddle.nn.functional.bilinear(x1, x2, weight, bias=None, name=None) New: the internal implementation of paddle.nn.Bilinear. #26399 #26610
paddle.nn.functional.dropout2d(x, p=0.5, training=True, data_format='NCHW', name=None) New: dropout2d implementation. #26111
paddle.nn.functional.dropout3d(x, p=0.5, training=True, data_format='NCDHW', name=None) New: dropout3d implementation. #26111
paddle.nn.functional.binary_cross_entropy(input, label, weight=None, reduction='mean', name=None) New: computes the binary cross-entropy loss. #26012
paddle.nn.functional.binary_cross_entropy_with_logits(input, label, weight=None, reduction='mean', pos_weight=None, name=None) New: computes the binary cross-entropy loss between logits and labels. #26468
paddle.nn.functional.l1_loss(input, reduction='mean', name=None) New functional API computing the L1 loss #26040
paddle.nn.functional.nll_loss(x, target, weight=None, ignore_index=-100, reduction='mean', name=None) New nll_loss API; behavior matches the previous NLLLoss forward #26019
paddle.nn.functional.smooth_l1_loss(input, label, reduction='mean', delta=1.0, name=None) New: computes the smooth L1 loss #26398
paddle.nn.functional.upsample(x, size=None, scale_factor=None, mode='nearest', align_corners=None, name=None) 1. Parameter renamed from input to x
2. scale_factor accepts a list/tuple
3. When scale is fractional, matches the torch 1.6.0 computation
https://github.com/PaddlePaddle/Paddle/pull/26520
paddle.Tensor() 1. API renamed from to_variable to to_tensor
2. Input data may be a scalar, list, tuple, Tensor, or ComplexTensor
3. Return type changed from core.VarBase to paddle.Tensor
4. New place parameter to select the device: CPU, GPU, or pinned memory
5. New stop_gradient parameter to control whether gradients are computed; by default gradients are not computed
6. New dtype parameter to specify the Tensor type; for non-numpy floating-point input the default comes from paddle.get_default_dtype
7. Adds more than 120 new member methods to paddle.Tensor.
#26357
paddle.Tensor.tile(*sizes) New: repeats the input x along each dimension according to the specified sizes #26290
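The tiling semantics mirror numpy.tile, which makes for a quick sketch (a hedged illustration, not paddle's implementation):

```python
import numpy as np

# numpy.tile-style sketch of Tensor.tile semantics: repeat the input
# along each dimension according to the given repeat counts.
x = np.array([[1, 2]])
tiled = np.tile(x, (2, 3))  # shape (2, 6): rows doubled, columns tripled
```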
paddle.distributed.broadcast(tensor, src, group=0) New: broadcasts the given Tensor to all members #26552
paddle.distributed.all_reduce(tensor, op=ReduceOp.SUM, group=0) New: reduces the given Tensor across all members and returns the reduced result to every member. #26552
paddle.distributed.reduce(tensor, dst, op=ReduceOp.SUM, group=0) New: reduces the given Tensor across all processes and returns the reduced result to the specified member #26552
paddle.distributed.all_gather(tensor_list, tensor, group=0) New: gathers the given Tensor from all members and returns the gathered result #26552
paddle.distributed.scatter(tensor, tensor_list=None, src=0, group=0) New: scatters a list of Tensors from the specified member to all other members #26552
paddle.distributed.barrier(group=0) New: synchronizes all members #26552
paddle.distribution.Distribution() 1. Exposes the Distribution base class
2. Renames the _to_variable method to _to_tensor
3. Adds the probs method, representing the distribution's probability density function
4. Adds the _check_values_dtype_in_probs method, which converts the dtype of value inputs to log_prob and probs
#26355, #26767
paddle.distribution.Distribution().entropy(self) Exposes the entropy method of the Distribution base class #26355
paddle.distribution.Distribution().log_prob(self, value) Exposes the log_prob method of the Distribution base class #26355
paddle.distribution.Distribution().sample(self) Exposes the sample method of the Distribution base class #26355
paddle.optimizer.Optimizer(learning_rate, parameters=None, weight_decay=None, grad_clip=None, name=None) 1. Exposes the Optimizer base class and documents its constructor and member functions
2. Parameter parameter_list renamed to parameters
3. Parameter regularization renamed to weight_decay; None disables regularization, a float is used as the L2Decay coefficient, and L1Decay/L2Decay instances are also accepted
4. learning_rate accepts a float or an LRScheduler
#26288
paddle.optimizer.Optimizer.step() New step method replacing minimize in dynamic-graph mode; takes no arguments and returns no value #26288
paddle.optimizer.AdamW(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameters=None, weight_decay=None, grad_clip=None, lazy_mode=False, name=None) New AdamW optimizer; weight_decay is the weight-decay coefficient, with parameter range checking added #26288
paddle.io.Dataset() #25558
paddle.io.IterableDataset() 1. New API named paddle.io.IterableDataset
2. Base class for iterable-style datasets; subclasses implement __iter__ and can be accelerated with multi-process loading via paddle.io.DataLoader
#25558
paddle.io.TensorDataset() 1. New API named paddle.io.TensorDataset
2. Tensor dataset; iterates along the first dimension of the input tensors and returns each sample
#26332
paddle.io.Sampler(data_source=None) 1. New API named paddle.io.Sampler
2. Sampler base class; custom dataset samplers must inherit from it, and it can be used as input to paddle.io.BatchSampler
#26375
paddle.io.RandomSampler(data_source, replacement=False, num_samples=None, generator=None) 1. New API named paddle.io.RandomSampler
2. Samples the dataset randomly and returns a sequence of sample indices
#26375
paddle.io.BatchSampler(dataset=None, sampler=None, shuffle=False, batch_size=1, drop_last=False) 1. New API named paddle.io.BatchSampler
2. Batches sampled indices and returns mini-batches of index sequences
#26375
paddle.io.DistributedBatchSampler(dataset, batch_size, num_replicas=None, rank=None, shuffle=False, drop_last=False) Adds num_replicas and rank parameters to paddle.io.DistributedBatchSampler, allowing the number of training devices and the logical id of the current device to be specified #26315