MSAdapter APIs | Constraint conditions |
---|---|
torch.frombuffer | Currently does not support requires_grad |
torch.multinomial | Currently does not support the generator argument |
torch.randint | Currently does not support the generator argument |
torch.randperm | Currently does not support the generator argument |
torch.imag | Currently not supported in GRAPH mode |
torch.max | Currently does not support other; not supported in GRAPH mode |
torch.sum | Currently not supported in GRAPH mode |
torch.lu | With get_infos=True, errors currently cannot be detected; MindSpore does not support pivot=False |
torch.lu_solve | left=False is not supported |
torch.lstsq | Currently does not support returning the second result (QR) |
torch.svd | Currently does not support GRAPH mode on Ascend |
torch.nextafter | Currently does not support float32 on CPU |
torch.matrix_power | Currently does not support n < 0 on GPU |
torch.i0 | Currently does not support gradient computation or GRAPH mode on Ascend |
torch.index_add | Does not support inputs of more than 2-D or dim >= 1 |
torch.scatter_reduce | Currently does not support reduce="mean" |
torch.histogramdd | Currently does not support float64 input |
torch.asarray | Currently does not support the device, copy, and requires_grad arguments |
torch.complex | Currently does not support float16 input |
torch.fmin | Currently does not support gradient computation; not supported in GRAPH mode |
torch.kron | Currently does not support inputs of different complex types |
torch.sort | Currently does not support stable |
torch.float_power | Currently does not support complex input |
torch.add | Currently does not support both inputs being bool with a bool output |
torch.nan_to_num | Only supports inputs of dtype float16 and float32 |
torch.polygamma | When n is zero, the result may be incorrect |
torch.matmul | Currently does not support integer input on GPU |
torch.geqrf | Currently does not support inputs with ndim > 2 |
torch.repeat_interleave | Currently does not support output_size |
torch.index_reduce | Currently does not support reduce="mean" |
torch.view_as_complex | Currently the output tensor is produced by copying data rather than as a view sharing memory with the input |
torch.pad | When padding_mode is 'reflect', 5-D input is not supported |
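Several of the dtype constraints above (e.g. torch.nan_to_num and torch.histogramdd rejecting float64) can often be worked around by casting to a supported dtype and back. A minimal sketch of the pattern, using numpy and np.nan_to_num as stand-ins for the constrained operator (the helper name is illustrative, not part of MSAdapter):

```python
import numpy as np

def nan_to_num_any_dtype(x):
    """Apply a float16/float32-only kernel to float64 data by round-tripping
    through float32, at the cost of precision. np.nan_to_num stands in for
    the constrained operator here."""
    y = np.nan_to_num(x.astype(np.float32))
    return y.astype(x.dtype)

x = np.array([np.nan, 2.5], dtype=np.float64)
print(nan_to_num_any_dtype(x))
```

Note that the round trip through float32 loses precision and range for float64 inputs, so this is only appropriate when that loss is acceptable.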
MSAdapter APIs | Constraint conditions |
---|---|
Tensor.bool | Does not support the memory_format parameter |
Tensor.expand | The type is constrained: only Tensor[Float16], Tensor[Float32], Tensor[Int32], Tensor[Int8], and Tensor[UInt8] are supported |
Tensor.float | Currently does not support memory_format |
Tensor.scatter | Currently does not support reduce='multiply'; Ascend does not support reduce='add'; does not support indices.shape != src.shape |
Tensor.std | Currently does not support complex and float64 input |
Tensor.xlogy | Only supports float16 and float32 on Ascend |
Tensor.abs_ | Currently not supported in GRAPH mode |
Tensor.absolute_ | Currently not supported in GRAPH mode |
Tensor.acos_ | Currently not supported in GRAPH mode |
Tensor.arccos_ | Currently not supported in GRAPH mode |
Tensor.add_ | Currently not supported in GRAPH mode |
Tensor.addbmm_ | Currently not supported in GRAPH mode |
Tensor.addcdiv_ | Currently not supported in GRAPH mode |
Tensor.addcmul_ | Currently not supported in GRAPH mode |
Tensor.addmm_ | Currently not supported in GRAPH mode |
Tensor.addmv_ | Currently not supported in GRAPH mode |
Tensor.addr_ | Currently not supported in GRAPH mode |
Tensor.asin_ | Currently not supported in GRAPH mode |
Tensor.arcsin_ | Currently not supported in GRAPH mode |
Tensor.atan_ | Currently not supported in GRAPH mode |
Tensor.arctan_ | Currently not supported in GRAPH mode |
Tensor.atan2_ | Currently not supported in GRAPH mode |
Tensor.arctan2_ | Currently not supported in GRAPH mode |
Tensor.baddbmm_ | Currently not supported in GRAPH mode |
Tensor.bitwise_not_ | Currently not supported in GRAPH mode |
Tensor.bitwise_and_ | Currently not supported in GRAPH mode |
Tensor.bitwise_or_ | Currently not supported in GRAPH mode |
Tensor.bitwise_xor_ | Currently not supported in GRAPH mode |
Tensor.clamp_ | Currently not supported in GRAPH mode |
Tensor.clip_ | Currently not supported in GRAPH mode |
Tensor.copy_ | Currently not supported in GRAPH mode |
Tensor.copysign_ | Currently not supported in GRAPH mode |
Tensor.acosh_ | Currently not supported in GRAPH mode |
Tensor.arccosh_ | Currently not supported in GRAPH mode |
Tensor.cumprod_ | Currently not supported in GRAPH mode |
Tensor.div_ | Currently not supported in GRAPH mode |
Tensor.divide_ | Currently not supported in GRAPH mode |
Tensor.eq_ | Currently not supported in GRAPH mode |
Tensor.expm1_ | Currently not supported in GRAPH mode |
Tensor.fix_ | Currently not supported in GRAPH mode |
Tensor.fill_ | Currently not supported in GRAPH mode |
Tensor.float_power_ | Currently not supported in GRAPH mode |
Tensor.floor_ | Currently not supported in GRAPH mode |
Tensor.fmod_ | Currently not supported in GRAPH mode |
Tensor.ge_ | Currently not supported in GRAPH mode |
Tensor.greater_equal_ | Currently not supported in GRAPH mode |
Tensor.gt_ | Currently not supported in GRAPH mode |
Tensor.greater_ | Currently not supported in GRAPH mode |
Tensor.hypot_ | Currently not supported in GRAPH mode |
Tensor.le_ | Currently not supported in GRAPH mode |
Tensor.less_equal_ | Currently not supported in GRAPH mode |
Tensor.lgamma_ | Currently not supported in GRAPH mode |
Tensor.logical_xor_ | Currently not supported in GRAPH mode |
Tensor.lt_ | Currently not supported in GRAPH mode |
Tensor.less_ | Currently not supported in GRAPH mode |
Tensor.lu | With get_infos=True, errors currently cannot be detected; pivot=False is not supported |
Tensor.lu_solve | left=False is not supported |
Tensor.lstsq | Does not support returning the second result (QR) |
Tensor.mul_ | Currently not supported in GRAPH mode |
Tensor.multiply_ | Currently not supported in GRAPH mode |
Tensor.mvlgamma_ | Currently not supported in GRAPH mode |
Tensor.ne_ | Currently not supported in GRAPH mode |
Tensor.not_equal_ | Currently not supported in GRAPH mode |
Tensor.neg_ | Currently not supported in GRAPH mode |
Tensor.negative_ | Currently not supported in GRAPH mode |
Tensor.pow_ | Currently not supported in GRAPH mode |
Tensor.reciprocal_ | Currently not supported in GRAPH mode |
Tensor.renorm_ | Currently not supported in GRAPH mode |
Tensor.resize_ | Currently not supported in GRAPH mode |
Tensor.round_ | Currently not supported in GRAPH mode |
Tensor.sigmoid_ | Currently not supported in GRAPH mode |
Tensor.sign_ | Currently not supported in GRAPH mode |
Tensor.sin_ | Currently not supported in GRAPH mode |
Tensor.sinc_ | Currently not supported in GRAPH mode |
Tensor.sinh_ | Currently not supported in GRAPH mode |
Tensor.asinh_ | Currently not supported in GRAPH mode |
Tensor.square_ | Currently not supported in GRAPH mode |
Tensor.sqrt_ | Currently not supported in GRAPH mode |
Tensor.squeeze_ | Currently not supported in GRAPH mode |
Tensor.sub_ | Currently not supported in GRAPH mode |
Tensor.tan_ | Currently not supported in GRAPH mode |
Tensor.tanh_ | Currently not supported in GRAPH mode |
Tensor.atanh_ | Currently not supported in GRAPH mode |
Tensor.arctanh_ | Currently not supported in GRAPH mode |
Tensor.transpose_ | Currently not supported in GRAPH mode |
Tensor.trunc_ | Currently not supported in GRAPH mode |
Tensor.unsqueeze_ | Currently not supported in GRAPH mode |
Tensor.zero_ | Currently not supported in GRAPH mode |
Tensor.svd | Currently does not support GRAPH mode on Ascend |
Tensor.nextafter | Currently does not support float32 on CPU |
Tensor.matrix_power | Currently does not support n < 0 on GPU |
Tensor.i0 | Currently does not support gradient computation or GRAPH mode on Ascend |
Tensor.index_add | Does not support inputs of more than 2-D or dim >= 1 |
Tensor.nan_to_num | Only supports inputs of dtype float16 and float32 |
Tensor.nextafter_ | Currently does not support float32 on CPU |
Tensor.fmin | Currently does not support gradient computation; not supported in GRAPH mode |
Tensor.imag | Currently not supported in GRAPH mode |
Tensor.scatter_reduce | Currently does not support reduce="mean" |
Tensor.scatter_reduce_ | Currently does not support reduce="mean" or GRAPH mode |
Tensor.neg | Currently does not support uint32 and uint64 |
Tensor.add | Currently does not support both inputs being bool with a bool output |
Tensor.polygamma | When n is zero, the result may be incorrect |
Tensor.matmul | Currently does not support integer input on GPU |
Tensor.geqrf | Currently does not support inputs with ndim > 2 |
Tensor.repeat_interleave | Currently does not support output_size |
Tensor.index_reduce | Currently does not support reduce="mean" |
Tensor.index_reduce_ | Currently does not support reduce="mean" or GRAPH mode |
Tensor.masked_scatter | Currently not supported on GPU; broadcasting the input to the shape of mask is not supported |
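Most of the in-place (trailing-underscore) methods above are only blocked in GRAPH mode; the usual workaround is to call the out-of-place variant and rebind the name instead of mutating the tensor. A sketch of the rewrite, using numpy arrays as stand-ins for tensors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 10.0, 10.0])

# In-place style (x.add_(y), then x.clamp_(0, 11) in PyTorch) mutates x
# and is rejected in GRAPH mode. The out-of-place rewrite computes a new
# value and rebinds the name, which graph compilation can handle:
x = x + y              # out-of-place equivalent of x.add_(y)
x = np.clip(x, 0.0, 11.0)  # out-of-place equivalent of x.clamp_(0, 11)
print(x)
```

The rebinding pattern produces the same final value as the in-place chain while keeping every intermediate tensor immutable.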
MSAdapter APIs | Constraint conditions |
---|---|
nn.LPPool1d | Does not support float64 |
nn.LPPool2d | Does not support 3-D input; kernel_size does not support tuple; does not support float64 |
nn.AdaptiveMaxPool3d | Does not support float64 |
nn.AdaptiveAvgPool1d | Does not support 2-D input |
nn.AdaptiveAvgPool2d | Does not support 3-D input |
nn.AdaptiveAvgPool3d | Does not support 4-D input; does not support float64 |
nn.ReflectionPad2d | Does not support complex32 |
nn.ReplicationPad2d | Does not support 3-D input |
nn.ELU | Only supports alpha = 1.0 |
nn.Hardshrink | Does not support float64 |
nn.Hardtanh | Does not support float64 |
nn.Hardswish | Does not support float64 |
nn.LeakyReLU | Does not support float64 |
nn.PReLU | Does not support float64 |
nn.ReLU6 | Does not support float64 |
nn.RReLU | inplace is not supported in GRAPH mode |
nn.SELU | inplace is not supported in GRAPH mode |
nn.CELU | inplace is not supported in GRAPH mode |
nn.Mish | inplace is not supported in GRAPH mode |
nn.Threshold | inplace is not supported in GRAPH mode |
nn.LogSoftmax | Does not support float64; does not support inputs of 8-D or higher |
nn.BatchNorm1d | Does not support 3-D input |
nn.Linear | The device and dtype parameters are not supported |
nn.HingeEmbeddingLoss | Does not support integer input |
nn.UpsamplingNearest2d | Does not support size=None |
nn.Conv1d | 1. Does not support 2-D input; 2. Some parameters do not support tuple values |
nn.Conv3d | 1. Does not support complex numbers; 2. groups only supports 1 on Ascend |
nn.ConvTranspose1d | 1. Does not support 2-D input; 2. Some parameters do not support tuple values |
nn.ConvTranspose2d | Does not support 3-D input |
nn.ConvTranspose3d | Does not support 4-D input |
nn.AdaptiveLogSoftmaxWithLoss | Not supported in GRAPH mode |
nn.LSTM | proj_size is currently not supported |
nn.KLDivLoss | Currently does not support log_target=True |
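The nn.KLDivLoss constraint on log_target=True can usually be worked around by exponentiating the target first, since the two call forms compute the same quantity. A numpy sketch of the reference formula (reduction='sum') showing the equivalence; kl_div here is a stand-in written for illustration, not an MSAdapter function:

```python
import numpy as np

def kl_div(log_input, target, log_target=False):
    # Reference KL-divergence formula (reduction='sum'):
    # sum(target * (log(target) - log_input)), with target given either
    # as probabilities or, when log_target=True, as log-probabilities.
    if log_target:
        target = np.exp(target)
    return float(np.sum(target * (np.log(target) - log_input)))

p = np.array([0.4, 0.6])      # target distribution (probabilities)
log_q = np.log([0.5, 0.5])    # model output in log space

# log_target=True and the exp() workaround produce the same loss value:
a = kl_div(log_q, np.log(p), log_target=True)
b = kl_div(log_q, p)          # pass exp(target) with log_target=False
```

Exponentiating the target trades a little numerical stability (for very small log-probabilities) for compatibility with the supported call form.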
MSAdapter APIs | Constraint conditions |
---|---|
functional.lp_pool1d | Does not support float64 |
functional.lp_pool2d | Does not support float64 |
functional.prelu | inplace is not supported in GRAPH mode |
functional.rrelu | inplace is not supported in GRAPH mode |
functional.softmax | Does not support _stacklevel |
functional.log_softmax | Does not support float64; does not support _stacklevel |
functional.dropout1d | inplace is not supported in GRAPH mode |
functional.dropout2d | inplace is not supported in GRAPH mode |
functional.dropout3d | inplace is not supported in GRAPH mode |
functional.conv3d | groups only supports 1 on Ascend |
functional.upsample_bilinear | The input tensor must be 4-D |
functional.interpolate | recompute_scale_factor and antialias are not supported; only three modes are supported: 'nearest' (4-D or 5-D input only), 'bilinear' (4-D input only), and 'linear' (3-D input only) |
functional.kl_div | Currently does not support log_target=True |
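The functional.interpolate rank restrictions can be made explicit with a small validation helper that fails fast before dispatch. The table below is transcribed from the constraint row above; the helper and its names are illustrative, not part of MSAdapter:

```python
# Supported input ranks per interpolate mode, per the constraint table above.
SUPPORTED_RANKS = {"nearest": (4, 5), "bilinear": (4,), "linear": (3,)}

def check_interpolate_input(ndim, mode):
    """Raise ValueError early if interpolate would reject this input."""
    if mode not in SUPPORTED_RANKS:
        raise ValueError(f"mode {mode!r} is not among the supported modes")
    if ndim not in SUPPORTED_RANKS[mode]:
        raise ValueError(
            f"mode {mode!r} expects input rank in {SUPPORTED_RANKS[mode]}, got {ndim}"
        )

check_interpolate_input(4, "bilinear")  # 4-D input for bilinear: accepted
```

Validating the rank up front gives a clear error at the call site instead of a backend failure deep inside the operator.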
MSAdapter APIs | Constraint conditions |
---|---|
lu | MindSpore does not support pivot=False; only supports square matrix input |
lu_solve | left=False is not supported |
lu_factor | Only supports square matrix input |
lu_factor_ex | With get_infos=True, errors currently cannot be detected; MindSpore does not support pivot=False |
lstsq | Currently not supported in GRAPH mode |
eigvals | Currently does not support GRAPH mode on GPU; does not support gradient computation |
svd | driver only supports None; does not support gradient computation on Ascend; currently does not support GRAPH mode on Ascend |
svdvals | driver only supports None; does not support gradient computation on Ascend; currently not supported in GRAPH mode |
norm | Currently does not support complex input; ord does not support float values; on Ascend, ord does not support nuclear norm, float('inf'), or int values |
vector_norm | Currently does not support complex input; ord does not support float values |
matrix_power | Currently does not support n < 0 on GPU |
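The matrix_power restriction on n < 0 (on GPU) can be sidestepped by inverting first and raising to the positive power, since A^n = (A^-1)^(-n) for invertible A. A numpy sketch of the workaround; the wrapper name is illustrative:

```python
import numpy as np

def matrix_power_safe(a, n):
    """Compute a matrix power without passing n < 0 to the backend:
    for negative n, invert once and use the positive exponent instead."""
    if n < 0:
        return np.linalg.matrix_power(np.linalg.inv(a), -n)
    return np.linalg.matrix_power(a, n)

a = np.array([[2.0, 0.0], [1.0, 1.0]])
print(matrix_power_safe(a, -2))
```

The explicit inverse can be less accurate for ill-conditioned matrices than a fused negative-power kernel, so this is a compatibility workaround rather than a drop-in replacement.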