PaddlePaddle Functions Explained: paddle.to_tensor

Category: "PaddlePaddle Functions Explained" series index
Related articles:
· PaddlePaddle Functions Explained: paddle.Tensor
· PaddlePaddle Functions Explained: paddle.to_tensor


Creates a Tensor from the given data; the result is of type paddle.Tensor. data can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor. If data is already a Tensor and neither dtype nor place changes, no copy is made and the original Tensor is returned; otherwise a new Tensor is created and the original computation graph is not retained.
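As a quick illustration of the accepted data types and the automatic dtype inference described above, here is a minimal sketch (assuming a Paddle 2.x dynamic-graph session; the variable names are only for illustration):

import numpy as np
import paddle

# Build tensors from a scalar, a nested list and a numpy.ndarray.
a = paddle.to_tensor(3.14)                         # Python float -> get_default_dtype(), usually float32
b = paddle.to_tensor([[1, 2], [3, 4]])             # nested int list -> int64
c = paddle.to_tensor(np.zeros((2, 3), "float64"))  # ndarray keeps its float64 dtype
print(a.dtype, b.dtype, c.dtype)

# Passing an existing Tensor with a different dtype yields a new Tensor
# that is detached from the original computation graph.
d = paddle.to_tensor(b, dtype="float32")
print(d.dtype)  # paddle.float32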

Syntax

paddle.to_tensor(data, dtype=None, place=None, stop_gradient=True)

Parameters

  • data: [scalar/tuple/list/ndarray/Tensor] The data used to initialize the Tensor; it can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor.
  • dtype: [optional, str] The data type of the created Tensor; it can be bool, float16, float32, float64, int8, int16, int32, int64, uint8, complex64, or complex128. Defaults to None: if data is a Python float the type is taken from get_default_dtype, otherwise the type is inferred automatically from data.
  • place: [optional, CPUPlace/CUDAPinnedPlace/CUDAPlace] The device on which to create the Tensor; it can be CPUPlace, CUDAPinnedPlace, or CUDAPlace. Defaults to None, which uses the global place.
  • stop_gradient: [optional, bool] Whether to block Autograd gradient propagation. Defaults to True, in which case no gradients are propagated (see the sketch after this list).
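The following minimal sketch shows how dtype, place, and stop_gradient interact (it assumes a Paddle 2.x dynamic-graph session; the exact default dtype depends on get_default_dtype()):

import paddle

# dtype: a Python float falls back to get_default_dtype() unless dtype is given explicitly.
x = paddle.to_tensor(1.0)
y = paddle.to_tensor(1.0, dtype="float64")
print(x.dtype, y.dtype)  # paddle.float32 paddle.float64

# place: pin the Tensor to the CPU explicitly.
z = paddle.to_tensor([1, 2, 3], place=paddle.CPUPlace())
print(z.place)

# stop_gradient: set it to False so gradients can flow back to w.
w = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
loss = (w * w).sum()
loss.backward()
print(w.grad)  # 2 * w, i.e. [2., 4., 6.]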

Return Value

A Tensor created from data.

Examples

import paddle

type(paddle.to_tensor(1))
# <class 'paddle.Tensor'>

paddle.to_tensor(1)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

x = paddle.to_tensor(1, stop_gradient=False)
print(x)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
#        [1])

paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
# Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
#        [[0.10000000, 0.20000000],
#         [0.30000001, 0.40000001]])

type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
# <class 'paddle.Tensor'>

paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
# Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
#        [[(1+1j), (2+0j)],
#         [(3+2j), (4+0j)]])

Implementation

def to_tensor(data, dtype=None, place=None, stop_gradient=True):
    r"""
    Constructs a ``paddle.Tensor`` from ``data``, which can be scalar, tuple, list, numpy\.ndarray, paddle\.Tensor.

    If the ``data`` is already a Tensor, copy will be performed and return a new tensor.
    If you only want to change stop_gradient property, please call ``Tensor.stop_gradient = stop_gradient`` directly.

    Args:
        data(scalar|tuple|list|ndarray|Tensor): Initial data for the tensor.
            Can be a scalar, list, tuple, numpy\.ndarray, paddle\.Tensor.
        dtype(str|np.dtype, optional): The desired data type of returned tensor. Can be 'bool', 'float16',
            'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'uint8', 'complex64', 'complex128'.
            Default: None, infers dtype from ``data`` except for python float number which gets dtype
            from ``get_default_type``.
        place(CPUPlace|CUDAPinnedPlace|CUDAPlace|str, optional): The place to allocate Tensor. Can be
            CPUPlace, CUDAPinnedPlace, CUDAPlace. Default: None, means global place. If ``place`` is
            string, It can be ``cpu``, ``gpu:x`` and ``gpu_pinned``, where ``x`` is the index of the GPUs.
        stop_gradient(bool, optional): Whether to block the gradient propagation of Autograd. Default: True.

    Returns:
        Tensor: A Tensor constructed from ``data``.

    Examples:

    .. code-block:: python

        import paddle

        type(paddle.to_tensor(1))
        # <class 'paddle.Tensor'>

        paddle.to_tensor(1)
        # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
        #        [1])

        x = paddle.to_tensor(1, stop_gradient=False)
        print(x)
        # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
        #        [1])

        paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
        # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
        #        [1])

        paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
        # Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
        #        [[0.10000000, 0.20000000],
        #         [0.30000001, 0.40000001]])

        type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
        # <class 'paddle.Tensor'>

        paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
        # Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
        #        [[(1+1j), (2+0j)],
        #         [(3+2j), (4+0j)]])
    """
    place = _get_paddle_place(place)
    if place is None:
        place = _current_expected_place()
    if _non_static_mode():
        return _to_tensor_non_static(data, dtype, place, stop_gradient)
    # call assign for static graph
    else:
        re_exp = re.compile(r'[(](.+?)[)]', re.S)
        place_str = re.findall(re_exp, str(place))[0]
        with paddle.static.device_guard(place_str):
            return _to_tensor_static(data, dtype, stop_gradient)
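As the branches above show, to_tensor behaves differently in dynamic and static graph mode: the eager branch materializes the value immediately, while the static branch builds an op under a device_guard and the value is produced when the program is executed. The sketch below illustrates both paths; it assumes a Paddle 2.x build in which to_tensor is usable under the static graph API, as the implementation suggests:

import numpy as np
import paddle

# Dynamic graph (the default): the value is materialized right away.
eager = paddle.to_tensor(np.arange(4, dtype="float32"))
print(eager.numpy())  # [0. 1. 2. 3.]

# Static graph: to_tensor adds an op to the program; run it with an Executor to get the value.
paddle.enable_static()
main_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog):
    static_var = paddle.to_tensor(np.arange(4, dtype="float32"))
exe = paddle.static.Executor(paddle.CPUPlace())
(result,) = exe.run(main_prog, fetch_list=[static_var])
print(result)  # [0. 1. 2. 3.]
paddle.disable_static()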
