English | 简体中文
Use paddle in R.
Download the Dockerfile, then run:
docker build -t paddle-rapi:latest .
First, make sure Python is installed, assuming the path is /opt/python3.7. Then install PaddlePaddle:
python -m pip install paddlepaddle # CPU version
python -m pip install paddlepaddle-gpu # GPU version
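To confirm the package is visible to the interpreter that reticulate will later use, a generic importability probe from Python can help (a sketch using only the standard library; `paddle` is the module name the pip packages above install):

```python
import importlib.util

def is_installed(pkg: str) -> bool:
    """Return True if `pkg` can be imported in this Python environment."""
    return importlib.util.find_spec(pkg) is not None

# After a successful install, is_installed("paddle") should report True;
# "sys" is used below only as an always-present stdlib example.
print(is_installed("sys"))  # True
```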
Install the R libraries needed to use paddle.
install.packages("reticulate") # call Python in R
install.packages("RcppCNPy") # use numpy.ndarray in R
First, load PaddlePaddle in R.
library(reticulate)
library(RcppCNPy)
use_python("/opt/python3.7/bin/python3.7")
paddle <- import("paddle.fluid.core")
Create an AnalysisConfig, which holds the configuration of the Paddle inference engine.
config <- paddle$AnalysisConfig("")
Set model path.
config$set_model("model/__model__", "model/__params__")
Enable zero-copy inference.
config$switch_use_feed_fetch_ops(FALSE)
config$switch_specify_input_names(TRUE)
Other configuration options and their descriptions are as follows.
config$enable_profile() # turn on inference profile
config$enable_use_gpu(gpu_memory_mb, gpu_id) # use GPU
config$disable_gpu() # disable GPU
config$gpu_device_id() # get GPU id
config$switch_ir_optim(TRUE) # turn on IR optimization (default: TRUE)
config$enable_tensorrt_engine(workspace_size,
max_batch_size,
min_subgraph_size,
paddle$AnalysisConfig$Precision$FLOAT32,
use_static,
use_calib_mode
) # use TensorRT
config$enable_mkldnn() # use MKLDNN
config$delete_pass(pass_name) # delete IR pass
Create inference engine.
predictor <- paddle$create_paddle_predictor(config)
Get the input tensor (assuming a single input) and set the input data.
input_names <- predictor$get_input_names()
input_tensor <- predictor$get_input_tensor(input_names[1])
input_shape <- as.integer(c(1, 3, 300, 300)) # shape must have integer type
data <- runif(prod(input_shape)) # example input data; replace with real image data
input_data <- np_array(data, dtype="float32")$reshape(input_shape)
input_tensor$copy_from_cpu(input_data)
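The reshape above produces an NCHW buffer (batch, channels, height, width). Since `np_array(...)$reshape(...)` delegates to numpy through reticulate, the equivalent Python-side preparation looks like this (a sketch with hypothetical random data; numpy is assumed to be installed):

```python
import numpy as np

# Hypothetical input: random values standing in for a 300x300 RGB image batch.
data = np.random.rand(1 * 3 * 300 * 300).astype("float32")

# Reshape the flat buffer into the NCHW layout the predictor expects.
input_data = data.reshape(1, 3, 300, 300)
print(input_data.shape)  # (1, 3, 300, 300)
```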
Run inference.
predictor$zero_copy_run()
Get the output tensor (assuming a single output).
output_names <- predictor$get_output_names()
output_tensor <- predictor$get_output_tensor(output_names[1])
Parse the output data and convert it to a numpy.ndarray.
output_data <- output_tensor$copy_to_cpu()
output_data <- np_array(output_data)
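For a classifier such as the mobilenet example referenced below, typical post-processing of the copied output is an argmax over the class scores. A sketch with hypothetical data (the 1000-class output shape is an assumption for illustration, not read from a real model):

```python
import numpy as np

# Hypothetical output: scores for 1000 classes, batch size 1.
output_data = np.random.rand(1, 1000).astype("float32")

# Index of the highest-scoring class for the first (only) sample.
top_class = int(np.argmax(output_data, axis=1)[0])
print(top_class)
```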
See the full R mobilenet example and the corresponding Python mobilenet example for the code above. For more examples, see the R inference examples.
Download the Dockerfile and example to a local directory, then build the Docker image:
docker build -t paddle-rapi:latest .
Create and enter the container:
docker run --rm -it paddle-rapi:latest bash
Run the following commands in the container:
cd example
chmod +x mobilenet.r
./mobilenet.r