#include <execution_provider.h>
Definition at line 60 of file execution_provider.h.
onnxruntime::IExecutionProvider::IExecutionProvider(const std::string& type, bool use_metadef_id_creator = false)  [inline, protected]
onnxruntime::IExecutionProvider::IExecutionProvider(const std::string& type, OrtDevice device, bool use_metadef_id_creator = false)  [inline, protected]
virtual onnxruntime::IExecutionProvider::~IExecutionProvider()  [virtual, default]
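The constructors are protected, so a concrete provider supplies them through its own constructor. Below is a minimal sketch of that pattern; the class name MyExecutionProvider, the provider type string and the include path are assumptions for illustration, not part of the documented API.

    // Minimal sketch of a concrete provider calling the protected constructor.
    // MyExecutionProvider and kMyExecutionProvider are hypothetical names.
    #include "core/framework/execution_provider.h"

    namespace onnxruntime {

    constexpr const char* kMyExecutionProvider = "MyExecutionProvider";

    class MyExecutionProvider : public IExecutionProvider {
     public:
      MyExecutionProvider()
          // Enable the MetaDef id creator if this EP fuses nodes and needs
          // unique MetaDef names (see GenerateMetaDefId below).
          : IExecutionProvider(kMyExecutionProvider, /*use_metadef_id_creator=*/true) {}
    };

    }  // namespace onnxruntime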
virtual bool onnxruntime::IExecutionProvider::ConcurrentRunSupported() const  [inline, virtual]
Does the EP support concurrent calls to InferenceSession::Run to execute the model.
Definition at line 304 of file execution_provider.h.
virtual std::vector<AllocatorPtr> onnxruntime::IExecutionProvider::CreatePreferredAllocators()  [inline, virtual]
Create the preferred allocators for the current execution provider. This is a stateless function that creates new Allocator instances without storing them in the EP.
Definition at line 327 of file execution_provider.h.
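A possible override, continuing the hypothetical MyExecutionProvider sketch above (which would declare it). It assumes host memory is sufficient and that CPUAllocator is an acceptable choice; a device EP would instead create its device and pinned-host allocators here.

    // Sketch: return freshly created allocators; the EP does not store them.
    #include "core/framework/allocator.h"

    namespace onnxruntime {

    std::vector<AllocatorPtr> MyExecutionProvider::CreatePreferredAllocators() {
      // Assumption: this EP operates on host memory, so one CPUAllocator suffices.
      return {std::make_shared<CPUAllocator>()};
    }

    }  // namespace onnxruntime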
Generate a unique id that can be used in a MetaDef name. Values are unique for a model instance. The model hash is also returned if you wish to include that in the MetaDef name to ensure uniqueness across models.
- Parameters
  - graph_viewer [in]: Graph viewer that GetCapability was called with. Can be for the main graph or a nested graph.
  - model_hash [out]: Returns the hash for the main (i.e. top level) graph in the model. This is created using the model path if available, or the model input names and the output names from all nodes in the main graph.
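A hypothetical helper showing how a fusing EP might use this id when naming a MetaDef. The member name GenerateMetaDefId and the HashValue out-parameter follow the ONNX Runtime header, but treat the exact signature as an assumption since the signature block is not reproduced above; the "MyEp_" prefix and MakeMetaDefName helper are illustrative only.

    // Sketch: derive a MetaDef name that is unique per model and per call.
    // (Assumes MyExecutionProvider declares this hypothetical helper.)
    #include <string>

    namespace onnxruntime {

    std::string MyExecutionProvider::MakeMetaDefName(const GraphViewer& graph_viewer) const {
      HashValue model_hash = 0;  // filled in with the main graph's hash
      const int id = GenerateMetaDefId(graph_viewer, model_hash);
      return "MyEp_" + std::to_string(model_hash) + "_" + std::to_string(id);
    }

    }  // namespace onnxruntime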
virtual std::vector<std::unique_ptr<ComputeCapability> > onnxruntime::IExecutionProvider::GetCapability(const onnxruntime::GraphViewer& graph_viewer, const IKernelLookup& kernel_lookup) const  [virtual]
Get the execution provider's capability for the specified graph. Returns a collection of IndexedSubGraphs that this execution provider can run, if a sub-graph contains only one node, or can fuse and run, if a sub-graph contains more than one node. The node indexes contained in the sub-graphs may overlap, and it is ONNXRuntime's responsibility to do the partitioning and decide whether a node will be assigned to this execution provider. For kernels registered in a kernel registry, kernel_lookup must be used to find a matching kernel for this EP.
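A sketch of a single-node GetCapability override in the spirit of the default behaviour: claim every node this EP has a kernel for, one ComputeCapability per node, and leave fusion aside. It assumes IKernelLookup exposes a LookUpKernel(node) query returning nullptr when no kernel matches, as in the ONNX Runtime headers.

    // Sketch: claim individual nodes for which this EP has a registered kernel.
    namespace onnxruntime {

    std::vector<std::unique_ptr<ComputeCapability>> MyExecutionProvider::GetCapability(
        const onnxruntime::GraphViewer& graph_viewer,
        const IKernelLookup& kernel_lookup) const {
      std::vector<std::unique_ptr<ComputeCapability>> result;
      for (const auto& node : graph_viewer.Nodes()) {
        // Assumption: LookUpKernel returns nullptr if this EP has no kernel for the node.
        if (kernel_lookup.LookUpKernel(node) == nullptr) {
          continue;
        }
        auto sub_graph = std::make_unique<IndexedSubGraph>();
        sub_graph->nodes.push_back(node.Index());
        result.push_back(std::make_unique<ComputeCapability>(std::move(sub_graph)));
      }
      return result;
    }

    }  // namespace onnxruntime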
virtual void onnxruntime::IExecutionProvider::GetCustomOpDomainList(std::vector<OrtCustomOpDomain*>&) const  [inline, virtual]
Get the provider-specific custom op domain list. The provider is responsible for releasing the OrtCustomOpDomain instances it creates.
NOTE: When an ONNX model contains EP-specific custom nodes and the user should not be asked to register those nodes, the EP needs a way to register them itself. This API was added so that an EP can leverage the ORT custom op mechanism to register those custom nodes with one or more custom op domains.
For example, the TensorRT EP uses this API to support TRT plugins: each custom op is mapped to a TRT plugin, and no kernel implementation is needed for the custom op since the real implementation is inside TensorRT. The custom op's role is to help the model pass ONNX validation.
Definition at line 158 of file execution_provider.h.
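A sketch of a possible override for an EP that owns its custom op domains; the custom_op_domains_ member is hypothetical and stands for whatever storage the EP uses for the OrtCustomOpDomain instances it creates and later releases.

    // Sketch: hand out domain pointers owned (and eventually released) by this EP.
    namespace onnxruntime {

    void MyExecutionProvider::GetCustomOpDomainList(
        std::vector<OrtCustomOpDomain*>& custom_op_domains) const {
      for (OrtCustomOpDomain* domain : custom_op_domains_) {  // hypothetical member
        custom_op_domains.push_back(domain);
      }
    }

    }  // namespace onnxruntime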
virtual std::unique_ptr<onnxruntime::IDataTransfer> onnxruntime::IExecutionProvider::GetDataTransfer() const  [inline, virtual]
Returns a data transfer object that implements methods to copy to and from this device. If no copy is required for the successful operation of this provider, return a nullptr.
Definition at line 86 of file execution_provider.h.
virtual int onnxruntime::IExecutionProvider::GetDeviceId() const  [inline, virtual]
virtual const InlinedVector<const Node*> onnxruntime::IExecutionProvider::GetEpContextNodes() const  [inline, virtual]
Get the array of pointers to EPContext nodes. An EP needs to implement this if it is required to generate the context cache model; otherwise it can be left as is. By default an empty vector is returned if the execution provider does not provide one.
Definition at line 334 of file execution_provider.h.
virtual const void* onnxruntime::IExecutionProvider::GetExecutionHandle() const noexcept  [inline, virtual]
Returns an opaque handle whose exact type varies based on the provider and is interpreted accordingly by the corresponding kernel implementation. For Direct3D operator kernels, this may return an IUnknown supporting QueryInterface to ID3D12GraphicsCommandList1.
Definition at line 166 of file execution_provider.h.
virtual FusionStyle onnxruntime::IExecutionProvider::GetFusionStyle() const  [inline, virtual]
virtual std::shared_ptr<KernelRegistry> onnxruntime::IExecutionProvider::GetKernelRegistry() const  [inline, virtual]
Get the kernel registry for this execution provider type. The KernelRegistry shared pointer returned is shared across sessions.
NOTE: this approach was taken to achieve the following goals:
- The kernel registry for a given execution provider type should be shared across sessions. Only one copy of this kind of kernel registry exists in ONNXRuntime across multiple sessions/models.
- Adding an execution provider to ONNXRuntime does not require touching ONNXRuntime framework/session code.
- onnxruntime (framework/session) does not depend on any specific execution provider lib.
Definition at line 134 of file execution_provider.h.
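The "shared across sessions" requirement is usually met by building the registry once and returning the same shared_ptr every time. A hypothetical sketch follows; registering the EP's kernel create infos is omitted.

    // Sketch: one KernelRegistry instance per EP type, shared by all sessions.
    #include "core/framework/kernel_registry.h"

    namespace onnxruntime {

    std::shared_ptr<KernelRegistry> MyExecutionProvider::GetKernelRegistry() const {
      static std::shared_ptr<KernelRegistry> registry = []() {
        auto r = std::make_shared<KernelRegistry>();
        // A real EP would register its kernel create infos into r here.
        return r;
      }();
      return registry;
    }

    }  // namespace onnxruntime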
const logging::Logger* onnxruntime::IExecutionProvider::GetLogger() const  [inline]
virtual OrtDevice onnxruntime::IExecutionProvider::GetOrtDeviceByMemType(OrtMemType mem_type) const  [inline, virtual]
virtual DataLayout onnxruntime::IExecutionProvider::GetPreferredLayout() const  [inline, virtual]
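Judging by the signature, this reports the tensor layout the EP's kernels prefer. A minimal sketch, assuming the DataLayout enum offers an NHWC value as in the ONNX Runtime headers; whether NHWC is the right choice for a given EP is an assumption.

    // Sketch: ask the layout transformer for channels-last tensors.
    namespace onnxruntime {

    DataLayout MyExecutionProvider::GetPreferredLayout() const {
      return DataLayout::NHWC;  // assumption: this EP's kernels expect NHWC
    }

    }  // namespace onnxruntime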
virtual ProviderOptions onnxruntime::IExecutionProvider::GetProviderOptions() const  [inline, virtual]
virtual ITuningContext* onnxruntime::IExecutionProvider::GetTuningContext() const  [inline, virtual]
virtual bool onnxruntime::IExecutionProvider::IsGraphCaptured() const  [inline, virtual]
Indicate whether the graph has been captured and instantiated. Currently only the CUDA execution provider supports this.
Definition at line 210 of file execution_provider.h.
virtual bool onnxruntime::IExecutionProvider::IsGraphCaptureEnabled() const  [inline, virtual]
Indicate whether graph-capturing mode (e.g. CUDA graphs) is enabled for the provider. Currently only the CUDA execution provider supports this.
Definition at line 204 of file execution_provider.h.
virtual common::Status onnxruntime::IExecutionProvider::OnRunEnd(bool)  [inline, virtual]
Called when InferenceSession::Run has ended. NOTE that due to asynchronous execution in the provider, the actual work of this Run may not be finished on the device yet. This function should be regarded as the point at which all commands of the current Run have been submitted by the CPU.
Definition at line 198 of file execution_provider.h.
virtual common::Status onnxruntime::IExecutionProvider::OnRunStart()  [inline, virtual]
Called when InferenceSession::Run has started. NOTE that due to asynchronous execution in the provider, the actual work of the previous Run may not be finished on the device yet. This function should be regarded as the point after which a new Run starts to submit commands from the CPU.
Definition at line 190 of file execution_provider.h.
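A sketch of how a device EP might use the two run hooks together. BeginCaptureIfNeeded, EndCaptureIfNeeded and SynchronizeStream are hypothetical helpers, and treating the unnamed bool in OnRunEnd as a "synchronize the stream" request is an assumption.

    // Sketch: bracket each InferenceSession::Run with provider-side bookkeeping.
    namespace onnxruntime {

    common::Status MyExecutionProvider::OnRunStart() {
      // Work from the previous Run may still be in flight on the device here.
      BeginCaptureIfNeeded();  // hypothetical: e.g. start capturing a device graph
      return common::Status::OK();
    }

    common::Status MyExecutionProvider::OnRunEnd(bool sync_stream) {
      EndCaptureIfNeeded();    // hypothetical: finish capturing the device graph
      if (sync_stream) {
        // All commands of this Run have been submitted by the CPU; optionally
        // wait for the device to drain them before returning.
        ORT_RETURN_IF_ERROR(SynchronizeStream());  // hypothetical helper
      }
      return common::Status::OK();
    }

    }  // namespace onnxruntime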
virtual common::Status onnxruntime::IExecutionProvider::OnSessionInitializationEnd()  [inline, virtual]
Called when session creation is complete. This provides an opportunity for execution providers to optionally synchronize and clean up their temporary resources, reducing memory usage and ensuring the first run is fast.
Definition at line 223 of file execution_provider.h.
virtual common::Status onnxruntime::IExecutionProvider::ReplayGraph()  [inline, virtual]
Run the instantiated graph. Currently only the CUDA execution provider supports this.
Definition at line 216 of file execution_provider.h.
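A sketch of how the three graph-capture members (IsGraphCaptureEnabled, IsGraphCaptured, ReplayGraph) might fit together for a provider with CUDA-graph-style replay; capture_enabled_, graph_captured_ and ReplayCapturedGraph are hypothetical.

    // Sketch: graph-capture hooks for a provider with CUDA-graph-style replay.
    namespace onnxruntime {

    bool MyExecutionProvider::IsGraphCaptureEnabled() const {
      return capture_enabled_;   // hypothetical flag, e.g. set from a provider option
    }

    bool MyExecutionProvider::IsGraphCaptured() const {
      return graph_captured_;    // hypothetical flag, set once capture + instantiation succeed
    }

    common::Status MyExecutionProvider::ReplayGraph() {
      // Launch the previously captured and instantiated device graph.
      return ReplayCapturedGraph();  // hypothetical helper
    }

    }  // namespace onnxruntime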
virtual common::Status onnxruntime::IExecutionProvider::Sync() const  [inline, virtual]
Blocks until the device has completed all preceding requested tasks. Currently this is primarily used by the IOBinding object to ensure that all inputs have been copied to the device before execution begins.
Definition at line 182 of file execution_provider.h.
const std::string& onnxruntime::IExecutionProvider::Type() const  [inline]
- Returns
- type of the execution provider; should match that set in the node through the SetExecutionProvider API. Example valid return values are: kCpuExecutionProvider, kCudaExecutionProvider
Definition at line 175 of file execution_provider.h.
const OrtDevice onnxruntime::IExecutionProvider::default_device_  [protected]
The documentation for this class was generated from the following file: execution_provider.h