Commit e695cd3
Dnnl refactor (#8627)
* dnnl ep rework
- Rework DnnlTensor, DnnlNode, and DnnlSubgraph to support arbitrary graph topologies and tensor data types.
- Rework GetCapability to claim nodes greedily from the graph's topological ordering and delay creation of the DnnlSubgraph until Compile.
- Rework Compile so that DnnlSubgraphPrimitive is the object that handles primitive creation and execution, replacing the thread-local primitive pool that duplicated the EP's intermediate memory allocations across threads.
- DnnlSubgraphPrimitive provides helpers for the common tasks of each dnnl primitive builder and becomes the centralized place to store input, output, intermediate, initializer, and other memories. It provides functions to obtain input memories with automatic reordering/reshaping and moving between engines, plus interfaces to add a primitive, set the output memory for a single node, and so on.
- Add the CONCURRENT_EXEC compile flag for the dnnl library; without it, a convolution primitive cannot be created and executed on different threads.
- Enable unit tests to also run on the dnnl ep when built with the dnnl ep.
- Add dnnl ep support for MatMulInteger.
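The greedy claiming step above can be sketched roughly as follows. This is an illustrative sketch only; `Node` and `ClaimNodesGreedily` are hypothetical names, not the actual ONNX Runtime API. The idea is to walk the topological ordering and group consecutive supported nodes into candidate subgraphs, starting a new group at each unsupported node:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for a graph node.
struct Node { std::string op_type; };

// Walk nodes in topological order and greedily collect runs of supported
// nodes; each run becomes a candidate subgraph for the EP to claim.
std::vector<std::vector<size_t>> ClaimNodesGreedily(
    const std::vector<Node>& topo_order,
    bool (*is_supported)(const Node&)) {
  std::vector<std::vector<size_t>> subgraphs;
  std::vector<size_t> current;
  for (size_t i = 0; i < topo_order.size(); ++i) {
    if (is_supported(topo_order[i])) {
      current.push_back(i);          // claim this node for the EP
    } else if (!current.empty()) {
      subgraphs.push_back(current);  // unsupported node closes the run
      current.clear();
    }
  }
  if (!current.empty()) subgraphs.push_back(current);
  return subgraphs;
}
```

For example, with nodes [Conv, Relu, Loop, MatMul] where Loop is unsupported, this yields two claimed groups: {Conv, Relu} and {MatMul}.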
* Add Relu to the DNNL refactor
Signed-off-by: George Nash <[email protected]>
* Add Convolution op to the DNNL rework
Signed-off-by: George Nash <[email protected]>
* Add Pooling ops to the DNNL rework
This adds the following ops:
- AveragePool
- GlobalAveragePool
- GlobalMaxPool
- MaxPool
Note: Pooling with dilation is not yet supported.
Note: GlobalLpPool, LpPool, MaxRoiPool, and MaxUnpool are not supported yet.
Signed-off-by: George Nash <[email protected]>
* Add Sum op to the DNNL rework
Signed-off-by: George Nash <[email protected]>
* Add ConvGrad op to the DNNL rework
Signed-off-by: George Nash <[email protected]>
* Add MaxPoolGrad and AveragePoolGrad ops to DNNL rework
Signed-off-by: George Nash <[email protected]>
* Added LRN operator to the refactored code
Signed-off-by: [email protected]
* Added ReduceMean DNNL op to the refactor code
Signed-off-by: Chethan Palangotu Keshava <[email protected]>
* Added Softmax DNNL op for the refactored code
Signed-off-by: Chethan Palangotu Keshava <[email protected]>
* Added BatchNorm DNNL op inference-only for refactored code
Signed-off-by: Chethan Palangotu Keshava <[email protected]>
* Added Binary Ops to DNNL rework
Signed-off-by: Wang <[email protected]>
* Added ReluGrad to DNNL Rework
Signed-off-by: Wang <[email protected]>
* Update OneDNN tag to v2.3
Signed-off-by: Wang <[email protected]>
* Added support for memory up to dim size 12
This fixes the CI test cases that contain binary ops with input dim
size > 5.
Signed-off-by: Wang <[email protected]>
* Prevent claiming support for float16 and bfloat16 when only float is supported
The string.find call previously used caused the code to claim support
for float16 and bfloat16 when only float was supported. We now explicitly
check for the data type token including its 7-character "tensor(" prefix
(e.g. "tensor(float)") rather than matching a bare substring.
Signed-off-by: George Nash <[email protected]>
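The pitfall above can be shown in a minimal sketch (these helper names are hypothetical; ONNX type strings really do take the `tensor(float)` form): matching the bare type name with `find()` also matches `"float16"` and `"bfloat16"`, while matching the full token only accepts plain float.

```cpp
#include <string>

// Too permissive: "float" is a substring of "float16" and "bfloat16",
// so this wrongly claims support for those types as well.
bool NaiveSupportsFloat(const std::string& type_str) {
  return type_str.find("float") != std::string::npos;
}

// Exact match against the full "tensor(float)" token, i.e. the type
// name together with its "tensor(" prefix and closing paren.
bool SupportsFloatExact(const std::string& type_str) {
  return type_str == "tensor(float)";
}
```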
* Disable uint8 mul and div, improve type conversion
Disable the mul_uint8 and div_uint8 test cases as they use modulo for
overflow handling while onednn uses saturation.
Improve type conversion by using an enum instead of string comparison,
and add more types.
Signed-off-by: Wang <[email protected]>
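The modulo-vs-saturation mismatch above is why the two backends legitimately disagree on overflowing uint8 inputs. A minimal sketch of the two behaviors (hypothetical helper names, not code from either library):

```cpp
#include <algorithm>
#include <cstdint>

// Wraparound semantics: the product is reduced modulo 256 when
// narrowed back to uint8_t, matching the disabled reference tests.
uint8_t MulModulo(uint8_t a, uint8_t b) {
  return static_cast<uint8_t>(a * b);
}

// Saturation semantics: the product is clamped to the uint8 maximum,
// matching what oneDNN does on overflow.
uint8_t MulSaturate(uint8_t a, uint8_t b) {
  int wide = static_cast<int>(a) * static_cast<int>(b);
  return static_cast<uint8_t>(std::min(wide, 255));
}
```

For 200 * 2 = 400, wraparound yields 400 mod 256 = 144 while saturation yields 255, so a test that expects one convention fails on a backend that uses the other.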
Co-authored-by: Wang <[email protected]>
Co-authored-by: Chethan Palangotu Keshava <[email protected]>
File tree (52 files changed, +3829 -6748 lines):
- cmake/external
- onnxruntime
  - core/providers/dnnl
    - subgraph
  - test/providers
    - cpu
      - nn
      - reduction
- orttraining/orttraining/test/gradient