
Commit e695cd3

georgen117, zhaoyang-intel, and chethanpk authored
Dnnl refactor (#8627)
* dnnl ep rework
  - Reworked DnnlTensor, DnnlNode, and DnnlSubgraph to support arbitrary graph topologies and tensor data types.
  - Reworked GetCapability to claim nodes greedily from the graph's topological ordering, and to delay creation of the DnnlSubgraph until Compile.
  - Reworked Compile so that DnnlSubgraphPrimitive is the object that handles primitive creation and execution, replacing the thread-local primitive pool, which duplicated the intermediate memory allocated by the EP across threads.
  - DnnlSubgraphPrimitive provides helpers for many functions common to the dnnl primitive builders and becomes the centralized place to store input, output, intermediate, initializer, and other memories. It provides functions to obtain input memories, with automatic reordering/reshaping and moving between engines, as well as interfaces to add a primitive, set the output memory for a single node, etc.
  - Added the CONCURRENT_EXEC compile flag for the dnnl library; without it, a convolution primitive cannot be created and executed on different threads.
  - Enabled the unit tests to also run on the dnnl EP when built with the dnnl EP.
  - Added dnnl EP support for MatMulInteger.
* Add Relu to the DNNL refactor
  Signed-off-by: George Nash <[email protected]>
* Add Convolution op to the DNNL rework
  Signed-off-by: George Nash <[email protected]>
* Add Pooling ops to the DNNL rework
  This adds the following ops: AveragePool, GlobalAveragePool, GlobalMaxPool, and MaxPool.
  Note: pooling with dilation is not yet supported. GlobalLpPool, LpPool, MaxRoiPool, and MaxUnpool are not supported yet.
  Signed-off-by: George Nash <[email protected]>
* Add Sum op to the DNNL rework
  Signed-off-by: George Nash <[email protected]>
* Add ConvGrad op to the DNNL rework
  Signed-off-by: George Nash <[email protected]>
* Add MaxPoolGrad and AveragePoolGrad ops to the DNNL rework
  Signed-off-by: George Nash <[email protected]>
* Add the LRN operator to the refactored code
  Signed-off-by: [email protected]
* Add the ReduceMean DNNL op to the refactored code
  Signed-off-by: Chethan Palangotu Keshava <[email protected]>
* Add the Softmax DNNL op to the refactored code
  Signed-off-by: Chethan Palangotu Keshava <[email protected]>
* Add the BatchNorm DNNL op (inference only) to the refactored code
  Signed-off-by: Chethan Palangotu Keshava <[email protected]>
* Add binary ops to the DNNL rework
  Signed-off-by: Wang <[email protected]>
* Add ReluGrad to the DNNL rework
  Signed-off-by: Wang <[email protected]>
* Update the OneDNN tag to v2.3
  Signed-off-by: Wang <[email protected]>
* Add support for memory up to dim size 12
  This fixes the CI test cases that contain binary ops with input dim size > 5.
  Signed-off-by: Wang <[email protected]>
* Prevent claiming support for float16 and bfloat16 when only float is supported
  The string.find call that was used caused the code to claim support for float16 and bfloat16 when only float was supported. We now explicitly check for the data type, or for the data type with the 7-letter prefix "tensor(".
  Signed-off-by: George Nash <[email protected]>
* Disable uint8 Mul and Div; improve type conversion
  The mul_uint8 and div_uint8 test cases are disabled because they use modulo for overflow handling while oneDNN uses saturation. Type conversion is improved by using an enum instead of string comparison, and more types are added.
  Signed-off-by: Wang <[email protected]>

Co-authored-by: Wang <[email protected]>
Co-authored-by: Chethan Palangotu Keshava <[email protected]>
1 parent f04a235 commit e695cd3


52 files changed: +3829 -6748 lines changed

cmake/external/dnnl.cmake

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@ include (ExternalProject)
 
 set(DNNL_URL https://github.com/oneapi-src/onednn)
 # If DNNL_TAG is updated, check if MKLML_VERSION and platform.cmake.patch need to be updated.
-set(DNNL_TAG v2.2)
+set(DNNL_TAG v2.3)
 
 if(WIN32)
   set(DNNL_SHARED_LIB dnnl.dll)
@@ -51,7 +51,7 @@ if (onnxruntime_USE_DNNL)
     GIT_TAG ${DNNL_TAG}
     # PATCH_COMMAND ${MKLDNN_PATCH_DISCARD_COMMAND} COMMAND ${DNNL_PATCH_COMMAND}
     SOURCE_DIR ${DNNL_SOURCE}
-    CMAKE_ARGS -DDNNL_BUILD_TESTS=OFF -DDNNL_BUILD_EXAMPLES=OFF -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} -DCMAKE_INSTALL_PREFIX=${DNNL_INSTALL} ${DNNL_GPU_CMAKE_ARGS}
+    CMAKE_ARGS -DDNNL_BUILD_TESTS=OFF -DDNNL_ENABLE_CONCURRENT_EXEC=ON -DDNNL_BUILD_EXAMPLES=OFF -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} -DCMAKE_INSTALL_PREFIX=${DNNL_INSTALL} ${DNNL_GPU_CMAKE_ARGS}
   )
   link_directories(${DNNL_LIB_DIR})
 endif()

onnxruntime/core/providers/dnnl/dnnl_common.h

Lines changed: 0 additions & 117 deletions
This file was deleted.

0 commit comments