Support numpy.prod operation #30212 #21873
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Are you sure you want to change the base?
Conversation
Summary of Changes (Gemini Code Assist): This pull request aims to broaden the capabilities of the Keras OpenVINO backend by integrating support for a wider array of NumPy operations. By implementing these functions, the backend can now process more complex numerical computations directly, which is crucial for improving the performance and compatibility of Keras models when deployed with OpenVINO. This change reduces the need for fallback mechanisms and ensures a more seamless execution environment.
Code Review
This pull request adds support for several numpy operations to the OpenVINO backend, including argsort, bincount, diff, dot, empty, full_like, and hstack. This is a great contribution to improving the feature parity of the backend. However, the PR title "Support numpy.prod operation #30212" is misleading, as the prod operation is not implemented in this pull request; it still raises a NotImplementedError. Please consider updating the PR title to reflect the actual changes. I've found a few issues in the implementations, including a critical bug in hstack and a potential correctness issue in dot. I've also included some suggestions to improve code maintainability. Please see my detailed comments.
```python
element_type = None
for x in xs:
    if isinstance(x, OpenVINOKerasTensor):
        element_type = x.output.get_element_type()
        break
xs = [get_ov_output(x, element_type) for x in xs]
xs = _align_operand_types(xs[0], xs[1], "hstack()")
```
The type alignment for the input tensors in hstack is incorrect. The line xs = _align_operand_types(xs[0], xs[1], "hstack()") only aligns the first two tensors and then overwrites the xs list with a tuple containing only these two aligned tensors. If xs contains more than two tensors, the rest are discarded, leading to incorrect results. All tensors in xs should be aligned to a common data type before concatenation.
```diff
- element_type = None
- for x in xs:
-     if isinstance(x, OpenVINOKerasTensor):
-         element_type = x.output.get_element_type()
-         break
- xs = [get_ov_output(x, element_type) for x in xs]
- xs = _align_operand_types(xs[0], xs[1], "hstack()")
+ xs = [get_ov_output(x) for x in xs]
+ if len(xs) > 1:
+     result_keras_type = dtypes.result_type(
+         *[ov_to_keras_type(x.get_element_type()) for x in xs]
+     )
+     result_ov_type = OPENVINO_DTYPES[result_keras_type]
+     aligned_xs = []
+     for x in xs:
+         if x.get_element_type() != result_ov_type:
+             aligned_xs.append(ov_opset.convert(x, result_ov_type).output(0))
+         else:
+             aligned_xs.append(x)
+     xs = aligned_xs
```
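The promotion logic in the suggestion mirrors NumPy's own type-promotion rules. As a plain-NumPy sketch (illustrative only, not the backend code): all inputs are promoted to one common dtype before concatenation, rather than aligning only the first two.

```python
import numpy as np

# Three inputs with different dtypes, as hstack might receive.
arrs = [
    np.array([1], dtype=np.int32),
    np.array([2.0], dtype=np.float32),
    np.array([3], dtype=np.int64),
]

# Promote ALL inputs to a single common dtype before concatenating,
# instead of aligning only the first two and discarding the rest.
common = np.result_type(*arrs)
aligned = [a.astype(common) for a in arrs]
out = np.hstack(aligned)
```

Aligning only `arrs[0]` and `arrs[1]` would both silently drop `arrs[2]` and miss the promotion it forces (here `int64` + `float32` promotes to `float64`).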
keras/src/backend/openvino/numpy.py (Outdated)
```diff
  def dot(x, y):
-     raise NotImplementedError("`dot` is not supported with openvino backend")
+     element_type = None
+     if isinstance(x, OpenVINOKerasTensor):
+         element_type = x.output.get_element_type()
+     if isinstance(y, OpenVINOKerasTensor):
+         element_type = y.output.get_element_type()
+     x = get_ov_output(x, element_type)
+     y = get_ov_output(y, element_type)
+     x, y = _align_operand_types(x, y, "dot()")
+     if x.get_partial_shape().rank == 0 or y.get_partial_shape().rank == 0:
+         return OpenVINOKerasTensor(ov_opset.multiply(x, y).output(0))
+     return OpenVINOKerasTensor(ov_opset.matmul(x, y, False, False).output(0))
```
The implementation of dot using matmul is not entirely correct for all cases. While matmul and dot are equivalent for 1D and 2D tensors, their behavior differs for higher-dimensional arrays. numpy.dot(a, b) is equivalent to numpy.tensordot(a, b, axes=([-1], [-2])) for N-D arrays, which is different from numpy.matmul's broadcasting behavior. This implementation might lead to incorrect results for arrays with more than 2 dimensions. Since tensordot is not yet implemented for this backend, this might be a temporary solution, but it's worth noting this potential discrepancy.
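The discrepancy is easy to reproduce in NumPy itself; this quick illustration is independent of the backend code:

```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((2, 4, 5))

# matmul broadcasts the leading (batch) dimensions.
mm = np.matmul(a, b)   # shape (2, 3, 5)

# dot sums over the last axis of `a` and the second-to-last axis of `b`,
# pairing every "batch" of `a` with every "batch" of `b`.
dt = np.dot(a, b)      # shape (2, 3, 2, 5)

# For N-D inputs, dot is equivalent to tensordot over those axes.
td = np.tensordot(a, b, axes=([-1], [-2]))
```

For 1-D and 2-D inputs the two agree, which is why the `matmul`-based implementation is a reasonable stopgap; the shapes above show where it would diverge.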
```python
shape_x = ov_opset.shape_of(x, "i64").output(0)
rank_x = ov_opset.shape_of(shape_x, "i64").output(0)
rank_x = ov_opset.convert(rank_x, x_type).output(0)
scalar_shape = ov_opset.constant([], x_type).output(0)
rank_x = ov_opset.reshape(rank_x, scalar_shape, False).output(0)
```
The calculation of rank_x is a bit complex. You can simplify this by using ov_opset.rank(x) which directly returns the rank of the tensor as a scalar tensor. This would make the code more readable and concise.
```diff
- shape_x = ov_opset.shape_of(x, "i64").output(0)
- rank_x = ov_opset.shape_of(shape_x, "i64").output(0)
- rank_x = ov_opset.convert(rank_x, x_type).output(0)
- scalar_shape = ov_opset.constant([], x_type).output(0)
- rank_x = ov_opset.reshape(rank_x, scalar_shape, False).output(0)
+ rank_x = ov_opset.rank(x).output(0)
+ rank_x = ov_opset.convert(rank_x, x_type).output(0)
```
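Why the original pattern works at all can be seen in a NumPy analogy (purely illustrative; NumPy arrays stand in for OpenVINO tensors here):

```python
import numpy as np

x = np.zeros((2, 3, 4))

# ShapeOf(x) -> a 1-D tensor holding the shape, here [2, 3, 4].
shape_x = np.array(x.shape)

# ShapeOf(ShapeOf(x)) -> a 1-D tensor of length 1 holding the rank, here [3].
rank_vec = np.array(shape_x.shape)

# Reshape with an empty target shape -> a 0-D (scalar) tensor.
rank_x = rank_vec.reshape(())
```

Collapsing those three steps into a single rank-returning op, as the suggestion proposes, removes the intermediate reshape entirely.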
```diff
  def empty(shape, dtype=None):
-     raise NotImplementedError("`empty` is not supported with openvino backend")
+     dtype = standardize_dtype(dtype) or config.floatx()
+     ov_type = OPENVINO_DTYPES[dtype]
+     if isinstance(shape, tuple):
+         shape = list(shape)
+     elif isinstance(shape, int):
+         shape = [shape]
+     shape_node = ov_opset.constant(shape, Type.i32).output(0)
+     const_zero = ov_opset.constant(0, dtype=ov_type).output(0)
+     empty_tensor = ov_opset.broadcast(const_zero, shape_node).output(0)
+     return OpenVINOKerasTensor(empty_tensor)
```
The implementation of empty is almost identical to the zeros function. To reduce code duplication and improve maintainability, you could implement empty by calling zeros. Note that numpy.empty creates an uninitialized array, but creating an array of zeros is a safer and common alternative when uninitialized arrays are not supported.
```diff
- def empty(shape, dtype=None):
-     raise NotImplementedError("`empty` is not supported with openvino backend")
-     dtype = standardize_dtype(dtype) or config.floatx()
-     ov_type = OPENVINO_DTYPES[dtype]
-     if isinstance(shape, tuple):
-         shape = list(shape)
-     elif isinstance(shape, int):
-         shape = [shape]
-     shape_node = ov_opset.constant(shape, Type.i32).output(0)
-     const_zero = ov_opset.constant(0, dtype=ov_type).output(0)
-     empty_tensor = ov_opset.broadcast(const_zero, shape_node).output(0)
-     return OpenVINOKerasTensor(empty_tensor)
+ def empty(shape, dtype=None):
+     return zeros(shape, dtype=dtype)
```
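The trade-off the comment describes can be seen directly in NumPy (an illustration, not the backend code):

```python
import numpy as np

# np.empty returns uninitialized memory: the contents are arbitrary and
# may differ between runs, so callers must write before reading.
e = np.empty((2, 3), dtype=np.float32)

# np.zeros is deterministic. Since empty's contract only guarantees
# shape and dtype, a zero-filled buffer is a safe substitute for a
# backend that cannot hand out uninitialized tensors.
z = np.zeros((2, 3), dtype=np.float32)
```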
No description provided.