Conversation

@Hmm-1224
Contributor

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Hmm-1224, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request aims to broaden the capabilities of the Keras OpenVINO backend by integrating support for a wider array of NumPy operations. By implementing these functions, the backend can now process more complex numerical computations directly, which is crucial for improving the performance and compatibility of Keras models when deployed with OpenVINO. This change reduces the need for fallback mechanisms and ensures a more seamless execution environment.

Highlights

  • Expanded NumPy Operations Support: This pull request significantly enhances the OpenVINO backend's compatibility by implementing several previously unsupported NumPy operations, including argsort, bincount, diff, dot, empty, full_like, and hstack.
  • Test Exclusions Removed: Corresponding entries for the newly implemented operations have been removed from excluded_concrete_tests.txt, indicating that these functionalities are now correctly handled by the OpenVINO backend.
  • Improved Type Handling for exp: The exp function now includes logic to convert integral input types to float, ensuring correct behavior with the OpenVINO operations.
  • Refined expand_dims Functionality: The expand_dims function has been updated to properly handle axis parameters provided as tuples, improving flexibility.
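The last two highlights track NumPy's own reference semantics, which can be checked directly. The sketch below is plain NumPy (not the backend code under review), shown only to illustrate the behavior the backend is matching:

```python
import numpy as np

# exp is only defined on floating dtypes in most backends; NumPy
# promotes integral inputs to a float dtype before applying it.
x = np.array([0, 1, 2], dtype="int32")
print(np.exp(x).dtype)  # float64 -- integral input comes back as float

# expand_dims accepts a tuple of axes (NumPy >= 1.18), inserting a
# size-1 dimension at each listed position.
y = np.ones((2, 3))
print(np.expand_dims(y, axis=(0, 2)).shape)  # (1, 2, 1, 3)
```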
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature              | Command               | Description                                                                                      |
| -------------------- | --------------------- | ------------------------------------------------------------------------------------------------ |
| Code Review          | /gemini review        | Performs a code review for the current pull request in its current state.                         |
| Pull Request Summary | /gemini summary       | Provides a summary of the current pull request in its current state.                              |
| Comment              | @gemini-code-assist   | Responds in comments when explicitly tagged, both in pull request comments and review comments.   |
| Help                 | /gemini help          | Displays a list of available commands.                                                            |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for several numpy operations to the OpenVINO backend, including argsort, bincount, diff, dot, empty, full_like, and hstack. This is a great contribution to improving the feature parity of the backend. However, the PR title "Support numpy.prod operation #30212" is misleading, as the prod operation is not implemented in this pull request; it still raises a NotImplementedError. Please consider updating the PR title to reflect the actual changes. I've found a few issues in the implementations, including a critical bug in hstack and a potential correctness issue in dot. I've also included some suggestions to improve code maintainability. Please see my detailed comments.

Comment on lines +771 to +777
```python
    element_type = None
    for x in xs:
        if isinstance(x, OpenVINOKerasTensor):
            element_type = x.output.get_element_type()
            break
    xs = [get_ov_output(x, element_type) for x in xs]
    xs = _align_operand_types(xs[0], xs[1], "hstack()")
```

critical

The type alignment for the input tensors in hstack is incorrect. The line xs = _align_operand_types(xs[0], xs[1], "hstack()") only aligns the first two tensors and then overwrites the xs list with a tuple containing only these two aligned tensors. If xs contains more than two tensors, the rest are discarded, leading to incorrect results. All tensors in xs should be aligned to a common data type before concatenation.

Suggested change

```diff
-    element_type = None
-    for x in xs:
-        if isinstance(x, OpenVINOKerasTensor):
-            element_type = x.output.get_element_type()
-            break
-    xs = [get_ov_output(x, element_type) for x in xs]
-    xs = _align_operand_types(xs[0], xs[1], "hstack()")
+    xs = [get_ov_output(x) for x in xs]
+    if len(xs) > 1:
+        result_keras_type = dtypes.result_type(
+            *[ov_to_keras_type(x.get_element_type()) for x in xs]
+        )
+        result_ov_type = OPENVINO_DTYPES[result_keras_type]
+        aligned_xs = []
+        for x in xs:
+            if x.get_element_type() != result_ov_type:
+                aligned_xs.append(ov_opset.convert(x, result_ov_type).output(0))
+            else:
+                aligned_xs.append(x)
+        xs = aligned_xs
```
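The suggested fix mirrors what np.hstack itself does: promote every input to one common result dtype, however many arrays are passed. A minimal check in plain NumPy (not the backend code):

```python
import numpy as np

# np.hstack promotes *all* inputs to a single common result dtype
# before concatenating -- not just the first two.
xs = [
    np.array([1, 2], dtype="int32"),
    np.array([3.0, 4.0], dtype="float32"),
    np.array([5, 6], dtype="int64"),
]
out = np.hstack(xs)
print(out.dtype, out.shape)  # float64 (6,) -- all six elements kept
```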

Comment on lines 649 to +660
```diff
 def dot(x, y):
-    raise NotImplementedError("`dot` is not supported with openvino backend")
+    element_type = None
+    if isinstance(x, OpenVINOKerasTensor):
+        element_type = x.output.get_element_type()
+    if isinstance(y, OpenVINOKerasTensor):
+        element_type = y.output.get_element_type()
+    x = get_ov_output(x, element_type)
+    y = get_ov_output(y, element_type)
+    x, y = _align_operand_types(x, y, "dot()")
+    if x.get_partial_shape().rank == 0 or y.get_partial_shape().rank == 0:
+        return OpenVINOKerasTensor(ov_opset.multiply(x, y).output(0))
+    return OpenVINOKerasTensor(ov_opset.matmul(x, y, False, False).output(0))
```

high

The implementation of dot using matmul is not entirely correct for all cases. While matmul and dot are equivalent for 1D and 2D tensors, their behavior differs for higher-dimensional arrays. numpy.dot(a, b) is equivalent to numpy.tensordot(a, b, axes=([-1], [-2])) for N-D arrays, which is different from numpy.matmul's broadcasting behavior. This implementation might lead to incorrect results for arrays with more than 2 dimensions. Since tensordot is not yet implemented for this backend, this might be a temporary solution, but it's worth noting this potential discrepancy.
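The divergence described above is easy to reproduce in plain NumPy, independent of the backend code:

```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((5, 4, 6))

# np.dot contracts a's last axis with b's second-to-last axis and
# stacks all remaining axes: (2, 3) + (5,) + (6,) -> (2, 3, 5, 6).
print(np.dot(a, b).shape)  # (2, 3, 5, 6)

# np.matmul instead broadcasts the leading (batch) axes; (2, 3) and
# (5,) do not broadcast, so the same inputs raise a ValueError.
try:
    np.matmul(a, b)
except ValueError:
    print("matmul rejects these shapes")

# tensordot with axes=([-1], [-2]) reproduces np.dot for N-D inputs.
assert np.allclose(np.dot(a, b), np.tensordot(a, b, axes=([-1], [-2])))
```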

Comment on lines +416 to +420
```python
    shape_x = ov_opset.shape_of(x, "i64").output(0)
    rank_x = ov_opset.shape_of(shape_x, "i64").output(0)
    rank_x = ov_opset.convert(rank_x, x_type).output(0)
    scalar_shape = ov_opset.constant([], x_type).output(0)
    rank_x = ov_opset.reshape(rank_x, scalar_shape, False).output(0)
```

medium

The calculation of rank_x is a bit complex. You can simplify this by using ov_opset.rank(x) which directly returns the rank of the tensor as a scalar tensor. This would make the code more readable and concise.

Suggested change

```diff
-    shape_x = ov_opset.shape_of(x, "i64").output(0)
-    rank_x = ov_opset.shape_of(shape_x, "i64").output(0)
-    rank_x = ov_opset.convert(rank_x, x_type).output(0)
-    scalar_shape = ov_opset.constant([], x_type).output(0)
-    rank_x = ov_opset.reshape(rank_x, scalar_shape, False).output(0)
+    rank_x = ov_opset.rank(x).output(0)
+    rank_x = ov_opset.convert(rank_x, x_type).output(0)
```

Comment on lines 663 to +673
```diff
 def empty(shape, dtype=None):
-    raise NotImplementedError("`empty` is not supported with openvino backend")
+    dtype = standardize_dtype(dtype) or config.floatx()
+    ov_type = OPENVINO_DTYPES[dtype]
+    if isinstance(shape, tuple):
+        shape = list(shape)
+    elif isinstance(shape, int):
+        shape = [shape]
+    shape_node = ov_opset.constant(shape, Type.i32).output(0)
+    const_zero = ov_opset.constant(0, dtype=ov_type).output(0)
+    empty_tensor = ov_opset.broadcast(const_zero, shape_node).output(0)
+    return OpenVINOKerasTensor(empty_tensor)
```

medium

The implementation of empty is almost identical to the zeros function. To reduce code duplication and improve maintainability, you could implement empty by calling zeros. Note that numpy.empty creates an uninitialized array, but creating an array of zeros is a safer and common alternative when uninitialized arrays are not supported.

Suggested change

```diff
 def empty(shape, dtype=None):
-    dtype = standardize_dtype(dtype) or config.floatx()
-    ov_type = OPENVINO_DTYPES[dtype]
-    if isinstance(shape, tuple):
-        shape = list(shape)
-    elif isinstance(shape, int):
-        shape = [shape]
-    shape_node = ov_opset.constant(shape, Type.i32).output(0)
-    const_zero = ov_opset.constant(0, dtype=ov_type).output(0)
-    empty_tensor = ov_opset.broadcast(const_zero, shape_node).output(0)
-    return OpenVINOKerasTensor(empty_tensor)
+    return zeros(shape, dtype=dtype)
```
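A quick NumPy illustration of why zeros is a safe stand-in for empty: the documented contract of empty covers only shape and dtype, never the contents. This is plain NumPy, not the backend code:

```python
import numpy as np

# np.empty only allocates; the contents are arbitrary leftover memory.
e = np.empty((2, 3), dtype="float32")
print(e.shape, e.dtype)  # (2, 3) float32 -- values are undefined

# A backend that cannot hand out uninitialized memory can return zeros
# instead: any program relying only on empty's documented contract
# (shape and dtype) behaves identically.
z = np.zeros((2, 3), dtype="float32")
assert z.shape == e.shape and z.dtype == e.dtype
```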
