Conversation

@zewenli98 (Collaborator) commented Oct 28, 2025

Description

Weak typing behavior in TensorRT is deprecated, but it is a good way to maximize performance. Therefore, we want to create a similar PyTorch-native system for Torch-TensorRT that recovers some of this behavior.

Fixes #3869

Type of change

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

@zewenli98 self-assigned this Oct 28, 2025
@meta-cla bot added the cla signed label Oct 28, 2025
@github-actions bot added labels: component: lowering, component: conversion, component: core, component: api [Python], component: runtime, component: dynamo (Oct 28, 2025)
@github-actions bot requested a review from apbose October 28, 2025 05:16
@zewenli98 removed the request for review from apbose October 28, 2025 05:16
@github-actions bot removed the component: conversion label Oct 29, 2025
Comment on lines 437 to 444
enable_autocast: bool = _defaults.ENABLE_AUTOCAST,
low_precision_type: Optional[
Union[torch.dtype, dtype]
] = _defaults.LOW_PRECISION_TYPE,
nodes_to_exclude: Collection[str] = _defaults.NODES_TO_EXCLUDE,
targets_to_exclude: Collection[Target] = _defaults.TARGETS_TO_EXCLUDE,
data_max: float = _defaults.DATA_MAX,
max_depth_of_reduction: Optional[int] = _defaults.MAX_DEPTH_OF_REDUCTION,
@zewenli98 (Collaborator, Author) commented:
Before merging, these args should be added to other compile functions in this file.
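
For context, a minimal usage sketch of how these new args might be passed through torch_tensorrt.compile. The kwarg names come from the signature above; their exact semantics are defined by this PR, and the model, shapes, and values here are purely illustrative:

import torch
import torch_tensorrt

class TinyMLP(torch.nn.Module):  # hypothetical toy model for illustration
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(1024, 1024)
        self.fc2 = torch.nn.Linear(1024, 256)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyMLP().eval().cuda()
inputs = [torch.randn(8, 1024).cuda()]

trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=inputs,
    enable_autocast=True,              # turn on the rule-based autocast pass
    low_precision_type=torch.float16,  # precision eligible nodes are lowered to
    nodes_to_exclude={"fc2"},          # node names to keep in full precision
    data_max=512.0,                    # nodes seeing larger values stay in fp32
    max_depth_of_reduction=1024,       # deeper reductions stay in fp32
)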

]:
# GEMM: A (M, K) @ B (K, N) = C (M, N)
self.reduction_depth = input_0_dims[-1]
# TODO: Add more reduction ops here
@zewenli98 (Collaborator, Author) commented:

Should any more reduction targets be added?
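
For discussion, a hedged sketch of how reduction depth could be generalized beyond GEMM. The helper name and the rule for sum are assumptions for illustration, not the PR's final list (GEMM mirrors the rule above: depth = K for A(M, K) @ B(K, N)):

import torch
from torch.fx.node import Target

# Hypothetical helper: estimate how many elements a node folds into each
# output element, so deep reductions can be kept in higher precision.
def estimate_reduction_depth(target: Target, input_shape, dims=None) -> int:
    if target == torch.ops.aten.mm.default:
        return input_shape[-1]  # GEMM reduces over K, the last dim of A
    if target == torch.ops.aten.sum.dim_IntList:
        reduce_dims = dims if dims is not None else range(len(input_shape))
        depth = 1
        for d in reduce_dims:
            depth *= input_shape[d]  # elements folded into one output value
        return depth
    return 0  # unknown target: treat as no reduction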

Comment on lines 374 to 377
assert (
contiguous_inputs[i].dtype == self.input_dtypes[i]
), f"Dtype mismatch for {i}th input({input_name}). Expect {self.input_dtypes[i]}, got {contiguous_inputs[i].dtype}."

@zewenli98 (Collaborator, Author) commented Oct 29, 2025:
This precision check was removed because, after autocasting, if the first layer runs in fp16 while the original input is fp32, input_dtypes becomes fp16 but contiguous_inputs remains fp32, so the assert would fail on valid inputs.

Similarly, the check was removed from the other runtimes as well.
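
To make the failure mode concrete, reusing the names from the snippet above: a hypothetical alternative to dropping the assert would be to cast on mismatch instead of failing, at the cost of a silent copy (sketch only, not what this PR does):

# self.input_dtypes[i] is what the autocast-rewritten engine expects;
# contiguous_inputs[i] is what the user actually passed (often still fp32).
if contiguous_inputs[i].dtype != self.input_dtypes[i]:
    contiguous_inputs[i] = contiguous_inputs[i].to(self.input_dtypes[i])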

Comment on lines 33 to 46
# nodes = list(gm.graph.nodes)
# # insert enter autocast node in the beginning of the graph
# with gm.graph.inserting_before(nodes[0]):
# enter_autocast_node = gm.graph.call_function(torch.amp.autocast_mode._enter_autocast, args=("cuda", torch.float16, True, True))
# enter_autocast_node.meta.update(getattr(nodes[0], "meta", {}))

# # insert exit autocast node before the return node, assuming the return node is the last node
# with gm.graph.inserting_before(nodes[-1]):
# exit_autocast_node = gm.graph.call_function(torch.amp.autocast_mode._exit_autocast, args=(enter_autocast_node,))
# exit_autocast_node.meta.update(getattr(nodes[-1], "meta", {}))

# gm = clean_up_graph_after_modifications(gm)
# gm, new_signature = replace_autocast_with_hop_pass(gm, None)
# logger.debug("Graph after replace_autocast_with_hop_pass:\n%s", gm.graph)
@zewenli98 (Collaborator, Author) commented:
If PyTorch autocast is used to wrap the whole model, PyTorch controls the precision of each node according to its own op-level policy (per the doc), and I didn't find a way to customize that based on our ruleset.
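
For reference, this is the whole-model wrapping being described. Under torch.autocast, PyTorch's fixed per-op policy picks each op's precision (e.g. matmuls run in fp16, softmax is kept in fp32), and there is no hook to plug in a custom ruleset (standard PyTorch API, shown for illustration):

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),   # autocast policy runs the matmul in fp16
    torch.nn.Softmax(dim=-1),  # autocast policy keeps softmax in fp32
).cuda().eval()

x = torch.randn(4, 64, device="cuda")
with torch.autocast("cuda", dtype=torch.float16):
    out = model(x)  # per-op precision chosen by PyTorch's policy, not ours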

@peri044 (Collaborator) left a comment

Comment on lines 110 to 113
auto expected_type =
util::TRTDataTypeToScalarType(compiled_engine->exec_ctx->getEngine().getTensorDataType(name.c_str()));
TORCHTRT_CHECK(
inputs[i].dtype() == expected_type,
@peri044 (Collaborator) commented:

Is this not necessary now?

@narendasan (Collaborator) commented Nov 6, 2025

For tests:

  1. External autocast in PyTorch with strong typing (a sketch of this case follows below)
  2. Whole-graph autocast pass
  3. A test case that exercises the max_output_threshold fallback

These should be L1 or L2 tests.
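
A minimal sketch of what test 1 might look like, assuming the existing use_explicit_typing flag is what enables strong typing; the test name, model, and tolerances are placeholders:

import torch
import torch_tensorrt

def test_external_autocast_with_strong_typing():
    # Hypothetical test: the user wraps execution in PyTorch autocast
    # externally, and the engine is compiled with strong typing enabled.
    model = torch.nn.Sequential(
        torch.nn.Linear(32, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
    ).cuda().eval()
    x = torch.randn(2, 32, device="cuda")

    with torch.autocast("cuda", dtype=torch.float16):
        ref = model(x)
        trt_model = torch_tensorrt.compile(
            model,
            ir="dynamo",
            inputs=[x],
            use_explicit_typing=True,  # strong typing: no implicit TRT casts
        )
        out = trt_model(x)

    torch.testing.assert_close(out, ref, rtol=1e-2, atol=1e-2)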

@github-actions bot added the component: tests label and removed the component: core label Nov 8, 2025