Commit 859966c

Raahul Kalyaan Jakka and meta-codesync[bot] authored and committed
Changing Backend Tensor initialization (meta-pytorch#3484)

Summary:
X-link: pytorch/FBGEMM#5056
Pull Request resolved: meta-pytorch#3484
X-link: https://github.com/facebookresearch/FBGEMM/pull/2066

**Context:** We are enabling SSD optimizer offloading for the SSD TBE kernel.

**In this diff:** We retrieve the newly added parameter from the TBE config and pass it down to the TBE.

Differential Revision: D85353134

1 parent f2c544d commit 859966c

File tree

1 file changed: +6 −0 lines changed

torchrec/distributed/batched_embedding_kernel.py (6 additions, 0 deletions)

```diff
@@ -246,6 +246,12 @@ def _populate_ssd_tbe_params(config: GroupedEmbeddingConfig) -> Dict[str, Any]:
     ssd_tbe_params["kvzch_eviction_tbe_config"] = fused_params.get(
         "kvzch_eviction_tbe_config"
     )
+    if "enable_optimizer_offloading" in fused_params:
+        ssd_tbe_params["enable_optimizer_offloading"] = fused_params.get(
+            "enable_optimizer_offloading"
+        )
+    else:
+        ssd_tbe_params["enable_optimizer_offloading"] = False

     ssd_tbe_params["table_names"] = [table.name for table in config.embedding_tables]
```
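The added branch copies `enable_optimizer_offloading` out of the fused params when present and defaults it to `False` otherwise. A minimal standalone sketch of that logic (the function name `populate_optimizer_offloading` is illustrative, not the real helper; only the key name and dict shapes come from the diff):

```python
from typing import Any, Dict


def populate_optimizer_offloading(fused_params: Dict[str, Any]) -> Dict[str, Any]:
    """Mirror the diff's logic: copy the flag if present, default to False."""
    ssd_tbe_params: Dict[str, Any] = {}
    if "enable_optimizer_offloading" in fused_params:
        ssd_tbe_params["enable_optimizer_offloading"] = fused_params.get(
            "enable_optimizer_offloading"
        )
    else:
        ssd_tbe_params["enable_optimizer_offloading"] = False
    # Behaviorally equivalent one-liner using dict.get's default argument:
    # ssd_tbe_params["enable_optimizer_offloading"] = fused_params.get(
    #     "enable_optimizer_offloading", False
    # )
    return ssd_tbe_params


print(populate_optimizer_offloading({"enable_optimizer_offloading": True}))
print(populate_optimizer_offloading({}))
```

The explicit if/else keeps the shape of the surrounding `_populate_ssd_tbe_params` code, which reads each fused param with its own `fused_params.get(...)` call; `dict.get(key, False)` would collapse the branch but is a stylistic choice.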
