36 changes: 18 additions & 18 deletions src/visualizr/anitalker/choices.py
@@ -5,35 +5,35 @@

 class TrainMode(Enum):
     # manipulate mode = training the classifier
-    manipulate = "manipulate"
+    manipulate: str = "manipulate"
     # default training mode!
-    diffusion = "diffusion"
+    diffusion: str = "diffusion"


 class ModelType(Enum):
     """Kinds of the backbone models."""

     # unconditional ddpm
-    ddpm = "ddpm"
+    ddpm: str = "ddpm"
     # autoencoding ddpm cannot do unconditional generation
-    autoencoder = "autoencoder"
+    autoencoder: str = "autoencoder"

-    def has_autoenc(self):
+    def has_autoenc(self) -> bool:
         return self in [ModelType.autoencoder]


 class ModelName(Enum):
     """List of all supported model classes."""

-    beatgans_ddpm = "beatgans_ddpm"
-    beatgans_autoenc = "beatgans_autoenc"
+    beatgans_ddpm: str = "beatgans_ddpm"
+    beatgans_autoenc: str = "beatgans_autoenc"


 class ModelMeanType(Enum):
     """Which type of output the model predicts."""

     # the model predicts epsilon
-    eps = "eps"
+    eps: str = "eps"


 class ModelVarType(Enum):
@@ -45,29 +45,29 @@ class ModelVarType(Enum):
     """

     # posterior beta_t
-    fixed_small = "fixed_small"
+    fixed_small: str = "fixed_small"
     # beta_t
-    fixed_large = "fixed_large"
+    fixed_large: str = "fixed_large"


 class LossType(Enum):
     # use raw MSE loss and KL when learning variances
-    mse = "mse"
+    mse: str = "mse"


 class GenerativeType(Enum):
     """where how a sample is generated."""

-    ddpm = "ddpm"
-    ddim = "ddim"
+    ddpm: str = "ddpm"
+    ddim: str = "ddim"


 class Activation(Enum):
-    none = "none"
-    relu = "relu"
-    lrelu = "lrelu"
-    silu = "silu"
-    tanh = "tanh"
+    none: str = "none"
+    relu: str = "relu"
+    lrelu: str = "lrelu"
+    silu: str = "silu"
+    tanh: str = "tanh"

     def get_act(self) -> Identity | ReLU | LeakyReLU | SiLU | Tanh:
         match self:
43 changes: 17 additions & 26 deletions src/visualizr/anitalker/config.py
@@ -1,5 +1,5 @@
 from dataclasses import dataclass
-from typing import Literal
+from typing import Literal, Self
Copilot AI commented on Oct 28, 2025:

Import of 'Literal' is not used.

Suggested change:
-from typing import Literal, Self
+from typing import Self

Comment on lines +2 to 3
Contributor

⚠️ Potential issue | 🔴 Critical

Fix Optional typing for fields that default to None.

These fields are annotated as non-Optional but initialized with None, which breaks strict type checking.

 from typing import Literal, Self
@@
-    infer_type: InferenceType = None
+    infer_type: InferenceType | None = None
@@
-    model_conf: ModelConfig = None
+    model_conf: ModelConfig | None = None
@@
-    decoder_layers: int = None
-    motion_dim: int = None
-    mfcc: bool = None
-    face_scale: bool = None
-    face_location: bool = None
+    decoder_layers: int | None = None
+    motion_dim: int | None = None
+    mfcc: bool | None = None
+    face_scale: bool | None = None
+    face_location: bool | None = None

Also, confirm project Python version; typing.Self requires Python 3.11+. If supporting 3.10, replace with -> "TrainConfig".

Also applies to: 30-33, 50-53, 92-97




Import Self from typing_extensions or use string literal for Python 3.10 compatibility; add Optional types to 7 fields defaulting to None.

The project requires Python 3.10 (requires-python = "==3.10.*"), but typing.Self was introduced in Python 3.11 (PEP 673). This will cause an ImportError at runtime. Additionally, seven fields are annotated as non-Optional but initialized with None, breaking strict type checking:

-from typing import Literal, Self
+from typing import Literal
+from typing_extensions import Self

Then update these field annotations:

-    infer_type: InferenceType = None
+    infer_type: InferenceType | None = None
-    model_conf: ModelConfig = None
+    model_conf: ModelConfig | None = None
-    decoder_layers: int = None
-    motion_dim: int = None
-    mfcc: bool = None
-    face_scale: bool = None
-    face_location: bool = None
+    decoder_layers: int | None = None
+    motion_dim: int | None = None
+    mfcc: bool | None = None
+    face_scale: bool | None = None
+    face_location: bool | None = None

Also applies to any other occurrences in lines 30–33, 50–53, 92–97.

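One way to satisfy both the pinned Python 3.10 and newer interpreters is a version-gated import, sketched below. The `TrainConfig` body is cut down to two illustrative fields; only the import pattern and the `-> Self` annotation come from the review:

```python
import sys
from dataclasses import dataclass

if sys.version_info >= (3, 11):
    from typing import Self  # PEP 673, new in 3.11
else:
    from typing_extensions import Self  # backport for the pinned 3.10


@dataclass
class TrainConfig:
    # illustrative fields only; the real TrainConfig has many more
    batch_size: int = 32
    batch_size_eval: int = 32

    # Alternatively, annotate as -> "TrainConfig" and drop the import entirely.
    def scale_up_gpus(self, num_gpus: int, num_nodes: int = 1) -> Self:
        self.batch_size *= num_gpus * num_nodes
        self.batch_size_eval *= num_gpus * num_nodes
        return self
```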

 from visualizr.anitalker.choices import (
     Activation,
@@ -24,17 +24,12 @@
     ModelConfig,
 )
 from visualizr.anitalker.model.latentnet import LatentNetType, MLPSkipNetConfig
+from visualizr.app.types import InferenceType


 @dataclass
 class TrainConfig(BaseConfig):
-    infer_type: Literal[
-        "mfcc_full_control",
-        "mfcc_pose_only",
-        "hubert_pose_only",
-        "hubert_audio_only",
-        "hubert_full_control",
-    ] = None
+    infer_type: InferenceType = None
     # random seed
     seed: int = 0
     train_mode: TrainMode = TrainMode.diffusion
@@ -94,19 +89,22 @@ class TrainConfig(BaseConfig):
     T: int = 1_000
     # to be overridden
     name: str = ""
-    decoder_layers = None
-    motion_dim = None
+    decoder_layers: int = None
+    motion_dim: int = None
+    mfcc: bool = None
+    face_scale: bool = None
+    face_location: bool = None

-    def __post_init__(self):
+    def __post_init__(self) -> None:
         self.batch_size_eval = self.batch_size_eval or self.batch_size
         self.data_val_name = self.data_val_name or self.data_name

-    def scale_up_gpus(self, num_gpus, num_nodes=1):
+    def scale_up_gpus(self, num_gpus: int, num_nodes: int = 1) -> Self:
         self.batch_size *= num_gpus * num_nodes
         self.batch_size_eval *= num_gpus * num_nodes
         return self

-    def _make_diffusion_conf(self, t: int):
+    def _make_diffusion_conf(self, t: int) -> SpacedDiffusionBeatGansConfig:
         if self.diffusion_type != "beatgans":
             raise NotImplementedError
         # can use t < `self.t` for evaluation
@@ -132,21 +130,21 @@ def _make_diffusion_conf(self, t: int):
         )

     @property
-    def model_out_channels(self):
+    def model_out_channels(self) -> int:
         return 3

     def make_t_sampler(self) -> UniformSampler:
         if self.T_sampler != "uniform":
             raise NotImplementedError
         return UniformSampler(self.T)

-    def make_diffusion_conf(self):
+    def make_diffusion_conf(self) -> SpacedDiffusionBeatGansConfig:
         return self._make_diffusion_conf(self.T)

-    def make_eval_diffusion_conf(self):
+    def make_eval_diffusion_conf(self) -> SpacedDiffusionBeatGansConfig:
         return self._make_diffusion_conf(self.T_eval)

-    def make_model_conf(self):
+    def make_model_conf(self) -> BeatGANsAutoencConfig | BeatGANsUNetConfig:
         if self.model_name == ModelName.beatgans_ddpm:
             self.model_type = ModelType.ddpm
             self.model_conf = BeatGANsUNetConfig(
@@ -172,16 +170,9 @@ def make_model_conf(self):
             resnet_two_cond=self.net_beatgans_resnet_two_cond,
             resnet_use_zero_module=self.net_beatgans_resnet_use_zero_module,
         )
-        elif self.model_name in [
-            ModelName.beatgans_autoenc,
-        ]:
+        elif self.model_name == ModelName.beatgans_autoenc:
             cls = BeatGANsAutoencConfig
-            # supports both autoenc and vaeddpm
-            if self.model_name == ModelName.beatgans_autoenc:
-                self.model_type = ModelType.autoencoder
-            else:
-                raise NotImplementedError
-
+            self.model_type = ModelType.autoencoder
             if self.net_latent_net_type == LatentNetType.none:
                 latent_net_conf = None
             elif self.net_latent_net_type == LatentNetType.skip:
23 changes: 11 additions & 12 deletions src/visualizr/anitalker/config_base.py
@@ -17,41 +17,40 @@ class BaseConfig:
     and serialize/deserialize configurations to/from JSON.
     """

-    def inherit(self, another):
+    def inherit(self, another) -> None:
         """Inherit common keys from a given config."""
-        common_keys = set(self.__dict__.keys()) & set(another.__dict__.keys())
+        common_keys: set[str] = set(self.__dict__.keys()) & set(another.__dict__.keys())
         for k in common_keys:
             setattr(self, k, getattr(another, k))

-    def propagate(self):
+    def propagate(self) -> None:
         """Push down the configuration to all members."""
         for _, v in self.__dict__.items():
             if isinstance(v, BaseConfig):
                 v.inherit(self)
                 v.propagate()

-    def from_dict(self, config_dict, strict=False):
+    def from_dict(self, config_dict: dict, strict: bool = False) -> None:
         """
         Populate configuration attributes from a dictionary.

         Optionally, enforcing strict key checking.
         """
         for k, v in config_dict.items():
             if not hasattr(self, k):
+                _msg: str = f"loading extra '{k}'"
                 if strict:
-                    raise ValueError(f"loading extra '{k}'")
-                _msg = f"loading extra '{k}'"
-                logger.info(_msg)
-                Info(_msg)
+                    raise ValueError(_msg)
+                logger.warning(_msg)
Comment on lines +41 to +44
Copilot AI commented on Oct 28, 2025:

Changed from logger.info() to logger.warning(), but the logic is incorrect: when strict=False the warning is logged, but when strict=True an exception is raised and the warning is never logged. The original code using Info(_msg) after logging makes more sense for the non-strict case.
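One way to address the reviewer's point — recording the message in both the strict and non-strict paths — sketched as a hypothetical standalone helper (`handle_extra_key` is not in the codebase; it distills just the unknown-key branch of `from_dict`):

```python
import logging

logger = logging.getLogger(__name__)


def handle_extra_key(k: str, strict: bool = False) -> None:
    # Build the message once, log it in every path, and only
    # then raise in strict mode, so the record is never lost.
    msg = f"loading extra '{k}'"
    if strict:
        logger.warning(msg)
        raise ValueError(msg)
    logger.info(msg)  # non-strict: record and continue
```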
                 continue
             if isinstance(self.__dict__[k], BaseConfig):
                 self.__dict__[k].from_dict(v)
             else:
                 self.__dict__[k] = v

-    def as_dict_jsonable(self):
+    def as_dict_jsonable(self) -> dict:
         """Convert the configuration to a JSON-serializable dictionary."""
-        conf = {}
+        conf: dict = {}
         for k, v in self.__dict__.items():
             if isinstance(v, BaseConfig):
                 conf[k] = v.as_dict_jsonable()
@@ -61,10 +60,10 @@ def as_dict_jsonable(self):


 @lru_cache
-def jsonable(x: Any) -> bool:
+def jsonable(x) -> bool:
     """Check if the object x is JSON serializable."""
     try:
         dumps(x)
-        return True
     except TypeError:
         return False
+    return True
Comment on lines 65 to +69
Copilot AI commented on Oct 28, 2025:

Unreachable code due to control flow change. The return True at line 69 will never execute if dumps(x) succeeds because the function would have already returned in the original code. The correct implementation should have return True inside the try block before any exception handling.
Comment on lines 62 to +69
Contributor

⚠️ Potential issue | 🔴 Critical

jsonable breaks with @lru_cache on unhashable inputs and misses OverflowError.

  • lru_cache requires hashable args; passing dict/list/set will raise TypeError before dumps.
  • json.dumps may raise OverflowError in addition to TypeError.

Apply:

-@lru_cache
-def jsonable(x) -> bool:
+def jsonable(x: Any) -> bool:
     """Check if the object x is JSON serializable."""
     try:
         dumps(x)
-    except TypeError:
+    except (TypeError, OverflowError):
         return False
     return True

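The fix the comment proposes, assembled into a runnable sketch (no `lru_cache`, so unhashable values are accepted, and `OverflowError` is caught alongside `TypeError`):

```python
from json import dumps


def jsonable(x) -> bool:
    """Check if the object x is JSON serializable."""
    try:
        # With @lru_cache this call would never be reached for dict/list/set
        # arguments: hashing the argument raises TypeError first.
        dumps(x)
    except (TypeError, OverflowError):
        return False
    return True
```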
