
Archive node is wasting effort inserting duplicated data into DB #18114

@glyh

Description


When the archive is in operation, it may attempt to insert duplicate data. That in itself is fine, since the archive doesn't know what state the DB is in; the problem is that it retries the insert even when it should already know the data exists.

Here are example log lines:

2025-11-14 07:39:13 UTC [Warn] Error when adding block data to the database, rolling back transaction: $error
  error: "Request to <postgres://postgres:_@postgres:5432/archive> failed: ERROR:  duplicate key value violates unique constraint \"snarked_ledger_hashes_pkey\"\nDETAIL:  Key (id)=(88912) already exists.\n. Query: \"INSERT INTO snarked_ledger_hashes (value) VALUES ($1) RETURNING id\"."
2025-11-14 07:39:13 UTC [Warn] Error in add_block_aux : $error. Retrying...
  error: "Request to <postgres://postgres:_@postgres:5432/archive> failed: ERROR:  duplicate key value violates unique constraint \"snarked_ledger_hashes_pkey\"\nDETAIL:  Key (id)=(88912) already exists.\n. Query: \"INSERT INTO snarked_ledger_hashes (value) VALUES ($1) RETURNING id\"."

We should ensure this scenario doesn't happen.
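One way to avoid the retry loop is to make the insert idempotent: skip the row when the unique constraint would fire and fetch the existing id instead, rather than letting the statement raise. Below is a minimal, self-contained sketch of that pattern using Python's stdlib `sqlite3` as a stand-in (the archive itself is OCaml talking to Postgres; the table name and column are taken from the log above, but the schema here is simplified for illustration). In Postgres the equivalent idea is `INSERT ... ON CONFLICT (value) DO NOTHING` followed by a `SELECT` of the existing row's id.

```python
import sqlite3

# In-memory DB with a simplified stand-in for snarked_ledger_hashes.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE snarked_ledger_hashes ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, value TEXT UNIQUE NOT NULL)"
)

def insert_or_get_id(conn, value):
    # INSERT OR IGNORE is a no-op when the UNIQUE constraint would fire,
    # so re-inserting already-known data never raises and never triggers
    # the rollback/retry path seen in the archive logs.
    conn.execute(
        "INSERT OR IGNORE INTO snarked_ledger_hashes (value) VALUES (?)",
        (value,),
    )
    (row_id,) = conn.execute(
        "SELECT id FROM snarked_ledger_hashes WHERE value = ?", (value,)
    ).fetchone()
    return row_id

first = insert_or_get_id(conn, "ledger-hash-abc")
second = insert_or_get_id(conn, "ledger-hash-abc")  # duplicate: no error
print(first == second)  # → True
```

Note that the failing constraint in the log is the primary key on `id` (`snarked_ledger_hashes_pkey`), not a uniqueness constraint on `value`, which may point at a separate sequence-state problem; the idempotent-insert pattern above addresses the general duplicate-data case.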
