fix: wire up DashPay payment history to wallet transactions #721
thepastaclaw wants to merge 1 commit into dashpay:v1.0-dev
Conversation
@coderabbitai review

Important: Review skipped (draft detected). Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: Path: .coderabbit.yaml. Review profile: CHILL. Plan: Pro.
📝 Walkthrough

Adds a retroactive DashPay payment scanner that scans wallet transactions per identity, persists discovered payments, integrates scanning into SPV wallet reconciliation and payment-history loading, and surfaces per-contact payment history in the UI with duff-to-Dash normalization.
Sequence diagram:

```mermaid
sequenceDiagram
    participant SPV as SPV Reconciler
    participant Wallet as Wallet State
    participant Scanner as DashPay Scanner
    participant DB as DashPay DB
    participant UI as Contact UI
    SPV->>Wallet: apply reconciled transactions (clone txs)
    SPV->>Scanner: scan_wallet_transactions_for_dashpay_payments(identity, txs)
    Scanner->>DB: load address mappings for identity
    Scanner->>DB: load existing dashpay_payments for dedupe
    Scanner->>Scanner: iterate transactions -> match outputs to addresses
    Scanner->>DB: persist new payments (if any) / update receive indices
    Scanner-->>SPV: return count / result
    UI->>DB: load_payment_history_for_contact(identity, contact, limit)
    DB-->>UI: return stored payments
    UI->>UI: normalize amounts/timestamps, mark direction, render list
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed.

✅ Actions performed: Review triggered.
Actionable comments posted: 4
🧹 Nitpick comments (2)
src/backend_task/dashpay.rs (1)

177-196: Clone the transaction snapshot before scanning.

`wallet_guard` is held for the entire retroactive scan, but the scan only needs a snapshot of `transactions` and then does DB-backed work. Cloning the vector and dropping the read guard first will reduce lock contention with SPV reconcile updating the same wallet.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/backend_task/dashpay.rs` around lines 177-196: The code holds wallet_guard (from wallet_arc.read()) while calling incoming_payments::scan_wallet_transactions_for_dashpay_payments, causing unnecessary lock contention; instead, take a snapshot by cloning wallet_guard.transactions into a new Vec, drop the read guard, then call scan_wallet_transactions_for_dashpay_payments with the cloned transactions (pass self and &identity_id as before). Ensure you still handle the Result (Ok/Err) the same way after moving the call to use the cloned transactions.

src/context/wallet_lifecycle.rs (1)
786-817: Drop the wallet write lock before running the payment scan.

This block keeps `wallet.write()` alive while iterating identities and calling scan_wallet_transactions_for_dashpay_payments(), which does address/history lookups and inserts. That makes reconcile hold the wallet exclusively much longer than needed. Snapshot the identity IDs while the guard is held, then release it before scanning.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `@src/context/wallet_lifecycle.rs` around lines 786-817: While holding the wallet write lock you iterate wallet.identities and call scan_wallet_transactions_for_dashpay_payments (which does DB lookups/inserts), so snapshot the identity IDs first and release the lock before scanning: after wallet.set_transactions(...) collect the identity IDs into a Vec (e.g. identities_ids: Vec<_> by iterating wallet.identities.values().map(|id| id.id().clone()).collect()), then drop the write guard / end the scope so the wallet lock is released, and only then loop over identities_ids calling crate::backend_task::dashpay::incoming_payments::scan_wallet_transactions_for_dashpay_payments(self, &identity_id, &wallet_transactions) and handle Ok/Err as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend_task/dashpay/incoming_payments.rs`:
- Around line 401-418: The loop currently treats any output matching address_map
(which is the set of addresses contacts use to pay us for identity_id) as either
received or sent based solely on wtx.is_incoming(), which misclassifies outputs
because those addresses are receive-side only; to fix, change the else branch so
you do not infer a "sent" payment from a receive-side match: only record a
payment when wtx.is_incoming() is true (i.e., a true incoming/received payment),
and for outgoing payments determine ownership by checking inputs/our-addresses
(inspect wtx.inputs or maintain a separate map of our spending addresses) or use
a separate contact->outgoing address map before setting (*identity_id,
*contact_id, "sent"); update the block around address_map.get,
wtx.is_incoming(), and the tuple assignment (from_id, to_id, payment_type) to
implement this logic so receive-address matches are only ever treated as
"received" or ignored for "sent" detection.
- Around line 445-451: The guard `&& *address_index > 0` prevents updating
`highest_receive_index` when the first incoming payment is at index 0; remove
that check so that when `wtx.is_incoming()` is true you always call
`app_context.db.update_highest_receive_index(identity_id, contact_id,
*address_index + 1)` (this will correctly advance the receive window from 0 →
1); update the code around `wtx.is_incoming()` / `address_index` and the call to
`app_context.db.update_highest_receive_index` accordingly, ensuring the +1
update is applied for index 0.
In `@src/ui/dashpay/contact_details.rs`:
- Around line 105-115: The code is incorrectly converting on-chain duffs into
Credits (Dash) when pushing into self.payment_history: locate where sp.amount is
used (sp.amount, dashpay_payments) and Payment construction in
contact_details.rs and stop wrapping the raw duff integer into Credits; instead
preserve the amount as the on-chain duff unit (e.g., store as u64 or a Duffs
type) or explicitly convert to a Dash-specific unit type at this layer (not
Credits), and leave formatting/labeling (Dash vs duffs) to the UI render path so
the payment_history and Payment entries reflect the correct stored unit.
- Around line 95-103: The current call to load_payment_history(&identity_id,
100) applies a 100-row limit across all counterparties so older payments with
this specific contact can be dropped; change the code to fetch payments filtered
by counterparty or remove the prefilter limit: either (A) add/use a DB method
like load_payment_history_for_contact(identity_id, contact_bytes, limit) and
call that (compute contact_bytes from self.contact_id first), or (B) call
load_payment_history with no global 100 cap (or a much larger limit) and then
filter by self.contact_id as before so all payments involving this contact are
returned into payment_history. Ensure you update the DB API
(load_payment_history) and its callers accordingly so the per-contact history is
queried at the database level rather than relying on the post-filter.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6d2b6762-9997-4ec3-81f5-3ca7dbd1f64e
📒 Files selected for processing (4)

src/backend_task/dashpay.rs, src/backend_task/dashpay/incoming_payments.rs, src/context/wallet_lifecycle.rs, src/ui/dashpay/contact_details.rs
Force-pushed d3050eb to 6a85b60 (Compare)
Addressed all 6 CodeRabbit findings:

All squashed into one commit. @coderabbitai review

Triggering a fresh review now. ✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/backend_task/dashpay/incoming_payments.rs (1)
385-390: Consider the deduplication limit for high-volume identities.

The hardcoded `10_000` limit for loading existing payments means that for identities with more than 10,000 historical payments, older tx_ids won't be in `known_tx_ids`, potentially causing duplicate insert attempts. The `tx_id UNIQUE` constraint in the database provides server-side protection, but this could cause spurious warning logs on constraint violations.

This is likely acceptable for the vast majority of users, but consider documenting this assumption or increasing the limit for future-proofing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/dashpay/incoming_payments.rs` around lines 385 - 390, The hardcoded 10_000 limit passed to app_context.db.load_payment_history can miss older tx_ids for high-volume identities and cause duplicate-insert constraint warnings; update incoming_payments.rs to replace the magic number with a configurable constant or config-driven value (e.g., MAX_PAYMENT_HISTORY_TO_LOAD) and use that when calling load_payment_history, or increase the value and add a comment documenting the tradeoff; reference the existing_payments, known_tx_ids, and load_payment_history identifiers when making the change so the deduplication uses the new configurable/clearly-named limit.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend_task/dashpay.rs`:
- Around line 179-185: The current payment discovery only reads the first wallet
via identity.associated_wallets.values().next(), so change the logic to iterate
all entries in identity.associated_wallets, attempt to read each wallet Arc
(wallet_arc.read().ok()), collect/merge each guard.transactions into a single
wallet_txs collection (handling read failures by skipping that wallet), and
optionally deduplicate transactions; update any use of the wallet_txs variable
accordingly (symbols: identity.associated_wallets, wallet_arc.read(),
guard.transactions, wallet_txs).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c6c099e9-819d-4266-b081-cda73f337056
📒 Files selected for processing (5)

src/backend_task/dashpay.rs, src/backend_task/dashpay/incoming_payments.rs, src/context/wallet_lifecycle.rs, src/database/dashpay.rs, src/ui/dashpay/contact_details.rs
Address second CodeRabbit review on PR dashpay#721:
1. Iterate all associated_wallets instead of only taking .next(). An identity can have multiple wallet entries; the prior code silently skipped any beyond the first.
2. Document the 10,000 dedup limit in incoming_payments.rs. The DB's tx_id UNIQUE constraint provides server-side protection for identities exceeding this client-side dedup window.
Force-pushed 6a85b60 to 5bb2ccb (Compare)
Second round of CodeRabbit feedback addressed:

Squashed into one commit. @coderabbitai review

Triggering a fresh review now. ✅ Actions performed: Review triggered.

@coderabbitai review

✅ Actions performed: Review triggered.
thepastaclaw
left a comment
Code Review
Sound implementation that solves a real user-facing problem (empty Payment History screen). Retroactive scan design is correct and idempotent with dual-layer dedup (client-side HashSet + DB UNIQUE constraint). Two suggestions for code hygiene.
Reviewed commit: 5bb2ccb
🟡 2 suggestion(s)
1 additional finding
🟡 suggestion: Plain INSERT causes noisy UNIQUE-constraint warnings for large histories
src/database/dashpay.rs (lines 530-534)
The scan loads 10,000 recent payments for client-side dedup, but beyond that it relies on the tx_id UNIQUE constraint. Because save_payment uses a plain INSERT (not INSERT OR IGNORE), every duplicate beyond the 10k window produces a rusqlite::Error logged as a warning. Consider using INSERT OR IGNORE for the scan path.
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/dashpay/incoming_payments.rs`:
- [SUGGESTION] lines 361-473: Code duplication between process_incoming_payment and scan function
Both functions perform the same core operations: match address → save_payment() → update_highest_receive_index(). The scan function re-implements this inline. Extract a shared helper like `record_incoming_payment(app_context, tx_id, contact_id, identity_id, amount_duffs, address_index)` to reduce divergence risk.
In `src/database/dashpay.rs`:
- [SUGGESTION] lines 530-534: Plain INSERT causes noisy UNIQUE-constraint warnings for large histories
The scan loads 10,000 recent payments for client-side dedup, but beyond that it relies on the tx_id UNIQUE constraint. Because save_payment uses a plain INSERT (not INSERT OR IGNORE), every duplicate beyond the 10k window produces a rusqlite::Error logged as a warning. Consider using INSERT OR IGNORE for the scan path.
```rust
pub fn scan_wallet_transactions_for_dashpay_payments(
    app_context: &AppContext,
    identity_id: &Identifier,
    transactions: &[crate::model::wallet::WalletTransaction],
) -> Result<usize, String> {
    use dash_sdk::dpp::dashcore::Address;
    use std::collections::HashSet;

    // Build a lookup set of addresses → (owner_id, contact_id, address_index)
    let mappings = app_context
        .db
        .get_all_dashpay_address_mappings(identity_id)
        .map_err(|e| format!("Failed to load DashPay address mappings: {}", e))?;

    if mappings.is_empty() {
        return Ok(0);
    }

    // address string → (contact_id, address_index)
    let address_map: std::collections::HashMap<String, (Identifier, u32)> = mappings
        .into_iter()
        .map(|(addr_str, contact_id, idx)| (addr_str, (contact_id, idx)))
        .collect();

    // Collect already-known tx_ids to avoid duplicates.
    // We load up to 10,000 recent payments for client-side dedup; for identities
    // with more history the DB's `tx_id UNIQUE` constraint provides server-side
    // protection (at the cost of a benign constraint-violation log on collisions).
    let existing_payments = app_context
        .db
        .load_payment_history(identity_id, 10_000)
        .unwrap_or_default();
    let known_tx_ids: HashSet<String> = existing_payments.iter().map(|p| p.tx_id.clone()).collect();

    let network = app_context.network;
    let mut saved = 0usize;

    for wtx in transactions {
        let txid_str = wtx.txid.to_string();
        if known_tx_ids.contains(&txid_str) {
            continue;
        }

        // Check every output for a matching DashPay address
        for output in &wtx.transaction.output {
            let addr = match Address::from_script(&output.script_pubkey, network) {
                Ok(a) => a,
                Err(_) => continue,
            };

            let addr_str = addr.to_string();
            if let Some((contact_id, address_index)) = address_map.get(&addr_str) {
                // The address_map contains receive-side addresses (addresses contacts
                // use to pay us). For outgoing transactions, matching an output to
                // these addresses does NOT reliably mean we sent to the contact —
                // skip outgoing txs in the retroactive scan.
                if !wtx.is_incoming() {
                    continue;
                }

                let amount_duffs = output.value;
                let (from_id, to_id, payment_type) = (*contact_id, *identity_id, "received");

                if let Err(e) = app_context.db.save_payment(
                    &txid_str,
                    &from_id,
                    &to_id,
                    amount_duffs as i64,
                    None,
                    payment_type,
                ) {
                    tracing::warn!(
                        tx_id = %txid_str,
                        error = %e,
                        "Failed to save scanned DashPay payment"
                    );
                } else {
                    saved += 1;
                    tracing::info!(
                        tx_id = %txid_str,
                        contact = %contact_id.to_string(Encoding::Base58),
                        amount = amount_duffs,
                        direction = payment_type,
                        address_index = address_index,
                        "Saved DashPay payment from wallet transaction scan"
                    );
                }

                // Update highest receive index
                {
                    let _ = app_context.db.update_highest_receive_index(
                        identity_id,
                        contact_id,
                        *address_index + 1,
                    );
                }

                // One match per transaction is enough (avoid double-counting)
                break;
            }
        }
    }

    if saved > 0 {
        tracing::info!(
            identity = %identity_id.to_string(Encoding::Base58),
            new_payments = saved,
            "DashPay wallet transaction scan complete"
        );
    }

    Ok(saved)
}
```
🟡 Suggestion: Code duplication between process_incoming_payment and scan function
Both functions perform the same core operations: match address → save_payment() → update_highest_receive_index(). The scan function re-implements this inline. Extract a shared helper like record_incoming_payment(app_context, tx_id, contact_id, identity_id, amount_duffs, address_index) to reduce divergence risk.
source: ['claude-general']
Fixed in 2b0689c. Extracted record_incoming_payment() helper shared by both process_incoming_payment and scan_wallet_transactions_for_dashpay_payments. Also changed save_payment() to use INSERT OR IGNORE so the UNIQUE constraint on tx_id is handled silently instead of producing noisy warnings.
Force-pushed 5bb2ccb to 2b0689c (Compare)
✅ Review complete (commit 2cdfea4)
Force-pushed 2b0689c to 0b77be8 (Compare)
thepastaclaw
left a comment
Code Review
One convergent blocking bug: retroactively scanned payments are saved without a timestamp, so the DB default (unixepoch()) stamps historical transactions with the scan time — collapsing the payment history UI to a single 'now' cluster. Codex also flags a valid test-coverage gap in the new scan path. Two findings total; both kept.
Reviewed commit: 0b77be8
🔴 1 blocking | 🟡 1 suggestion(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/dashpay/incoming_payments.rs`:
- [BLOCKING] lines 291-319: Retroactively scanned payments are stored with the scan time, not the block time
`record_incoming_payment` calls `Database::save_payment` without a timestamp (src/backend_task/dashpay/incoming_payments.rs:299-309), and `dashpay_payments.created_at` is declared as `INTEGER DEFAULT (unixepoch())` (src/database/dashpay.rs:162). For the real-time SPV path this was fine, but `scan_wallet_transactions_for_dashpay_payments` (lines 381-472) iterates `WalletTransaction`s whose `timestamp: u64` field (src/model/wallet/mod.rs:583) already carries the real block time — and discards it at the save boundary. Both payment UIs surface this field as the payment time: `ContactDetailsScreen::load_from_database` reads `sp.created_at` straight into `Payment::timestamp` (src/ui/dashpay/contact_details.rs:108-118), and `load_payment_history` does the same. So the first time a user opens Payment History after this PR lands, every retroactively-imported payment — including months-old ones — will appear as having happened at the scan moment, and they will all sort into a single timestamp cluster. Fix: extend `save_payment` to accept an optional `created_at` (writing `COALESCE(?, unixepoch())`), and thread `wtx.timestamp` from the scan call site at lines 443-450 through `record_incoming_payment`. Leave the SPV real-time path passing `None` (or the current block time) so its behavior is unchanged.
- [SUGGESTION] lines 381-472: New wallet-transaction backfill path has no targeted tests
`scan_wallet_transactions_for_dashpay_payments` adds non-trivial logic — address-map lookup, duplicate suppression via `known_tx_ids`, incoming-only filtering (lines 437-439), single-match-per-tx break (line 469), and highest-receive-index updates — and is now invoked from both the `LoadPaymentHistory` backend task and the SPV reconcile flow. The only test in this module is still `test_hash_identifier_to_u32`. A regression in the filter/dedup logic would silently produce missing or duplicated payment history without failing CI. Add inline `#[test]`s (or a test in `tests/` if a fixture wallet is easier) covering at minimum: outgoing transactions are skipped, already-saved tx_ids are skipped, only the first matching output per tx is recorded, and the highest receive index is advanced correctly.
```rust
fn record_incoming_payment(
    app_context: &AppContext,
    tx_id: &str,
    owner_id: &Identifier,
    contact_id: &Identifier,
    amount_duffs: u64,
    address_index: u32,
) -> Result<(), String> {
    app_context
        .db
        .save_payment(
            tx_id,
            contact_id, // from contact
            owner_id,   // to us
            amount_duffs as i64,
            None, // memo — not available for incoming
            "received",
        )
        .map_err(|e| format!("Failed to save payment: {}", e))?;

    // update_highest_receive_index uses MAX() internally, so calling
    // unconditionally is safe and idempotent.
    app_context
        .db
        .update_highest_receive_index(owner_id, contact_id, address_index + 1)
        .map_err(|e| format!("Failed to update receive index: {}", e))?;

    Ok(())
}
```
🔴 Blocking: Retroactively scanned payments are stored with the scan time, not the block time
record_incoming_payment calls Database::save_payment without a timestamp (src/backend_task/dashpay/incoming_payments.rs:299-309), and dashpay_payments.created_at is declared as INTEGER DEFAULT (unixepoch()) (src/database/dashpay.rs:162). For the real-time SPV path this was fine, but scan_wallet_transactions_for_dashpay_payments (lines 381-472) iterates WalletTransactions whose timestamp: u64 field (src/model/wallet/mod.rs:583) already carries the real block time — and discards it at the save boundary. Both payment UIs surface this field as the payment time: ContactDetailsScreen::load_from_database reads sp.created_at straight into Payment::timestamp (src/ui/dashpay/contact_details.rs:108-118), and load_payment_history does the same. So the first time a user opens Payment History after this PR lands, every retroactively-imported payment — including months-old ones — will appear as having happened at the scan moment, and they will all sort into a single timestamp cluster. Fix: extend save_payment to accept an optional created_at (writing COALESCE(?, unixepoch())), and thread wtx.timestamp from the scan call site at lines 443-450 through record_incoming_payment. Leave the SPV real-time path passing None (or the current block time) so its behavior is unchanged.
source: ['claude', 'codex']
```rust
pub fn scan_wallet_transactions_for_dashpay_payments(
    app_context: &AppContext,
    identity_id: &Identifier,
    transactions: &[crate::model::wallet::WalletTransaction],
) -> Result<usize, String> {
    use dash_sdk::dpp::dashcore::Address;
    use std::collections::HashSet;

    // Build a lookup set of addresses → (owner_id, contact_id, address_index)
    let mappings = app_context
        .db
        .get_all_dashpay_address_mappings(identity_id)
        .map_err(|e| format!("Failed to load DashPay address mappings: {}", e))?;

    if mappings.is_empty() {
        return Ok(0);
    }

    // address string → (contact_id, address_index)
    let address_map: std::collections::HashMap<String, (Identifier, u32)> = mappings
        .into_iter()
        .map(|(addr_str, contact_id, idx)| (addr_str, (contact_id, idx)))
        .collect();

    // Collect already-known tx_ids to avoid duplicates.
    // We load up to 10,000 recent payments for client-side dedup; for identities
    // with more history the DB's `tx_id UNIQUE` constraint plus INSERT OR IGNORE
    // still prevents duplicate rows server-side.
    let existing_payments = app_context
        .db
        .load_payment_history(identity_id, 10_000)
        .unwrap_or_default();
    let known_tx_ids: HashSet<String> = existing_payments.iter().map(|p| p.tx_id.clone()).collect();

    let network = app_context.network;
    let mut saved = 0usize;

    for wtx in transactions {
        let txid_str = wtx.txid.to_string();
        if known_tx_ids.contains(&txid_str) {
            continue;
        }

        // Check every output for a matching DashPay address
        for output in &wtx.transaction.output {
            let addr = match Address::from_script(&output.script_pubkey, network) {
                Ok(a) => a,
                Err(_) => continue,
            };

            let addr_str = addr.to_string();
            if let Some((contact_id, address_index)) = address_map.get(&addr_str) {
                // The address_map contains receive-side addresses (addresses contacts
                // use to pay us). For outgoing transactions, matching an output to
                // these addresses does NOT reliably mean we sent to the contact —
                // skip outgoing txs in the retroactive scan.
                if !wtx.is_incoming() {
                    continue;
                }

                let amount_duffs = output.value;

                if let Err(e) = record_incoming_payment(
                    app_context,
                    &txid_str,
                    identity_id,
                    contact_id,
                    amount_duffs,
                    *address_index,
                ) {
                    tracing::warn!(
                        tx_id = %txid_str,
                        error = %e,
                        "Failed to save scanned DashPay payment"
                    );
                } else {
                    saved += 1;
                    tracing::info!(
                        tx_id = %txid_str,
                        contact = %contact_id.to_string(Encoding::Base58),
                        amount = amount_duffs,
                        direction = "received",
                        address_index = address_index,
                        "Saved DashPay payment from wallet transaction scan"
                    );
                }

                // One match per transaction is enough (avoid double-counting)
                break;
            }
        }
    }
```
🟡 Suggestion: New wallet-transaction backfill path has no targeted tests
scan_wallet_transactions_for_dashpay_payments adds non-trivial logic — address-map lookup, duplicate suppression via known_tx_ids, incoming-only filtering (lines 437-439), single-match-per-tx break (line 469), and highest-receive-index updates — and is now invoked from both the LoadPaymentHistory backend task and the SPV reconcile flow. The only test in this module is still test_hash_identifier_to_u32. A regression in the filter/dedup logic would silently produce missing or duplicated payment history without failing CI. Add inline #[test]s (or a test in tests/ if a fixture wallet is easier) covering at minimum: outgoing transactions are skipped, already-saved tx_ids are skipped, only the first matching output per tx is recorded, and the highest receive index is advanced correctly.
source: ['codex']
The Payment History screen was always empty because it only read from the dashpay_payments database table, which was only populated when payments were sent through the app's DashPay UI. The process_incoming_payment() function existed but was never called.

Changes:

- Add scan_wallet_transactions_for_dashpay_payments() that cross-references SPV wallet transactions against DashPay address mappings and saves matches to the dashpay_payments table (idempotent)
- Call it from LoadPaymentHistory backend task for immediate retroactive detection when the user opens the Payment History screen
- Call it during SPV reconcile for ongoing automatic detection
- Fix ContactDetailsScreen to load per-contact payment history from DB (was initialized empty and never populated)

Closes dashpay#688
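The filter-and-dedupe behavior the commit describes can be modeled in isolation. This is a minimal sketch, not the real wallet API — the types, field names, and the `scan_for_payments` helper below are simplified stand-ins for illustration:

```rust
use std::collections::HashSet;

// Simplified stand-ins for the real wallet/DashPay types (hypothetical).
struct TxOutput { address: String, duffs: u64 }
struct WalletTx { tx_id: String, is_incoming: bool, outputs: Vec<TxOutput> }
struct Payment { tx_id: String, address: String, duffs: u64 }

/// Sketch of the cross-reference scan: skip outgoing and already-saved
/// transactions, and record at most one matching output per transaction.
fn scan_for_payments(
    txs: &[WalletTx],
    contact_addresses: &HashSet<String>,
    known_tx_ids: &HashSet<String>,
) -> Vec<Payment> {
    let mut found = Vec::new();
    for tx in txs {
        if !tx.is_incoming || known_tx_ids.contains(&tx.tx_id) {
            continue; // outgoing or duplicate: skip
        }
        for out in &tx.outputs {
            if contact_addresses.contains(&out.address) {
                found.push(Payment {
                    tx_id: tx.tx_id.clone(),
                    address: out.address.clone(),
                    duffs: out.duffs,
                });
                break; // single match per transaction
            }
        }
    }
    found
}

fn main() {
    let addrs: HashSet<String> = ["XcontactAddr1".to_string()].into_iter().collect();
    let known: HashSet<String> = ["seen".to_string()].into_iter().collect();
    let txs = vec![
        WalletTx { tx_id: "seen".into(), is_incoming: true,
                   outputs: vec![TxOutput { address: "XcontactAddr1".into(), duffs: 100 }] },
        WalletTx { tx_id: "new".into(), is_incoming: true,
                   outputs: vec![TxOutput { address: "XcontactAddr1".into(), duffs: 250 }] },
        WalletTx { tx_id: "out".into(), is_incoming: false,
                   outputs: vec![TxOutput { address: "XcontactAddr1".into(), duffs: 50 }] },
    ];
    let payments = scan_for_payments(&txs, &addrs, &known);
    assert_eq!(payments.len(), 1);        // only the unseen incoming tx matches
    assert_eq!(payments[0].tx_id, "new");
    println!("found {} new payment(s)", payments.len());
}
```

A model like this is also a convenient fixture for the unit tests the review asks for (outgoing skipped, known tx_ids skipped, first matching output only), since it needs no database or SPV wallet.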
Force-pushed from 0b77be8 to 2cdfea4
Addressed the latest review findings on
Verification run on the updated commit:
thepastaclaw left a comment
Code Review
The PR is described as a focused fix for DashPay payment history wiring, but the actual merge diff against v1.0-dev is a massive feature branch: 179 files changed with 34k+ insertions across runtime code, database/schema work, SPV, shielded flows, MCP tooling, CLI, documentation, and backend E2E infrastructure. That mismatch is the core review problem here — this is not reviewable or merge-safe as a narrowly scoped payment-history bugfix PR.
Reviewed commit: 2cdfea4
🔴 1 blocking
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/dashpay/incoming_payments.rs`:
- [BLOCKING] lines 1-404: This payment-history fix PR is not actually a focused bugfix — it merges a huge product branch into `v1.0-dev`
The PR description says this change wires DashPay payment history to wallet transactions by adding retroactive wallet scans and loading contact payment history from the database. But the actual merge diff against `v1.0-dev` is 179 files with 34k+ insertions touching `src/app.rs`, `src/backend_task/error.rs`, `src/spv/manager.rs`, multiple database modules, MCP tooling, shielded-wallet flows, CLI binaries, release workflows, docs, and a large backend-E2E test framework. That means approving/merging this PR would not just land the payment-history fix — it would also merge a very large body of unrelated product changes that are not disclosed by the PR summary and cannot be meaningfully audited as part of a focused bugfix review. Split this into (a) a true payment-history fix PR containing only the DashPay/wallet-history changes, and (b) separate feature/integration PRs for the broader application changes. As-is, the PR description materially understates what will actually merge.
Issue
Fixes #688
Description
The Payment History screen shows nothing because payments were only recorded when sent through the app UI. Wallet transactions from SPV sync were never cross-referenced with DashPay contact addresses, and `ContactDetailsScreen` never loaded payment history from the database.

Root Cause

- The `dashpay_payments` table was only populated by payments initiated in-app
- `process_incoming_payment()` existed but was never called for SPV-synced transactions
- `ContactDetailsScreen` had an empty `payment_history` vector that was never populated

Fix
New function `scan_wallet_transactions_for_dashpay_payments()` in `incoming_payments.rs`:

- Idempotent (`tx_id UNIQUE` constraint + client-side dedup)
- Skips outgoing transactions via `WalletTransaction::is_incoming()`

Integration points:
- `LoadPaymentHistory` backend task for immediate retroactive detection when opening Payment History
- SPV reconcile (`wallet_lifecycle.rs`) for ongoing automatic detection after wallet sync

UI fix:
`ContactDetailsScreen::load_contact_info()` now loads and filters payment history from the database for the specific contact.

Testing
- `cargo check` ✅
- `cargo clippy -- -D warnings` ✅ (clean)

Summary by CodeRabbit
New Features
Database
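The duff-to-Dash normalization mentioned in the review walkthrough is a fixed 10^8 scaling (1 DASH = 100,000,000 duffs, the same base-unit scheme as Bitcoin's satoshi). A minimal sketch — the helper name and formatting are hypothetical, the real UI may render amounts differently:

```rust
/// 1 DASH = 100_000_000 duffs.
const DUFFS_PER_DASH: u64 = 100_000_000;

/// Hypothetical helper: format a duff amount as a Dash string
/// for a payment-history row, using integer math to avoid
/// floating-point rounding on money values.
fn duffs_to_dash_string(duffs: u64) -> String {
    let whole = duffs / DUFFS_PER_DASH;
    let frac = duffs % DUFFS_PER_DASH;
    format!("{}.{:08} DASH", whole, frac)
}

fn main() {
    assert_eq!(duffs_to_dash_string(150_000_000), "1.50000000 DASH");
    assert_eq!(duffs_to_dash_string(5), "0.00000005 DASH");
    println!("{}", duffs_to_dash_string(123_456_789));
}
```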