RBD-638 (May 2026)

Summary: `rbd export-diff` fails with ENOENT when the destination image is on a remote pool. The problem is reproducible, likely a CLI parsing / auth issue, and can be temporarily mitigated by copying the destination image locally or using a block-device mapping. Next steps focus on logs, capability checks, and upstream investigation.

We have reproduced RBD-638 consistently in a test environment (Octopus 15.2.7). The failure occurs only when the destination image lives on a remote pool (accessed via CephFS or a separate cluster). `rbd info` works, but `rbd export-diff` aborts with “No such file or directory”.
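For concreteness, a rough sketch of the failing pattern; the pool, image, and snapshot names and the CephFS mount path below are placeholders, not our exact setup:

```bash
# Metadata lookups against the remote pool succeed:
rbd info remotepool/vm-disk

# Exporting a diff whose destination sits on the remote pool
# (here, a file on the CephFS mount backing it) aborts:
rbd export-diff --from-snap snap1 localpool/vm-disk@snap2 \
    /mnt/remote-cephfs/backups/vm-disk.diff
# ... fails with "No such file or directory" (ENOENT)
```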

Please let us know if additional information is needed.

The most likely cause appears to be a path-translation bug in the CLI when handling a remote pool spec. We have the following work-arounds in place (rough command sketches follow below):

• Copy the destination image locally first, then run `rbd export-diff`.
• Use `rbd diff` plus a manual transfer.
• Map the remote image as a block device and operate on `/dev/rbdX`.

Next steps: gather higher-verbosity logs, double-check client capabilities on the remote pool, and test the scenario without a CephFS mount (direct `--dest-pool` flag); the diagnostic commands we plan to run are also sketched below.
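Rough command sketches for the work-arounds; the pool, image, snapshot, and path names are placeholders rather than our exact specs:

```bash
# Work-around 1: copy the destination image into a local pool first
# (`deep cp` preserves snapshots), then run export-diff against the local copy.
rbd deep cp remotepool/vm-disk localpool/vm-disk-copy
rbd export-diff --from-snap snap1 localpool/vm-disk-copy@snap2 /var/tmp/vm-disk.diff

# Work-around 2: list the changed extents with `rbd diff` and transfer them manually.
rbd diff --from-snap snap1 --format json localpool/vm-disk@snap2 \
    > /var/tmp/vm-disk-extents.json

# Work-around 3: map the remote image as a block device and work on /dev/rbdX.
rbd map remotepool/vm-disk            # prints the device, e.g. /dev/rbd0
dd if=/dev/rbd0 of=/var/tmp/vm-disk.raw bs=4M status=progress
rbd unmap /dev/rbd0
```

And a sketch of the diagnostics mentioned under next steps; the client name (`client.backup`) and the debug levels are illustrative assumptions:

```bash
# Re-run the failing command with higher client-side verbosity
# (Ceph CLIs accept config overrides such as --debug-rbd on the command line).
rbd --debug-rbd=20 --debug-ms=1 export-diff localpool/vm-disk@snap2 \
    /mnt/remote-cephfs/backups/vm-disk.diff 2> /var/tmp/rbd-638-debug.log

# Double-check that the client's caps actually cover the remote pool.
ceph auth get client.backup
ceph osd pool ls detail
```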

Feel free to paste the table above directly into the bug, add any extra logs you have, and assign the appropriate owners. Happy debugging! 🚀

– Alice (QA)