- Fixed pricing on recovery (you know what you are paying - no nasty surprises).
- Quick recovery turnaround at no extra cost (our average recovery time is 2 days).
- Memory card chip reading services (1st in the UK to offer this service).
- RAID rebuild recovery service (a specialist service for business customers who have suffered a failed server rebuild).
- Our offices are 100% UK based and we never outsource any recovery work.
- Strict non-disclosure: privacy and security are 100% guaranteed.
Dell PowerEdge | 7-Disk RAID 5 (Hot Spare) | Multiple Failed Rebuilds

Incident (client narrative, condensed):
A 7-disk RAID 5 with a hot spare degraded over a weekend. The controller initiated an automatic rebuild to the spare and appeared to complete. On reboot the array flagged another member as failed. Two subsequent administrator-initiated rebuilds (after drive swaps) failed at 78% and 6% respectively. The server would not boot. The array hosted a file share plus Microsoft Exchange and a CRM SQL Server instance.
Assessment
- Likely latent media defects and/or marginal heads on at least two members, compounded by rebuild stress and write-hole/parity drift introduced during the repeated reshape/rebuild attempts (see the parity-drift sketch after this list).
- High risk of controller metadata divergence across members (post-failure writes, inconsistent event journals).
- Required actions: freeze the topology, image all original members, reverse-engineer the true, last-consistent geometry (member order, stripe size, parity rotation, start offsets), then rebuild the array virtually from images only before repairing file systems and application stores (EDB/MDF).
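The parity drift referred to above can be located before any reconstruction is attempted. Below is a minimal illustrative sketch, not our production tooling: it assumes seven equal-sized raw member images with a common data start offset (file names are placeholders) and flags byte ranges where the RAID 5 parity no longer cancels to zero, i.e. candidate write-hole or divergence regions.

```python
"""Minimal parity-drift scan across RAID 5 member images.

Assumptions (hypothetical, for illustration only):
  - member_0.img ... member_6.img are raw, read-only images of equal size
  - all members share the same data start offset
  - any byte range where XOR across all members is non-zero is a candidate
    write-hole / divergence region (stale parity or post-failure writes)
"""
from functools import reduce

IMAGES = [f"member_{i}.img" for i in range(7)]   # placeholder file names
START_OFFSET = 0                                 # controller data start, if known
CHUNK = 1 * 1024 * 1024                          # scan granularity: 1 MiB

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def scan_parity_drift(images=IMAGES, offset=START_OFFSET, chunk=CHUNK):
    """Yield (offset, inconsistent_byte_count) for every chunk whose parity
    does not cancel. In a healthy RAID 5, XOR across all members is zero at
    every byte position, regardless of stripe size or parity rotation."""
    handles = [open(p, "rb") for p in images]    # read-only: never touch originals
    try:
        pos = offset
        for h in handles:
            h.seek(pos)
        while True:
            blocks = [h.read(chunk) for h in handles]
            if not all(blocks) or len(set(map(len, blocks))) != 1:
                break                            # end of the shortest image
            parity = reduce(xor_bytes, blocks)
            drift = sum(1 for b in parity if b)
            if drift:
                yield pos, drift
            pos += len(blocks[0])
    finally:
        for h in handles:
            h.close()

if __name__ == "__main__":
    for off, drift in scan_parity_drift():
        print(f"parity mismatch near offset {off:#x}: {drift} inconsistent bytes")
```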
Work Performed
- Forensic intake & preservation
  - Catalogued all 7 original members and the 2 replacement disks provided.
  - Logged controller NVRAM, slot map, and any foreign/learned configurations.
  - Did not repeatedly power up suspect disks; moved immediately to read-only acquisition.
- Disk triage & imaging
  - Per-member SMART/telemetry review identified two mechanically compromised drives (intermittent head-stack faults).
  - Performed mechanical remediation on those members (donor component matching by model, micro-jog, head map; alignment and adaptive calibration), enabling stable read-back.
  - Imaged all seven original members using adaptive head-select cloning with ECC-aware retries and progressive block-size reduction (sketched below). Achieved 100% logical coverage on the two previously failing units after the mechanical work; the remaining members imaged with sparse remapped regions documented in defect maps.
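For illustration, the progressive block-reduction idea looks roughly like the sketch below. The actual acquisition used dedicated imaging hardware with per-head control and ECC-aware retries, which a short script cannot reproduce; the device path, image names and block sizes here are placeholders.

```python
"""Sketch of progressive block-reduction imaging with a defect map (POSIX).

Illustrative only: the real recovery used dedicated imaging hardware.
Paths and block sizes below are placeholders.
"""
import os

SOURCE = "/dev/sdX"              # placeholder source device (read-only)
TARGET = "member_3.img"          # destination image
DEFECT_LOG = "member_3.defects"  # unreadable ranges, one "start length" per line
MAX_BLOCK = 1 * 1024 * 1024      # start with 1 MiB reads
MIN_BLOCK = 4096                 # smallest read window before giving up

def image_with_defect_map(source=SOURCE, target=TARGET, defect_log=DEFECT_LOG):
    src = os.open(source, os.O_RDONLY)
    try:
        size = os.lseek(src, 0, os.SEEK_END)
        with open(target, "wb") as dst, open(defect_log, "w") as log:
            pos = 0
            while pos < size:
                block = min(MAX_BLOCK, size - pos)
                data = None
                # Shrink the read window on error until it succeeds or bottoms out.
                while True:
                    try:
                        data = os.pread(src, block, pos)
                        break
                    except OSError:
                        if block <= MIN_BLOCK:
                            break
                        block = max(block // 2, MIN_BLOCK)
                if data:
                    dst.seek(pos)
                    dst.write(data)
                    pos += len(data)
                else:
                    # Unreadable even at the smallest window: log, zero-fill, move on.
                    log.write(f"{pos} {block}\n")
                    dst.seek(pos)
                    dst.write(b"\x00" * block)
                    pos += block
    finally:
        os.close(src)

if __name__ == "__main__":
    image_with_defect_map()
```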
- Virtual array reconstruction
  - Parsed on-disk metadata; derived stripe size, parity rotation (left-symmetric), member order, and start offsets.
  - Identified post-incident divergence caused by the successive rebuild attempts; constructed a version map to select the last coherent timeline across members.
  - Executed virtual parity reconciliation to close write-hole regions; no writes to the originals (images only).
  - Result: a consistent block-device image of the logical volume (the address arithmetic involved is sketched below).
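The address arithmetic behind the virtual rebuild is sketched below under the geometry derived above: a left-symmetric parity rotation, a uniform chunk size and a common data start offset. Member count, chunk size and file names are placeholders, the images are opened read-only, and a missing or distrusted member's chunk is regenerated purely in memory from the XOR of its peers, which is the same relation the parity-reconciliation step relies on.

```python
"""Sketch: logical-to-physical mapping for a left-symmetric RAID 5 and
regeneration of one member's chunk from parity.

Geometry values and file names are placeholders; the real parameters were
derived from the on-disk metadata. Images are opened read-only and nothing
is ever written back to them.
"""
from functools import reduce

N_MEMBERS  = 7                                   # disks in the parity set
CHUNK_SIZE = 64 * 1024                           # stripe unit (placeholder)
DATA_START = 0                                   # per-member data start offset
IMAGES     = [f"member_{i}.img" for i in range(N_MEMBERS)]

def map_logical_chunk(logical_chunk: int):
    """Return (member_index, physical_offset) of a logical data chunk
    under a left-symmetric parity rotation."""
    data_per_stripe = N_MEMBERS - 1
    stripe = logical_chunk // data_per_stripe
    d      = logical_chunk %  data_per_stripe
    parity_member = (N_MEMBERS - 1 - stripe % N_MEMBERS) % N_MEMBERS
    member = (parity_member + 1 + d) % N_MEMBERS  # data starts after parity, wraps
    return member, DATA_START + stripe * CHUNK_SIZE

def read_chunk(member: int, offset: int) -> bytes:
    with open(IMAGES[member], "rb") as f:
        f.seek(offset)
        return f.read(CHUNK_SIZE)

def read_logical_chunk(logical_chunk, missing=None) -> bytes:
    """Read one logical chunk of the virtual volume. If the chunk lives on
    the `missing` member, rebuild it by XOR-ing the other members' chunks
    in the same stripe (data + parity)."""
    member, offset = map_logical_chunk(logical_chunk)
    if member != missing:
        return read_chunk(member, offset)
    peers = [read_chunk(m, offset) for m in range(N_MEMBERS) if m != missing]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), peers)

if __name__ == "__main__":
    # Example: stream the first 16 logical chunks as they would appear on
    # the reconstructed volume, tolerating member 4 being absent.
    for lc in range(16):
        block = read_logical_chunk(lc, missing=4)
        print(f"logical chunk {lc}: {len(block)} bytes")
```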
- File system & application recovery
  - Verified and repaired the NTFS volumes (MFT/mirror consistency, log replay); a flavour of the MFT/mirror check is sketched below.
  - Exchange: mounted the recovered EDB stores; performed soft repair and logical consistency checks; exported mailboxes to PST per the client brief.
  - SQL Server: validated the MDF/NDF/LDF files; rebuilt transaction logs where required; attached the databases in a lab SQL instance; scripted verification queries; exported BAK files.
  - Recovered the file share hierarchy with ACLs where intact.
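To give a flavour of the MFT/mirror check, the sketch below reads the NTFS boot sector of the reconstructed volume image, locates $MFT and $MFTMirr, and compares the first four records byte for byte. The image path is a placeholder, and real repairs involve considerably more than a comparison; this is a read-only consistency probe.

```python
"""Sketch: compare the first NTFS $MFT records with their $MFTMirr copies.

The volume image path is a placeholder; this is a read-only consistency
probe, not a repair tool. Field offsets follow the standard NTFS boot
sector layout.
"""
import struct

VOLUME_IMAGE = "logical_volume.img"   # placeholder reconstructed volume image
MIRRORED_RECORDS = 4                  # $MFTMirr normally mirrors the first 4 records

def mft_vs_mirror(path=VOLUME_IMAGE):
    with open(path, "rb") as f:
        boot = f.read(512)
        if boot[3:11] != b"NTFS    ":
            raise ValueError("not an NTFS boot sector")
        bytes_per_sector    = struct.unpack_from("<H", boot, 0x0B)[0]
        sectors_per_cluster = boot[0x0D]                  # small-cluster volumes only
        cluster = bytes_per_sector * sectors_per_cluster
        mft_lcn, mftmirr_lcn = struct.unpack_from("<QQ", boot, 0x30)
        raw = struct.unpack_from("<b", boot, 0x40)[0]     # clusters per MFT record
        record_size = (1 << -raw) if raw < 0 else raw * cluster

        def records(lcn):
            f.seek(lcn * cluster)
            return [f.read(record_size) for _ in range(MIRRORED_RECORDS)]

        for i, (a, b) in enumerate(zip(records(mft_lcn), records(mftmirr_lcn))):
            status = "match" if a == b else "MISMATCH - inspect"
            print(f"MFT record {i}: {status}")

if __name__ == "__main__":
    mft_vs_mirror()
```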
Outcome
- Full data recovery. Turnaround: images within 8 hours; virtual RAID reconstruction and data extraction within 35–48 hours (critical-priority workflow).
- Deliverables: validated Exchange and SQL datasets plus the file share, accompanied by a SHA-256 manifest (see the sketch below) and a recovery report.
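A manifest of that kind can be generated with a few lines of standard tooling. The sketch below (directory and file names are placeholders) walks the deliverables tree and writes one SHA-256 line per file in the conventional checksum format.

```python
"""Sketch: build a SHA-256 manifest over a deliverables tree.

Directory and manifest names are placeholders; output is the conventional
"<digest>  <relative path>" line per file.
"""
import hashlib
from pathlib import Path

DELIVERABLES = Path("deliverables")        # placeholder export directory
MANIFEST     = Path("manifest.sha256")

def sha256_file(path: Path, buf_size: int = 1024 * 1024) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(buf_size):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(root: Path = DELIVERABLES, manifest: Path = MANIFEST) -> None:
    with manifest.open("w") as out:
        for p in sorted(root.rglob("*")):
            if p.is_file():
                out.write(f"{sha256_file(p)}  {p.relative_to(root)}\n")

if __name__ == "__main__":
    write_manifest()
```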
We always operate from forensic images, never from the original drives, to preserve evidence and enable repeatable workflows.


