NTFS drive: questions during DIY recovery attempt

admin · 28 May 2022 · data recovery

I recently decided to learn more about data recovery after a 2TB external HDD I bought for my parents started failing. That hard drive was supposed to be the neat centralized backup for the dozen smaller hard drives that they owned, some of which are 15 years old. It’s ironic that the shiny new hard drive was the one that failed, not the old ones.

I think there is technically no data loss, since I tried a few of the older, smaller drives and they still mounted. But it would’ve been a pain to re-combine all those drives again, plus I did some manual de-duplication work after I put all the files onto one drive. That’s why I decided to do a DIY recovery as a learning opportunity that also had the potential to save some of my time.

I ended up with a few questions during the process that I couldn’t answer even with online research, and would love to hear what the pros know about them:

  1. I used ddrescue to clone the drive, and though I recovered 99.85% of it, I noticed that bad areas often come in clumps several MB long. I had to skip the scraping pass since it was too slow, so I couldn’t verify that all those regions are truly bad sectors, but I did scrape a few of them and the ones I tried turned out to be 100% bad sectors. Is this normal? I had 2.7GB tied up in those sections, so if they were all bad that would imply my drive had 5.4 million bad sectors, whereas SMART only reported a pending sector count of 2.4k (the first sketch after this list shows how I tallied the suspect regions from the mapfile). Could it be an artifact of using USB for my bad drive? My drive is a WD Elements, so I don’t think it has a SATA interface, and I don’t have any SATA ports or adapters either. But the impression I’m getting is that USB is only bad because it reads bad areas very slowly and occasionally forces you to power cycle on errors, not that the number of bad sectors would change.

  2. The drive was formatted NTFS, and some of the bad sectors landed in the MFT. I backed up the first 80GB of my clone and tried chkdsk to see if it would work well enough with a ~99.85% intact MFT, which it did not (it only found a subset of my files). I then restored those 80GB. Could chkdsk still have damaged the data, beyond making changes to the MFT? I didn’t have enough free space to fully clone my clone. I’m confident my 80GB included the MFT (the second sketch below shows one way to check), and AFAIK all the chkdsk actions only act on metadata that lives there.

  3. Afterwards, I looked around for recovery software. I knew from earlier experience that the heavily marketed ones usually don’t recover directory structure, which would suck for me, so I decided to use ZAR after reading this article. Only later did I find this subreddit, which recommends R-Studio. Are both tools good for my scenario, where my MFT is almost but not quite intact? I’m pretty sure chkdsk gave bad results because it thought my MFT was smaller than it really was; maybe it tripped on a hole in the data left by a run of bad sectors. ZAR’s design philosophy seemed geared toward file system repair rather than file recovery, and it seemed more advanced than chkdsk, so I figured it wouldn’t trip on holes like that (the third sketch below illustrates the idea). This spreadsheet also claimed that it was good for corrupted NTFS; I wonder if my case matches the “severely corrupted NTFS” description.
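
As a sanity check on the math in question 1, here is a rough Python sketch that tallies the regions a GNU ddrescue mapfile marks with each status, assuming 512-byte logical sectors and a mapfile named recovery.map (both are placeholders for my setup, not anything the tools mandate):

    # Tally regions in a GNU ddrescue mapfile by status. Data lines look like
    # "<pos> <size> <status>" in hex, where status is '?' (non-tried),
    # '*' (non-trimmed), '/' (non-scraped), '-' (bad sector) or '+' (finished).
    # Comment lines start with '#'; the first non-comment line is the
    # current position/status line and carries no size, so it is skipped.

    SECTOR_SIZE = 512  # assumption: the drive reports 512-byte logical sectors

    def tally(mapfile_path):
        regions = {}  # status -> (region count, total bytes)
        with open(mapfile_path) as f:
            lines = [s for s in (l.strip() for l in f)
                     if s and not s.startswith('#')]
        for line in lines[1:]:  # lines[0] is the current-position line
            _pos, size, status = line.split()[:3]
            count, total = regions.get(status, (0, 0))
            regions[status] = (count + 1, total + int(size, 0))
        return regions

    if __name__ == '__main__':
        for status, (count, total) in sorted(tally('recovery.map').items()):
            print(f"{status}: {count} regions, {total} bytes"
                  f" (~{total // SECTOR_SIZE} sectors)")

At 512 bytes per sector, 2.7GB of unrecovered regions is on the order of 5 million sectors, which is where my back-of-envelope figure comes from; the '/' (non-scraped) total also shows how much of that is still unverified because I skipped the scraping pass.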
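
For question 2, one way to double-check that the first 80GB of the clone covers the start of the MFT is to read the NTFS boot sector and compute the MFT’s byte offset. A minimal sketch, assuming a partition image named clone.img (a placeholder; for a whole-disk image you would first add the partition’s start offset):

    import struct

    # Locate the start of $MFT from an NTFS boot sector (all little-endian):
    # bytes/sector is a u16 at offset 0x0B, sectors/cluster a u8 at 0x0D,
    # and the $MFT starting cluster a u64 at 0x30.

    def mft_offset(image_path):
        with open(image_path, 'rb') as f:
            boot = f.read(512)
        if boot[3:11] != b'NTFS    ':
            raise ValueError('no NTFS boot sector at offset 0')
        bytes_per_sector = struct.unpack_from('<H', boot, 0x0B)[0]
        sectors_per_cluster = boot[0x0D]
        mft_cluster = struct.unpack_from('<Q', boot, 0x30)[0]
        return mft_cluster * sectors_per_cluster * bytes_per_sector

    if __name__ == '__main__':
        offset = mft_offset('clone.img')  # placeholder image name
        print(f"$MFT starts {offset / 1e9:.2f} GB into the volume")

Note this only locates where the MFT begins; its runs can be fragmented further into the volume, so a start inside the first 80GB doesn’t by itself prove the whole table was included.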
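
On question 3, my understanding (which I’d love to have confirmed) is that dedicated recovery tools tolerate holes because they don’t have to trust the MFT’s recorded length; they can scan the raw image for MFT record signatures instead. Here is a toy illustration of that general technique (not what ZAR or R-Studio actually do internally), assuming the usual 1024-byte record size and the same placeholder image name:

    # Count NTFS MFT file records in a raw image by signature instead of
    # trusting the MFT's own bookkeeping. Records normally span 1024 bytes
    # and begin with the ASCII magic 'FILE' ('BAAD' marks a record that the
    # OS itself found damaged).

    RECORD_SIZE = 1024   # assumption: the common default record size
    CHUNK = 1024 * 1024  # scan 1 MiB at a time; a multiple of RECORD_SIZE

    def count_file_records(image_path):
        found = 0
        with open(image_path, 'rb') as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                for i in range(0, len(chunk), RECORD_SIZE):
                    if chunk[i:i + 4] == b'FILE':
                        found += 1
        return found

    if __name__ == '__main__':
        print(count_file_records('clone.img'))  # placeholder image name

Real tools validate much more of the record header, so a bare signature scan like this over-counts, but it suggests why a hole in the MFT wouldn’t necessarily stop a recovery tool the way it apparently stopped chkdsk.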

Thanks!
