The same is true for HDDs, or for any other data storage device.
I have stored data on HDDs for more than 5 years. Fortunately, I have been careful to keep a duplicate of every HDD, and because I did not trust the error-correction codes used by the drives, all files were stored together with hashes of their content for error detection.
On the HDDs that had held data for many years, it was rare to see no errors at all. Nevertheless, I have not lost any data, because the few errors never occurred in the same place on both HDDs. Sometimes the errors were reported by the HDDs themselves, but other times no error was reported and the files were nonetheless corrupted, as detected by the content hashes and by comparison with the corresponding good copy on the other HDD.
I also make checksums for every file and verify them twice a year. Across millions of files totalling 450 TB, I end up with about one failed checksum every two years. If you are seeing more frequent checksum failures, I would check for RAM errors first.
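For what it's worth, a minimal sketch of that kind of checksum-and-verify setup in Python. The choice of SHA-256, the manifest file name, and the function names are my own assumptions, not the commenter's actual tooling:

import hashlib
import json
import pathlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks so large files do not have to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest="checksums.json"):
    # Record a digest for every regular file under root.
    sums = {str(p): sha256_of(p)
            for p in sorted(pathlib.Path(root).rglob("*")) if p.is_file()}
    pathlib.Path(manifest).write_text(json.dumps(sums, indent=2))

def verify_manifest(manifest="checksums.json"):
    # Return the paths whose current digest no longer matches the manifest.
    sums = json.loads(pathlib.Path(manifest).read_text())
    return [p for p, digest in sums.items() if sha256_of(p) != digest]

Run build_manifest once after writing the data, verify_manifest on each pass, and restore anything it returns from the duplicate copy.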
zfs and btrfs would do this automatically and have built in data scrub commands.
I have a checksum, uh, dream for lack of a better word, but I fear I lack the talent to pull it off. The problem with a checksum is that it only tells you that an error exists (hopefully), but it does not tell you where.
Imagine writing out your stream of bytes in an m x n grid. You could then compute a checksum for each of the m rows and each of the n columns, which costs an additional (m + n) checksum bytes of storage. A single error is localized at the intersection of the row and column with bad checksums; one could then simply iterate through the other 255 possible byte values and correct the issue. Two errors can give two situations. The more likely is four bad checksums (two rows, two columns), and you could again iterate. The less likely is three bad checksums, because the two errors fall in the same row or column.
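A rough sketch of how that localize-and-iterate step could look in Python, for the single-error case. The 1-byte XOR checksum and all of the names below are my own choices, just to make the idea concrete:

def xor_checksum(values):
    # 1-byte checksum: XOR of all bytes in a row or column.
    acc = 0
    for v in values:
        acc ^= v
    return acc

def grid_checksums(grid):
    # grid is a list of m rows, each a list of n byte values (0..255).
    row_sums = [xor_checksum(row) for row in grid]
    col_sums = [xor_checksum(col) for col in zip(*grid)]
    return row_sums, col_sums

def fix_single_error(grid, row_sums, col_sums):
    # Locate a single corrupted byte at the intersection of the failing row
    # and column, then try candidate values until both checksums pass again.
    cur_rows, cur_cols = grid_checksums(grid)
    bad_rows = [i for i, (a, b) in enumerate(zip(cur_rows, row_sums)) if a != b]
    bad_cols = [j for j, (a, b) in enumerate(zip(cur_cols, col_sums)) if a != b]
    if not bad_rows and not bad_cols:
        return True                  # nothing detected
    if len(bad_rows) != 1 or len(bad_cols) != 1:
        return False                 # more than one error, give up
    i, j = bad_rows[0], bad_cols[0]
    for candidate in range(256):     # iterate through the possibilities
        grid[i][j] = candidate
        new_rows, new_cols = grid_checksums(grid)
        if new_rows[i] == row_sums[i] and new_cols[j] == col_sums[j]:
            return True
    return False

grid = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
rows, cols = grid_checksums(grid)
grid[1][2] ^= 0x40                   # simulate a flipped bit in one byte
assert fix_single_error(grid, rows, cols)
assert grid[1][2] == 7

With an XOR checksum the iteration is not strictly needed (the correct byte can be recovered directly by XORing the corrupted byte with the stored and recomputed row checksums), but the loop mirrors the "try the other 255 values" approach described above and works with any 1-byte checksum.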
I ran the math out for the data being rewritten as a volume and a hypervolume (four dimensions). I think the hypervolume was "too much" checksum, but the three-d version looked ... doable.
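A back-of-the-envelope estimate (my own numbers, not the parent's math) points the same way. For a $d$-dimensional block with side length $s$, there are $s^d$ data bytes and one checksum byte per axis-aligned line, i.e. $d\,s^{d-1}$ checksum bytes, so the overhead is

$$\frac{d\,s^{d-1}}{s^d} = \frac{d}{s} = \frac{d}{N^{1/d}}, \qquad N = s^d \text{ data bytes per block.}$$

With $N = 64$ KiB per block, that is roughly $0.8\%$ overhead for $d = 2$, $7.4\%$ for $d = 3$, and $25\%$ for $d = 4$.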
Someone smarter than I has probably already done this.