According to the HFS documentation, the hfs_thread structure has a variable size. But in tsk_hfs.h it is defined as fixed: the hfs_uni_str member of the hfs_thread structure is declared with a fixed size of 512 bytes.
So in my opinion the test in hfs_dent.c at line 261 is wrong:
```c
else if (rec_type == HFS_FOLDER_THREAD) {
    if ((nodesize < sizeof(hfs_thread)) || (rec_off2 > nodesize - sizeof(hfs_thread))) {
```
There are records that fail this test because they are smaller than sizeof(hfs_thread), even though they are built correctly according to the file system specification and are, in my opinion, valid.
So I think the hfs_thread size should instead be calculated as follows:
```c
hfs_thread* thread = (hfs_thread*) &rec_buf[rec_off2];
const int32_t thread_size = 0x50 + tsk_getu16(hfs->fs_info.endian, thread->name.length);
```
where 0x50 is the size of the constant part of the structure (the hfs_thread header), and thread->name.length is the length of the variable name field.
So, summarizing, the thread size should be calculated as: size of the constant part + size of the variable part.
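A standalone sketch of what the revised check could look like, outside of TSK: the `0x50` constant is taken from this report, the helper names (`be16`, `hfs_thread_size`, `thread_rec_fits`) are hypothetical, and note that per Apple's TN1150 the on-disk fixed part of a thread record is 10 bytes and the name is UTF-16 (2 bytes per character), so the exact constants should be double-checked against the struct layout in tsk_hfs.h:

```c
#include <stdint.h>
#include <stddef.h>

/* Size of the constant part of the thread record, as quoted in the report
 * above. Treat this value as an assumption: per Apple TN1150 the on-disk
 * fixed part is 10 bytes and the name is UTF-16 (2 bytes per character). */
#define HFS_THREAD_FIXED_SIZE 0x50

/* Offset of the name length field inside the record:
 * rec_type (2) + reserved (2) + parent_cnid (4) = 8 bytes. */
#define HFS_THREAD_NAME_LEN_OFF 8

/* HFS+ B-tree records are big-endian on disk. */
static uint16_t be16(const uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

/* Variable thread-record size: constant part + stored name length. */
static int32_t hfs_thread_size(const uint8_t *rec_buf, size_t rec_off2)
{
    return HFS_THREAD_FIXED_SIZE
        + (int32_t)be16(&rec_buf[rec_off2 + HFS_THREAD_NAME_LEN_OFF]);
}

/* Revised bounds check: the record fits if the node is at least
 * thread_size bytes and the record starts early enough to hold it. */
static int thread_rec_fits(uint16_t nodesize, size_t rec_off2,
                           int32_t thread_size)
{
    return (int32_t)nodesize >= thread_size
        && rec_off2 <= (size_t)(nodesize - thread_size);
}
```

With this, a record smaller than sizeof(hfs_thread) but consistent with its own stored name length would pass the check instead of being rejected.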
Unfortunately, I cannot provide samples; they do not belong to me and are confidential.
Kind regards
Bogdan