Expanding superblock deletes File #962
Hi @nobody19, thanks for creating an issue. I think you're right that this is the same issue as #953 (ignoring the #959 red herring). If you can reproduce this efficiently, that's really quite promising. The next goal is to try to reproduce this locally, to rule out hardware issues and make debugging tractable. So, sorry for the barrage of questions:
Hi @geky, thanks a lot for your help. Yes, it is more or less the example from the README. Below I added the lfs structure; the attached file should contain all the functions I'm using to run littleFS. If you have any input, just let me know.

```c
const struct lfs_config mylfs_cfg = {
    .context = nullptr,

    // block device operations
    .read  = user_provided_block_device_read,
    .prog  = user_provided_block_device_prog,
    .erase = user_provided_block_device_erase,
    .sync  = user_provided_block_device_sync,

    // block device configuration
    .read_size = 32,
    .prog_size = 256,
    .block_size = 4096,
    .block_count = 16384,
    .block_cycles = 500,
    .cache_size = 4096,
    .lookahead_size = LFS__LOOKAHEAD_CNT,
    .read_buffer = lfs_read_buffer,
    .prog_buffer = lfs_prog_buffer,
    .lookahead_buffer = (uint8_t*)lfs_lookahead_buffer,
    .name_max = LFS_NAME_MAX,
    .file_max = LFS_FILE_MAX,
    .attr_max = LFS_ATTR_MAX,
};
```
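For reference, the static buffers named in this config have to line up with the cache and lookahead sizes. A minimal sketch of matching declarations, noting that `LFS__LOOKAHEAD_CNT` is the user's own macro whose real value isn't shown in the thread, so a placeholder is assumed here:

```c
#include <stdint.h>

// Assumed value -- LFS__LOOKAHEAD_CNT is the user's own macro and its
// real value isn't shown in the thread. It must be a multiple of 8.
#define LFS__LOOKAHEAD_CNT 16

// littlefs requires read_buffer and prog_buffer to each be cache_size
// bytes, and lookahead_buffer to be lookahead_size bytes.
static uint8_t  lfs_read_buffer[4096];                        // == cache_size
static uint8_t  lfs_prog_buffer[4096];                        // == cache_size
static uint32_t lfs_lookahead_buffer[LFS__LOOKAHEAD_CNT / 4]; // lookahead bytes
```

The `uint32_t` element type matches the `(uint8_t*)` cast in the config above, which suggests the lookahead buffer is word-aligned in the original code.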
Hi @nobody19, thanks for the extra info. Unfortunately I wasn't able to reproduce this locally. Is it possible that the

Also just a note:

```c
file.cfg = &fcfg;
int retOpen = lfs_file_open(&lfs, &file, "boot_count2", LFS_O_RDWR);
```

The per-file config should instead be passed through `lfs_file_opencfg`:

```c
int retOpen = lfs_file_opencfg(&lfs, &file, "boot_count2", LFS_O_RDWR, &fcfg);
```

I don't think this is the source of the problem though; littlefs will fall back to malloc.
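A fuller sketch of the per-file-config route, assuming a statically allocated file buffer (the names `file_buffer` and `fcfg` here are illustrative, not taken from the thread):

```c
// Per-file buffer must be cache_size bytes when provided; supplying it
// avoids littlefs's malloc fallback entirely.
static uint8_t file_buffer[4096];

static const struct lfs_file_config fcfg = {
    .buffer = file_buffer,
};

// The per-file config is passed at open time, not poked into file.cfg:
int retOpen = lfs_file_opencfg(&lfs, &file, "boot_count2", LFS_O_RDWR, &fcfg);
```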
Hi @geky, I would also expect some trouble in the erase function, but it looks like the multiplication by block_size is correct. I checked the code there and also the datasheet. The datasheet expects a 24/32-bit address for the sector erase. Thanks for the note.
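The multiplication in question can be isolated into a tiny helper to sanity-check it; `lfs_block_to_addr` is a hypothetical name, not from the actual driver:

```c
#include <stdint.h>

// Converts a littlefs block index into the byte address the flash's
// sector-erase command expects (the MX25UM51245G takes a 24/32-bit
// byte address, so the block index must be scaled by block_size).
static uint32_t lfs_block_to_addr(uint32_t block, uint32_t block_size)
{
    return block * block_size;
}
```

With block_size = 4096, block 2 maps to byte address 0x2000; the erase hook would pass that result on to the BSP's sector-erase routine rather than the raw block index.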
I found out why it first "crashes" around 500: it depends on block_cycles, which I have set to 500. But the "why" is not clear to me yet.

```c
// Number of erase cycles before littlefs evicts metadata logs and moves
// the metadata to another block. Suggested values are in the
// range 100-1000, with large values having better performance at the cost
// of less consistent wear distribution.
//
// Set to -1 to disable block-level wear-leveling.
int32_t block_cycles;
```
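Since the expansion is tied to block_cycles, lowering it is a quick way to trigger the failure path sooner. A debug-only tweak of the config posted earlier in the thread (a sketch, not a recommended production value):

```c
const struct lfs_config mylfs_cfg = {
    /* ... block device hooks and geometry as above ... */

    // Debug-only: evict/relocate metadata after far fewer erase cycles so
    // the superblock-expansion path runs almost immediately. The suggested
    // production range is 100-1000.
    .block_cycles = 32,
};
```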
Ah yeah, looks like you're right. I had searched for BSP_OSPI_NOR_Erase_Block and found some code that omitted the multiplication, but it could have just been a coincidence.
Hmm, if the code is more stable with remounting, that suggests something is going wrong on the device/RAM side, or that the driver is falling out of sync with what actually exists on disk, since the remount could be temporarily fixing whatever is going wrong.
This assert is failing because the file is not open. Most likely an error occurred during the open call. Though if the error is LFS_ERR_NOENT because the file disappeared, that is a problem.
The superblock expansion, mdir relocations, etc. are controlled by block_cycles. You could make it very small temporarily to speed up debugging. Unfortunately I haven't been able to reproduce what you're seeing even with a very small block_cycles.
Hi,
I have some problems with the superblock expansion. When I increment the boot counter 500 times, lfs reports "Expanding superblock at rev 1001", and after that the boot-counter file is no longer available. Currently I'm incrementing the boot counter in a for loop to hit the error quickly.
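A minimal version of such a loop, modeled on the littlefs README's boot_count example (the `lfs`, `file`, and `mylfs_cfg` objects are assumed to match the config posted elsewhere in the thread, and error checking is omitted for brevity):

```c
// Repro sketch: mount, bump the boot counter, unmount, repeatedly.
// Around iteration ~500 (block_cycles), littlefs reports
// "Expanding superblock" and the file reportedly disappears.
lfs_t lfs;
lfs_file_t file;
uint32_t boot_count = 0;

for (int i = 0; i < 600; i++) {
    lfs_mount(&lfs, &mylfs_cfg);

    lfs_file_open(&lfs, &file, "boot_count", LFS_O_RDWR | LFS_O_CREAT);
    lfs_file_read(&lfs, &file, &boot_count, sizeof(boot_count));

    boot_count += 1;
    lfs_file_rewind(&lfs, &file);
    lfs_file_write(&lfs, &file, &boot_count, sizeof(boot_count));

    lfs_file_close(&lfs, &file);
    lfs_unmount(&lfs);
}
```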
I'm using lfs v2.9.1 with STM32U5 octaspi flash MX25UM51245G.
Maybe this is related to #953.
Thanks a lot for any comments that can help me solve the problem.
nobody