Looking at how people need to use subclassing or monkeypatching for some straightforward cases of media pipeline storage, such as configuring the target Google Cloud Storage project ID or configuring a new storage class for a different service, I think we need to look into making media pipeline storage configuration more flexible.
Specifically, I think we need to consider:
Expose FilesPipeline's STORE_SCHEMES as a setting (with a better name) that also works for images, rather than requiring subclassing to define new storage classes.
Make storage classes Scrapy components that can define from_crawler or from_settings to configure themselves based on settings, instead of requiring users to subclass base media pipeline classes to set class-level values from settings, which feels like monkeypatching.
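To make the two ideas above concrete, here is a minimal plain-Python sketch (no Scrapy imports; the setting name FILES_STORE_SCHEMES, the classes FSStore and GCSStore, and the from_settings signature are all hypothetical, not existing Scrapy API) of what a settings-driven scheme registry with self-configuring storage components could look like:

```python
# Hypothetical sketch: a scheme -> storage-class registry driven by a setting,
# with storage classes configuring themselves from settings instead of
# relying on class-level attributes patched in a pipeline subclass.

class FSStore:
    def __init__(self, basedir):
        self.basedir = basedir

    @classmethod
    def from_settings(cls, settings, uri):
        return cls(basedir=uri)


class GCSStore:
    def __init__(self, bucket, project_id):
        self.bucket = bucket
        self.project_id = project_id

    @classmethod
    def from_settings(cls, settings, uri):
        # The storage class reads its own options from settings,
        # rather than the pipeline setting GCSStore.GCS_PROJECT_ID for it.
        return cls(bucket=uri, project_id=settings.get("GCS_PROJECT_ID"))


# Defaults that a user could extend or override via a setting,
# instead of subclassing the pipeline to redefine STORE_SCHEMES.
DEFAULT_STORE_SCHEMES = {"": FSStore, "file": FSStore, "gs": GCSStore}


def build_store(uri, settings):
    """Pick a storage class by URI scheme, allowing overrides via a setting."""
    schemes = {**DEFAULT_STORE_SCHEMES, **settings.get("FILES_STORE_SCHEMES", {})}
    scheme = uri.split("://", 1)[0] if "://" in uri else ""
    return schemes[scheme].from_settings(settings, uri)


settings = {"GCS_PROJECT_ID": "my-project"}
store = build_store("gs://my-bucket", settings)
```

With this shape, pointing FILES_STORE at a gs:// URI and setting GCS_PROJECT_ID would be enough; no subclass of the media pipeline is needed just to pass one value through.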