Greetings! I'm new to your tool. Your friends at Vast and I have a long history at Yahoo, and now I'm at Intel.
What if I want to exercise a more complex metadata (MD) testing strategy, like I can with a commercial tool such as Virtana Workload Wisdom, where I can define:
Number of dirs
Number of subdirs in each dir
Depth of the directory tree
Number of files per directory
And then run this across each of any number of exported POSIX filesystems available to a cluster of compute hosts, and/or define a number of threads to traverse the hashed tree randomly or in a fixed order (north to south, west to east, etc.)?
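To make the layout I'm describing concrete, here is a minimal sketch of the kind of tree I'd want the benchmark to generate (this is a hypothetical illustration script of my own, not part of elbencho; the function and parameter names are made up for clarity):

```python
import os
import tempfile

def build_tree(root, dirs, subdirs_per_dir, depth, files_per_dir):
    """Create `dirs` top-level directories under `root`; inside each,
    nest `subdirs_per_dir` children per level down to `depth` levels,
    and place `files_per_dir` empty files in every directory."""
    def populate(path, level):
        # Files at every level of the tree.
        for f in range(files_per_dir):
            open(os.path.join(path, f"file_{f}"), "w").close()
        if level >= depth:
            return
        # Fan out into the next level of subdirectories.
        for s in range(subdirs_per_dir):
            sub = os.path.join(path, f"sub_{s}")
            os.makedirs(sub, exist_ok=True)
            populate(sub, level + 1)

    for d in range(dirs):
        top = os.path.join(root, f"dir_{d}")
        os.makedirs(top, exist_ok=True)
        populate(top, 1)

# Example: 2 top-level dirs, 3 subdirs per level, depth 2, 4 files per dir.
root = tempfile.mkdtemp(prefix="mdtree_")
build_tree(root, dirs=2, subdirs_per_dir=3, depth=2, files_per_dir=4)
```

The idea would be for each benchmark thread to walk a tree shaped like this (randomly or in order) on every exported filesystem, rather than the flat dirs-and-files layout most tools default to.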
I need to run elbencho against storage for AI workloads primarily, but there is also a need for the same vendor solution(s) for general storage, including HFC, HPC, and EDA workloads.
I look forward to your thoughts.
-JeffM. (Tell AP and JD I said hello!)