Question about --timelimit #21
Hi,
If I use -w and -r together with --timelimit in a single execution, the time limit applies to the total execution rather than to each phase (write and read) separately. Could such a per-phase option be considered, if it does not already exist?
regards,
George
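For illustration, a minimal sketch of the invocation being described, with a hypothetical target path and sizing flags omitted. Per the report, the single --timelimit caps the write and read phases together, so a write phase that uses the whole budget leaves no time for reads:

```
# One combined run: -w then -r share a single 60-second budget.
# If the write phase exhausts it, the read phase never gets to run.
elbencho -w -r --timelimit 60 /mnt/test/myfile
```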
Thanks for your feedback, George. Just to make sure I understand the request correctly: you are suggesting that when a time limit of (for example) 60 seconds is exceeded in the write phase, the read phase should still run, again with a time limit of 60 seconds, correct? So far my assumption was that you could simply run two separate commands, one for the write phase and one for the read phase, each with its own time limit. That's why I did not add such an option in the past. However, it would be relatively easy to add such an option if it is helpful for you.
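A minimal sketch of that two-command workaround, using the same hypothetical target path as above (sizing flags again omitted):

```
# Write phase with its own 60-second limit.
elbencho -w --timelimit 60 /mnt/test/myfile

# Separate read phase over the same file, again with its own 60-second limit.
elbencho -r --timelimit 60 /mnt/test/myfile
```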
Yes, this is correct about the example. You could give the option a different name to keep full compatibility with the current behavior, if that is a concern. You are right about the two commands, but a single command is easier to use.
Hi George, OK, I will add this for you. It will take a few weeks because I'm currently trying to finalize the new S3 support for elbencho. I will let you know here when it's available.
No worries, it is OK.
@breuner I'm glad I looked here first! I was also wanting this, and was about to start work on it. I did start with separate mkdirs, write, read, and rm invocations, using --treescan in the read and rm phases, and that worked great (see the sketch below). But then I needed to support testing multiple paths, and ran afoul of --treescan's restrictions.

For simplicity, I was thinking the mkdirs phase could keep the current --timelimit semantics, so all subsequent phases could assume a complete directory tree; if the time limit was hit while making directories, the reads and writes were doomed anyway.

Where are you in the execution of this change? If you haven't started, I could take a stab at it. I'll need to jump through a couple of hoops with my company to contribute, but that won't be a blocker.
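For illustration, a sketch of that per-phase sequence under stated assumptions: the -t/-n/-N/-s values and paths are placeholders, -d (create directories) and -F (delete files) are assumed phase flags, and --treescan is assumed here to take no argument and to discover the files that actually exist under the path, per the description above; the exact syntax may differ by elbencho version.

```
# mkdirs phase: run without a time limit so later phases can assume
# a complete directory tree.
elbencho -d -t 4 -n 8 -N 16 /mnt/fs1/bench

# Write phase with its own time limit; it may stop before all files exist.
elbencho -w -t 4 -n 8 -N 16 -s 1m --timelimit 60 /mnt/fs1/bench

# Read and rm phases use --treescan so they only touch files that were
# actually written before the limit hit.
elbencho -r --treescan --timelimit 60 /mnt/fs1/bench
elbencho -F --treescan /mnt/fs1/bench
```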
Hi @dbishop, thanks a lot for bringing this one back to my attention. As you probably guessed correctly from my missing "we have it now" confirmation comment here back in the day, I unfortunately forgot about this one while I was working on the S3 support, so I haven't implemented it yet. (Big shame on me and big sorry to @gmarkomanolis for this!) If you would like to give it a shot, then you're of course very welcome, and I'll be happy to provide any level of support that you might need along the way, including jumping on a call to discuss and such. For starters, you would probably want to go to ProgArgs.h (where all the command line options are defined with corresponding getters) and see from where the …
Hi, sorry I didn't update here earlier, but I ended up sticking with separate write/read invocations of elbencho when targeting multiple filesystems simultaneously. I just added some code to calculate a number of files for the write phase that would cause it to finish close to a targeted number of seconds, without any --timelimit. The read phase uses the same file count, but with --infloop and --timelimit to ensure it runs for a known time. So I didn't make any progress on this feature request, and I don't think I will, as I got myself unblocked another way. Thanks!
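A minimal sketch of that final approach, with hypothetical counts, sizes, and paths; NUM_FILES would be derived from an externally measured per-file write rate, and the -t/-n/-N/-s values are placeholders:

```
# Write phase: a fixed file count chosen so the phase finishes near the
# target duration, with no --timelimit at all.
NUM_FILES=1000   # hypothetical, computed from an estimated write rate
elbencho -w -t 4 -n 1 -N "$NUM_FILES" -s 4m /mnt/fs1/bench /mnt/fs2/bench

# Read phase: same file count, looping endlessly over the files until the
# time limit ends it, so the read phase runs for a known duration.
elbencho -r -t 4 -n 1 -N "$NUM_FILES" -s 4m --infloop --timelimit 300 \
  /mnt/fs1/bench /mnt/fs2/bench
```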