Requirements and Design #11

Loud thoughts about the requirements and the design of this library.
The important parameters for the client are: the number of sender threads (t), the number of channels/connections (c), and the pause between generated requests (p).
Bomb the server
Because p can be 0 in this scenario (generate as many requests as possible), using more than 1 thread could be interesting. The takeaway here is that there is a need to monitor both the request queue (to try to avoid a request sitting in the queue for too long) and the executor (to try to avoid a response sitting in the queue for too long); see the monitoring sketch below.
HTTP/1.1 browser simulation
HTTP/2 browser simulation
Browser simulation. For browser simulation we need the concept of a "browser", i.e. a group of channels to the same server. Because we want a restricted number of channels for each "browser", we need to use a dedicated client (connection pool) per browser.
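A minimal sketch of the queue monitoring mentioned above (the class name QueueLatencyMonitor, the wrapping approach, and the threshold are illustrative assumptions, not this library's API): each queued request records when it was enqueued, and the sender thread reports if it waited too long before being picked up.

```java
import java.util.concurrent.TimeUnit;

// Illustrative only: wrap queued work so we can measure how long a request
// sat in the queue before a sender thread picked it up.
public class QueueLatencyMonitor {
    private final long warnThresholdNanos;

    public QueueLatencyMonitor(long warnThresholdMillis) {
        this.warnThresholdNanos = TimeUnit.MILLISECONDS.toNanos(warnThresholdMillis);
    }

    public Runnable wrap(Runnable sendRequest) {
        long enqueuedAt = System.nanoTime();
        return () -> {
            long waitedNanos = System.nanoTime() - enqueuedAt;
            if (waitedNanos > warnThresholdNanos) {
                // The generator is falling behind: requests are queuing up faster
                // than the sender threads / channels can drain them.
                System.err.printf("request waited %d ms in the queue%n",
                        TimeUnit.NANOSECONDS.toMillis(waitedNanos));
            }
            sendRequest.run();
        };
    }
}
```

The same wrapping idea can be applied on the response side to detect responses sitting too long in the executor queue.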
@sbordet Not sure I understand what you are saying about p=0. Are you saying that in such a state a thread should send as many requests as possible? If so, that is entirely opposite to the design brief for this generator. I specifically requested a generator where the rate at which requests are generated is not a function of any aspect of how they are processed. The generator should be able to be set at a given request rate, and it must generate that rate of requests regardless of how many threads are needed or how much latency is taken to process each request.

Perhaps there can be a monitor to detect long queues or empty thread pools to warn that the generator is above capacity, but it should never wait for a response before sending the next request.

I see only 2 primary configurations: the number of channels and the request rate (either per channel or in total). Thread pool size is an implementation detail.
Note also that, given the above, I'd probably like the request rate to be specified as the total request rate, with the generator doing the maths to work out what the rate for each channel should be.
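As a rough, hypothetical illustration of that idea (FixedRateGenerator and sendOnChannel are made-up names, not the library's API): the generator is driven purely by a timer, the total rate is split evenly across channels, and nothing in the firing schedule depends on response latency.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of an open-loop generator: the firing schedule depends only on the
// configured total rate and the channel count, never on response latency.
public class FixedRateGenerator {
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void start(int channels, double totalRequestsPerSecond) {
        double perChannelRate = totalRequestsPerSecond / channels;            // requests/s per channel
        long periodMicros = Math.max(1, (long) (1_000_000 / perChannelRate)); // gap between requests

        for (int i = 0; i < channels; i++) {
            final int channel = i;
            // Fire at a fixed rate; the task only hands the request off and never
            // waits for a response, so server latency cannot slow the schedule down.
            scheduler.scheduleAtFixedRate(() -> sendOnChannel(channel),
                    0, periodMicros, TimeUnit.MICROSECONDS);
        }
    }

    private void sendOnChannel(int channel) {
        // Placeholder: enqueue the request for the sender threads of this channel.
    }
}
```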
@gregw, p=0 means indeed as many requests as possible. There is a distinction between how many requests you can generate per second, and how many you can actually send (over the network) per second. Consider the case with just 1 channel and HTTP/1.1: network latency and server processing will dictate the max request rate, no matter what you specify on the client.

There is a relation between the request rate and how they are processed. The sender threads, the number of channels, and the pause between request generation (t, c and p) are the parameters that allow you to reduce that relation to a minimum; given the right numbers for those parameters the relation may be absent or very weak, which is what we want, and therefore we need to be able to tune those parameters.

Whether the library will be able to compute those numbers I'm not sure. One use case is where the load tester limits c (browser simulation), and with that fixed the load tester cannot choose an arbitrary p. Computing an optimal t may also be challenging (I've done that in the past and it's difficult to handle spikes and smooth out oscillations that the system may produce).

What I am saying is that I want easy library methods to set t, c and p, because that is what I want to tune. Paired with feedback about the request queuing I will have a clear understanding of what the load generator is capable of and I will know if it's exceeding its capacity.
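Something like the following hypothetical configuration holder would satisfy the "easy library methods to set t, c and p" request; the class name, method names and defaults are assumptions for illustration only, not this library's actual API.

```java
// Hypothetical configuration holder for the three tuning knobs discussed here.
public class LoadConfig {
    private int senderThreads = 1;   // t: threads generating/sending requests
    private int channels = 1;        // c: connections (or streams) to the server
    private long pauseMicros = 0;    // p: pause between generated requests (0 = bomb the server)

    public LoadConfig senderThreads(int t) { this.senderThreads = t; return this; }
    public LoadConfig channels(int c)      { this.channels = c; return this; }
    public LoadConfig pauseMicros(long p)  { this.pauseMicros = p; return this; }

    public int senderThreads() { return senderThreads; }
    public int channels()      { return channels; }
    public long pauseMicros()  { return pauseMicros; }
}
```

For example, a browser-like profile could then be expressed as new LoadConfig().senderThreads(2).channels(6).pauseMicros(10_000), while "bomb the server" would simply use pauseMicros(0).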
Let's have a chat about this next week. I'm very much opposed to the style of load generator you are describing, as there are plenty of them out there already and we do not need our own.

My brief for this load generator was for it to produce load at a specific configured rate, entirely independent of server processing time. The server processing time and error rate are measured to determine if an acceptable QoS is achieved. Of course there will be configurations that are impossible for the generator to achieve.

p=0 tells us very little. It is a measure of latency at a load that is achieved with a dynamically determined request rate that is itself a function of latency. Who knows what the steady state of such a feedback loop means; it may even be a local maximum of the function being optimised.
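To make that feedback-loop point concrete with assumed numbers: with p=0 and one request in flight per channel (the HTTP/1.1 case), a closed-loop generator achieves roughly channels / latency requests per second, so the achieved rate moves whenever latency moves.

```java
// Back-of-the-envelope illustration with assumed numbers, not measured data.
public class ClosedLoopRate {
    public static void main(String[] args) {
        int channels = 10;
        double latencySeconds = 0.050;                    // 50 ms per request/response round trip
        double achievedRate = channels / latencySeconds;  // one request in flight per channel
        System.out.println(achievedRate + " requests/s"); // ~200 req/s, dictated by the server
    }
}
```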
@gregw We are on the same page with respect to producing a load that is independent from the server. The problem is that the load tester cannot always choose an arbitrary rate.

That is my point: we need feedback from the load generator that tells us whether or not it can do what the load tester asked, given the parameters it was configured with. If not, the load tester must be able to change the parameters, which are t, c and p. I don't see any conflict between the goals of this library and having those 3 parameters; reasoning in terms of t, c and p is what the load tester will be doing anyway to tune the load.
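A sketch of that feedback (CapacityFeedback is a made-up name and the 5% tolerance is arbitrary, both assumptions for illustration): count the requests actually sent per sampling window and compare against the configured rate; a persistent gap means the generator cannot do what it was asked with the current t, c and p.

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative capacity check: compare achieved request rate with the target rate.
public class CapacityFeedback {
    private final LongAdder sent = new LongAdder();

    public void onRequestSent() {
        sent.increment();
    }

    /** Call once per sampling window (e.g. every second). */
    public boolean withinCapacity(double targetRatePerSecond, double windowSeconds) {
        double achieved = sent.sumThenReset() / windowSeconds;
        return achieved >= 0.95 * targetRatePerSecond; // 5% tolerance, arbitrary
    }
}
```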
Definitely agree on that.
I think this has been solved.