Hello Members!
I have explained the first question to some extent. Would you please participate and reply to both questions?
An intermittent bug is an invisible bug. This nightmare is hard to observe, yet it happens often enough that it can’t be ignored. You can’t debug it because you can’t find it.
After a long time you may even start to doubt that it exists. It obeys the same laws of logic as everything else; what makes it difficult is that it occurs only under unknown conditions. Try to record the circumstances under which the bug does occur, so that you can work out what variability is involved.
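To make that concrete, here is a minimal Python sketch of what “record the circumstances” can look like. flaky_operation() is only a hypothetical stand-in for whatever action misbehaves, and the fields logged are just examples of the kind of context worth capturing on every run.

```python
import datetime
import json
import platform
import random

def flaky_operation():
    # Placeholder: fails roughly 1 run in 10 to imitate an intermittent bug.
    if random.random() < 0.1:
        raise RuntimeError("intermittent failure")
    return "ok"

def run_and_record(run_id, log_path="intermittent_log.jsonl"):
    record = {
        "run_id": run_id,
        "timestamp": datetime.datetime.now().isoformat(),
        "host": platform.node(),
        "python": platform.python_version(),
    }
    try:
        record["result"] = flaky_operation()
        record["passed"] = True
    except Exception as exc:
        record["result"] = repr(exc)
        record["passed"] = False
    # One JSON line per run, so passing and failing runs can be compared later.
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    for i in range(20):
        print(run_and_record(i))
```

Comparing the failing lines against the passing ones is often the quickest way to spot which condition actually varies.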
Intermittent bugs are bugs that do not show consistent behaviour. If you execute the same test twice on an application, each run may give you a different result, and you might only see the bug again on the third run.
Because of this intermittent behaviour, the bug is never easy to reproduce.
What we can do is write down the exact test steps along with the test data entered, note the final result of the first run and, if possible, attach a screenshot. Do the same for the second run and record its result and screenshot as well. This way we collect a set of results, and based on them we can try to reproduce the bug, for example along the lines of the sketch below.
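A minimal sketch of that idea in Python, assuming nothing about the application itself: the step under test is passed in as a callable (a made-up placeholder is used in the demo at the bottom), and screenshots are only represented here by a filename field.

```python
import collections

def repeat_test(step, test_data, runs=3):
    results = []
    for attempt in range(1, runs + 1):
        try:
            outcome = step(test_data)
        except Exception as exc:
            outcome = f"error: {exc}"
        results.append({
            "attempt": attempt,
            "test_data": test_data,
            "outcome": outcome,
            "screenshot": f"run_{attempt}.png",  # attach the real capture here
        })
    # More than one distinct outcome for identical steps and data means the
    # behaviour is intermittent, and the log shows exactly which runs differed.
    distinct = collections.Counter(r["outcome"] for r in results)
    return results, distinct

if __name__ == "__main__":
    import random
    # Hypothetical application step that occasionally misbehaves.
    step = lambda data: "saved" if random.random() > 0.3 else "validation error"
    runs, summary = repeat_test(step, {"name": "test user", "qty": 2}, runs=5)
    print(summary)
```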
Please feel free to ask if you have any doubts regarding Software Testing & ISTQB.
lol, this again comes back down to what your expected traffic profile is, what the risk is that you are way off on the numbers, and what kind of system you have. If it is a closed system where you can manage the user base, then you are in a better position than with an open system, where the traffic is uncontrolled.
Also, a spike test might be relevant if you have a predicted large number of requests in a short space of time to consider; this would probably be at the peak load, or it would deliberately push the system beyond the peak. Again, it all depends on the SUT. A Black Friday deals website, for example, would need this tested to death nearer the Black Friday event, especially if there are good bargains to be had and the marketing campaign brings people from far and wide.
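For what a spike could look like in practice, here is a small plain-Python sketch that fires a short, sharp burst of requests and looks at the response times. The URL, burst size and timeout are assumptions for illustration only; a real spike test would normally use a proper load tool and the SUT’s predicted numbers.

```python
import concurrent.futures
import time
import urllib.request

TARGET_URL = "http://localhost:8080/deals"   # hypothetical endpoint
BURST_SIZE = 200                             # requests arriving more or less at once

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            status = resp.status
    except Exception:
        status = None
    return status, time.perf_counter() - start

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=BURST_SIZE) as pool:
        results = list(pool.map(one_request, range(BURST_SIZE)))
    ok = [elapsed for status, elapsed in results if status == 200]
    print(f"{len(ok)} OK, {len(results) - len(ok)} failed")
    if ok:
        print(f"slowest successful response: {max(ok):.2f}s")
```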
Another worry may come to light when the data grows to, say, 3-5 years' worth: if the response times slow down at that point, would that bring the contingency headroom down below the peak load level?
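As a purely illustrative calculation (all numbers made up): if the system copes with 130% of peak today, and the extra data volume knocks 25% off its effective throughput, the headroom drops to roughly 130% × 0.75 ≈ 97.5% of peak, i.e. below the peak itself.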
Then again, if the peak load will never be breached and the system performs OK at up to 30% beyond the peak level for a defined period of time, then all might be well and the system has been designed to meet those limits.
Sadly, it all depends on what is acceptable. It sounds like you may be getting close to the vendor’s limits and they won’t want you going higher. It might take you saying “we need to be able to sustain 100% on top of peak for x minutes to pass a stress test”, and then seeing if they start to sweat/cry.
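As a rough sketch of what that kind of stress run could look like as a script: hold roughly twice an assumed peak request rate for a fixed duration and watch error counts and response times. The URL, peak rate, and duration here are made-up values for illustration, not figures from the system being discussed.

```python
import concurrent.futures
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical endpoint
PEAK_RPS = 50                           # assumed peak requests per second
STRESS_RPS = PEAK_RPS * 2               # 100% on top of peak
DURATION_S = 120                        # the "x minutes" of sustained load

def one_request():
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    timings, errors = [], 0
    deadline = time.time() + DURATION_S
    with concurrent.futures.ThreadPoolExecutor(max_workers=STRESS_RPS) as pool:
        while time.time() < deadline:
            tick = time.time()
            futures = [pool.submit(one_request) for _ in range(STRESS_RPS)]
            for f in futures:
                ok, elapsed = f.result()
                timings.append(elapsed)
                errors += 0 if ok else 1
            # Pace the loop so roughly STRESS_RPS requests go out per second.
            time.sleep(max(0.0, 1.0 - (time.time() - tick)))
    print(f"sent {len(timings)} requests, {errors} errors, "
          f"worst response {max(timings):.2f}s")
```

If the error count stays at zero and the worst response time stays inside whatever limit has been agreed for the whole duration, that is the kind of evidence that makes the vendor conversation easier.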
With all of this, it’s all about managing the risk, and not being sat there on go-live day getting hit with a peak load that kills the system and gives you all a headache that could have been avoided.