With the basics of Simulations, Scenarios, Virtual Users, Sessions, Feeders, Checks, Assertions and Reports down – it’s time to think about what to load test and how.
I'll start with a test that tries to mimic the end user experience. That means all the 3rd party JavaScript, CSS, images etc. should be loaded. It does not seem reasonable to say our load test performance was great when none of our users will get a responsive app because of all the things we depend on (though, yes, most of it will likely already be cached by the user). This does increase the complexity of the simulation scripts, as there will be lots of additional resource requests cluttering things up, so for maintainability it is very important to avoid code duplication and use the singleton object functionality Scala provides.
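As a rough sketch of what that reuse can look like (the object and value names here are placeholders, not from the real project), values shared by every simulation can live in a plain Scala singleton object:

object Common {
  // shared across all simulations, defined once
  val baseURL = "https://blah.mwclearning.com"

  // headers most requests send; individual requests can still override them
  val commonHeaders = Map(
    "Accept" -> "application/json, text/plain, */*",
    "Accept-Encoding" -> "gzip, deflate")
}

A simulation then refers to Common.baseURL and .headers(Common.commonHeaders) rather than repeating the literals in every script.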
Using the recorder
As I want to include CDN calls, I tried the recorder’s ‘Generate CA’ functionality, which is supposed to generate certs on the fly for each CN. This would be convenient as I could just trust a locally generated CA and not have to track down and trust all sources. Unfortunately I could not get the recorder to generate its own CA, and when using a local CA generated with openssl I could not feed the CA password to the recorder. I only spent 15 minutes on this before reverting to the default self-signed cert. Reviewing Firefox’s network panel (Firefox menu -> Developer -> Network) shows any blocked sources, which can then be visited directly and trusted with our fake cert. There are some fairly serious security implications to doing this; I personally only use my testing browser (Firefox) with these types of proxy tools and never for normal browsing.
The recorder is very handy for getting the raw code you need into the test script, but it does not produce a complete test. Next up is:
- Dealing with authentication headers – The recorded simulation does not set the header based on the response to the login attempt
- Requests dependent on the previous response – The recorder does not capture this dependency; it only sees the raw outbound requests, so some thought needs to go into parsing responses
- Validating responses
Dealing with authentication headers
The Check API is used for verifying that the response to a request matches expectations and capturing some elements in it.
After half an hour or so of playing around with the Check API, it is behaving as I want, thanks to good, concise documentation.
.exec(http("login-with-creds")
.post("/cm/login")
.headers(headers_14)
.body(RawFileBody("test_user_creds.txt"))
.check(headerRegex("Set-Cookie", "access_token=(.*);Version=*").saveAs("auth_token"))
The “.check” is looking for the header name “Set-Cookie” then extracting the auth token using a regex and finally saving the token as a key called auth_token.
In subsequent requests I need to include a header containing this value, along with some other headers. Instead of listing them out each time, a function makes things much neater:
def authHeader(auth_token: String): Map[String, String] = {
  Map("Authorization" -> "Bearer ".concat(auth_token),
      "Origin" -> baseURL)
}
//...
http("list_irs")
  .get(uri1 + "/information-requests")
  .headers(authHeader("${auth_token}")) // providing the saved key value as a string arg
It's also worth noting that, to make sure all this was working as expected, I modified /conf/logback.xml to output all HTTP request/response data to stdout.
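For reference, in the Gatling 2.x bundle this amounts to uncommenting (or adding) a logger entry along these lines in logback.xml (TRACE logs every request and response, DEBUG only the failed ones) – worth checking against the comments already in your copy of the file:

<logger name="io.gatling.http.ahc" level="TRACE" />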
Requests dependent on the previous response
With many modern applications, the behaviour of the GUI is dictated by responses from an API. For example, when a user logs in, the GUI requests a json file listing all (max 50) of the user's open requests. When the GUI receives this, the requests are rendered. In many cases this rendering process involves many more HTTP requests, and those depend on the time and the state of the user's data, which may vary significantly. So… if we are trying to imitate the end user experience, instead of requesting the render info for the same open requests every time, we should parse the json response and adjust subsequent requests accordingly. Thankfully Gatling allows for the use of JsonPath. I got stuck trying to get all of the id values out of a json response and then create requests for each of them: I had incorrectly assumed that the ‘random’ function provided by the Gatling EL could be called on a vector, which led me to believe the vector was ‘undefined’ as per the error message. The vector was in fact as expected, which became clear once I printed it.
// grabs all id values from the response body and puts them in a vector
// accessible via "${answer_ids}" or session.get("answer_ids")
http("list_irs")
  .get(uri1 + "/information-requests")
  .headers(authHeader("${auth_token}"))
  .check(status.is(200), jsonPath("$..id").findAll.saveAs("answer_ids"))
//....
// prints all values in the answer_ids vector
.exec(session => {
  val maybeId = session.get("answer_ids").asOption[String]
  println(maybeId.getOrElse("no ids found"))
  session
})
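As an aside, if the goal were to hit just one randomly chosen id rather than all of them, the selection can be done in a session function instead of EL (a sketch only – the one_answer name and random_id key are made up here, and it assumes the vector is non-empty):

// pick one id at random and store it back into the session
.exec(session => {
  val ids = session("answer_ids").as[Vector[String]]
  session.set("random_id", ids(scala.util.Random.nextInt(ids.size)))
})
.exec(http("one_answer")
  .get(uri1 + "/information-requests/${random_id}/stores/answers")
  .headers(authHeader("${auth_token}")))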
To run queries with all of the values pulled out of the json response we can use the foreach component. Again I got stuck for a little while here: I was putting the foreach component within an exec function, when (as below) it should sit outside of an exec and reference a chain that contains an exec.
val answer_chain = exec(http("an_answer")
  .get(uri1 + "/information-requests/${item}/stores/answers")
  .headers(authHeader("${auth_token}"))
  .check(status.is(200)))
//...
val scn = scenario("BasicLogin")
  //...
  .exec(http("list_irs")
    .get(uri1 + "/information-requests")
    .headers(authHeader("${auth_token}"))
    .check(status.is(200), jsonPath("$..id").findAll.saveAs("answer_ids")))
  .foreach("${answer_ids}", "item") { answer_chain }
Validating responses
What do we care about in responses?
- HTTP response status and headers (generally expecting 200 OK)
- HTTP response body contents – we can define expectations based on understanding of app behaviour
- Response time – we may want to define responses taking more than 2000ms as failures (cue application performance sales pitch)
Checking the response status is quite simple and can be seen explicitly above in .check(status.is(200)). In fact, there is no need for 200 checks to be explicit, as “A status check is automatically added to a request when you don’t specify one. It checks that the HTTP response has a 2XX or 304 status code.” — Gatling checks documentation.
HTTP response body content checks are valuable for ensuring the app behaves as expected. They also require a lot of maintenance, so it is important to implement them with code reuse wherever possible. Gatling is great for this as we can use Scala and all the power that comes with it (i.e. reusable objects and functions across all tests), as sketched below.
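For example (a minimal sketch with made-up names rather than code from this project), commonly used body checks can be defined once in an object and pulled into any simulation, so a change to the API contract only has to be handled in one place:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

object CommonChecks {
  // every answer returned by the API is expected to carry a status field
  val answerHasStatus = jsonPath("$..status").exists

  // the login response must hand back an access token cookie
  val hasAccessToken = headerRegex("Set-Cookie", "access_token=(.*);Version=*").exists
}

A request then just adds .check(status.is(200), CommonChecks.answerHasStatus) and every test picks up changes to that expectation automatically.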
Next up are response time checks. Note that these response times are specific to the HTTP layer and do not guarantee a good end user experience. JavaScript execution and other rendering, along with blocking requests, mean that performance testing at the HTTP layer is incomplete performance testing (though it is the meat and potatoes).
Gatling provides the Assertions API to conduct checks globally (on all requests), with numerous scopes, statistics and conditions to choose from. For specific operations, responseTimeInMillis and latencyInMillis are provided by Gatling – responseTimeInMillis includes the time it takes to fully send the request and fully receive the response (from the test host). As a default I use responseTimeInMillis as it has slightly higher coverage as a test.
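As a sketch of what a per-operation check looks like (the 2000 ms threshold is arbitrary, and this assumes a Gatling 2.x release where lessThan is available as a check validator):

// fail this request outright if it takes longer than 2000 ms to complete
http("list_irs")
  .get(uri1 + "/information-requests")
  .headers(authHeader("${auth_token}"))
  .check(status.is(200), responseTimeInMillis.lessThan(2000))

The Assertions API can express something similar per request name, e.g. details("list_irs").responseTime.max.lessThan(2000), alongside the global assertions shown below.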
These three verifications/tests can be seen here:
package mwc_gatling
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.jdbc.Predef._
class BasicLogin extends Simulation {

  val baseURL = "https://blah.mwclearning.com"

  val httpProtocol = http
    .baseURL(baseURL)
    .acceptHeader("application/json, text/plain, */*")
    .acceptEncodingHeader("gzip, deflate")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:43.0) Gecko/20100101 Firefox/43.0")

  def authHeader(auth_token: String): Map[String, String] = {
    Map("Authorization" -> "Bearer ".concat(auth_token),
        "Origin" -> baseURL)
  }

  val answer_chain = exec(http("an_answer")
    .get(uri1 + "/information-requests/${item}/stores/answers")
    .headers(authHeader("${auth_token}"))
    .check(status.is(200), jsonPath("$..status")))

  val scn = scenario("BasicLogin")
    //... bunch of get requests for JS, CSS etc (get_web_app_deps)
    .exec(http("login-with-creds")
      .post("/cm/login")
      .body(RawFileBody("test_user_creds.txt"))
      .check(headerRegex("Set-Cookie", "access_token=(.*);Version=*").saveAs("auth_token")))
    //... another bunch of gets for post-auth deps
    .exec(http("list_irs")
      .get(uri1 + "/information-requests")
      .headers(authHeader("${auth_token}"))
      .check(status.is(200), jsonPath("$..id").findAll.saveAs("answer_ids")))
    //... now that we have a vector full of ids we can request those resources
    .foreach("${answer_ids}", "item") { answer_chain }

  //... finally set the simulation params and assertions
  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol).assertions(
    global.responseTime.max.lessThan(2000),
    global.successfulRequests.percent.greaterThan(99))
}
That’s about all I need to get started with Gatling! The next steps are:
- extending coverage (more tests!)
- putting processes in place to notify and act on identified issues
- refining tests to provide more information about the likely problem domain
- making a modular and maintainable test library that can be updated in one place to deal with changes to the app
- aggregating results for trending and correlation with changes
- spinning up and down environments specifically for load testing
- Jenkins integration