Timotej Kovacka

Software engineer in London

K6: Our Eleventh-Hour Performance Testing Hero

The Ticking Clock

Picture this: It’s a Wednesday afternoon, and I’m sitting in a meeting room with Jake, my senior colleague and mentor. He’s just dropped a bombshell. Our new product, the one we’ve been pouring our hearts into for months, is weeks away from launch. The catch? We need comprehensive performance tests, and we need them yesterday.

“We need to ensure it can handle 100,000 concurrent users without breaking a sweat,” says Jake, running his hand through his salt-and-pepper hair. His expression is a mix of concern and determination.

I nod, trying to look confident while my mind races. Our flagship product uses Gatling for performance testing, and while it’s solid, I’ve heard Jake muttering about its limitations for weeks now. My own limited exposure to it hasn’t filled me with confidence either.

The Quest Begins

Back at our shared workspace, I turn to Jake. “So, Gatling?” I venture, already knowing the answer from his grimace.

Jake shakes his head. “It’s served us well on the flagship, but for this? With our timeline? We’d be setting ourselves up for failure.” He leans back in his chair, thinking. “We need something faster, more flexible. But what?”

As if on cue, Mike, our DevOps wizard, pokes his head around the corner. “Did someone say ‘faster’ and ‘more flexible’? Sounds like you need to check out K6.”

Jake and I exchange glances. “K6?” I ask.

Mike grins, that familiar glint of excitement in his eyes. “Trust me, it’ll change your life. Or at least your performance testing. Same thing, right?”

Enter K6: Our Dark Horse

After a frenzied hour of Googling, YouTube tutorials, and Mike’s enthusiastic explanations, we decide to give K6 a shot. It promises speed, flexibility, and a syntax that won’t make us want to tear our hair out. Plus, it plays nice with JavaScript, which means we can leverage our existing skills.
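That JavaScript familiarity was the real selling point: a minimal k6 script is plain JS run by the k6 binary (not Node), so our existing skills transferred immediately. A hedged sketch against a hypothetical health endpoint (URL and check names are assumptions, not our real service):

```javascript
import http from 'k6/http'
import { check, sleep } from 'k6'

export const options = {
  vus: 10,          // 10 virtual users
  duration: '30s',  // for 30 seconds
}

export default function () {
  // Hypothetical endpoint; swap in your own service URL
  const res = http.get('https://our-awesome-new-product.com/health')
  check(res, { 'status is 200': (r) => r.status === 200 })
  sleep(1) // think time between iterations
}
```

Run it with `k6 run script.js` and you get a full metrics summary out of the box.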

“Alright,” Jake says, a hint of his usual confidence returning. “Let’s do this. We’ve got 48 hours to get a proof of concept up and running. If it works, great. If not…” he trails off, and I finish for him:

“We’ll be brushing up our resumes?”

He chuckles. “Something like that. Let’s hope it doesn’t come to that.”

The K6 Revelation

Fast forward 24 hours (fueled by pizza, energy drinks, and the fear of disappointing our project lead), and we’ve got our first K6 test script up and running. It’s a thing of beauty:

import { browser } from 'k6/experimental/browser'

export const options = {
  scenarios: {
    ui: {
      executor: 'shared-iterations',
      options: {
        browser: {
          type: 'chromium',
        },
      },
    },
  },
  thresholds: {
    // Core Web Vitals: layout shift, input delay, largest contentful paint
    browser_web_vital_cls: ['p(90)<0.1'],
    browser_web_vital_fid: ['p(75)<100'],
    browser_web_vital_lcp: ['p(75)<2500'],
  },
}

export default async function () {
  const page = browser.newPage()
  try {
    await page.goto('https://our-awesome-new-product.com')

    // Log in: await clicks so each step completes before the next
    await page.locator('#login-button').click()
    page.locator('#username').type('testuser')
    page.locator('#password').type('testpass')
    await page.locator('#submit-login').click()
    await page.waitForSelector('#dashboard', { state: 'visible' })

    // Simulate user interactions
    await page.locator('#create-new-item').click()
    page.locator('#item-name').type('Performance Test Item')
    await page.locator('#save-item').click()
    await page.waitForSelector('#success-message', { state: 'visible' })
  } finally {
    page.close()
  }
}

“It’s alive!” I shout as we run our first test. Jake leans in, his eyes glued to the screen. The results start pouring in:

data_received..................: 2.4 MB 80 kB/s
data_sent......................: 142 kB 4.7 kB/s
http_req_blocked...............: avg=1.01ms min=0s med=6µs max=31.51ms p(90)=4.42ms p(95)=5.77ms
http_req_connecting............: avg=522.72µs min=0s med=0s max=16.74ms p(90)=2.34ms p(95)=3.37ms
http_req_duration..............: avg=233.49ms min=188.63ms med=226.13ms max=419.54ms p(90)=292.32ms p(95)=331.95ms
http_req_failed................: 0.00% ✓ 0 ✗ 300
http_req_receiving.............: avg=13.43ms min=33.9µs med=251.1µs max=192.48ms p(90)=56.33ms p(95)=74.69ms
http_req_sending...............: avg=48.39µs min=12.5µs med=41.8µs max=385.7µs p(90)=75.87µs p(95)=106.64µs
http_req_tls_handshaking.......: avg=434.71µs min=0s med=0s max=14.3ms p(90)=1.77ms p(95)=2.8ms
http_req_waiting...............: avg=220ms min=188.37ms med=213.89ms max=360.25ms p(90)=262.24ms p(95)=291.76ms
http_reqs......................: 300 9.991908/s
iteration_duration.............: avg=1.96s min=1.57s med=1.9s max=3s p(90)=2.39s p(95)=2.57s
iterations.....................: 30 0.999191/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1

Jake lets out a low whistle. “Well, I’ll be damned,” he murmurs, a slow smile spreading across his face. “This might actually work.”

Mike, who’s been hovering nearby, pumps his fist in the air. “Told you! K6 for the win!”

It’s not perfect, but it’s a start, and it’s more than we had 24 hours ago.

Scaling New Heights (and Hitting Some Walls)

With our proof of concept in hand, Jake and I dive into scaling up our tests. Mike’s never far away, always ready with a suggestion or a quick fix when we hit a snag. We quickly realize that K6’s cloud execution and distributed testing capabilities are going to be crucial in our quest to hit that 100,000 concurrent user target.
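k6 handles ramp-up natively through scenario executors, and distributed runs come via k6 Cloud or the k6-operator on Kubernetes. A sketch of the kind of ramp profile we used (the stage targets here are illustrative, not our actual numbers):

```javascript
export const options = {
  scenarios: {
    load: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 1000 }, // ramp up
        { duration: '5m', target: 1000 }, // hold steady
        { duration: '1m', target: 0 },    // ramp down
      ],
    },
  },
}
```

With the k6-operator, this same script runs across many pods, each runner taking a slice of the total VU count.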

We start tweaking our Kubernetes setup, and before long, we’re running tests that make our previous efforts look like a warm-up:

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-browser-test
spec:
  parallelism: 50
  script:
    configMap:
      name: performance-test-script
      file: test.js
  runner:
    resources:
      requests:
        cpu: '6'
        memory: 12Gi
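Before the TestRun can execute, the script itself has to reach the cluster as the ConfigMap it references. Assuming the manifest above is saved as `k6-testrun.yaml` (the filename is an assumption), the deployment looks roughly like this:

```shell
# Package the k6 script as the ConfigMap the TestRun references
kubectl create configmap performance-test-script --from-file=test.js

# Kick off the distributed run (requires the k6-operator in the cluster)
kubectl apply -f k6-testrun.yaml

# Watch the 50 runner pods spin up
kubectl get pods -w
```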

As we watch the tests run, scaling up to thousands of virtual users, I can’t help but feel a mix of excitement and trepidation. Jake, usually the picture of calm, is on the edge of his seat. We’re pushing our new product to its limits, uncovering potential issues before they can impact real users.

But reality has a way of tempering expectations. As we scale up, we hit a wall. With 50 pods running on solid machines (each with 12Gi of memory and 6 CPU cores), we manage to reach 3,000 concurrent users with browser-based tests. It’s a far cry from our 100,000 user goal, but it’s still a significant achievement given our timeline and resources.
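The size of that gap is easier to see with back-of-envelope math, assuming roughly linear scaling (which real systems rarely grant):

```javascript
// Capacity math from our run: 50 pods carried 3,000 browser VUs
const pods = 50
const usersReached = 3000
const target = 100000

const usersPerPod = usersReached / pods            // 60 browser VUs per pod
const podsNeeded = Math.ceil(target / usersPerPod) // pods for 100k, if linear
const cpuCores = podsNeeded * 6                    // at 6 CPUs per pod
const memoryGi = podsNeeded * 12                   // at 12Gi per pod

console.log({ usersPerPod, podsNeeded, cpuCores, memoryGi })
// → { usersPerPod: 60, podsNeeded: 1667, cpuCores: 10002, memoryGi: 20004 }
```

Roughly 1,667 pods and ten thousand CPU cores just for load generation. That's why full-browser tests top out early, and why protocol-level tests are the usual route to six-figure VU counts.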

“Well,” Jake says, leaning back in his chair, “looks like we’ve found the limits of our current setup.”

I nod, a mix of disappointment and determination. “But 3,000 is nothing to sneeze at. It’s a solid start.”

Mike, who’s been monitoring the tests with us, chimes in. “You know, there might be ways to optimize this further. We could look into headless browser testing, or even scriptless load testing for some scenarios. But that’s probably a project for another day.”
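For what it's worth, k6's browser module can already run without a visible browser window via an environment variable, and protocol-level VUs can be layered in from the CLI. A sketch (flag values and filenames are assumptions):

```shell
# Run browser tests headless (the default in recent k6 versions)
K6_BROWSER_HEADLESS=true k6 run test.js

# Mix in lightweight protocol-level load alongside a smaller browser scenario
k6 run --vus 500 --duration 5m protocol-test.js
```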

Jake and I exchange glances. We both know we’ve only scratched the surface of what K6 can do, and there’s a lot more to explore.

“Looks like we’ve got material for a few more late nights,” Jake says with a wry smile, as we start scribbling down our new set of challenges on the whiteboard.

Mike adds, “And definitely a few more blog posts. The community would love to hear about your browser testing at this scale!”

The Road Ahead

As I write this, we’re still digesting the results of our scaling efforts. While we didn’t hit our moonshot goal of 100,000 users, K6 has proven to be a game-changer, allowing us to iterate quickly and scale beyond what we thought possible in such a short time frame.

Jake, who was initially skeptical, is now K6’s biggest champion. “In all my years of performance testing,” he told me yesterday, “I’ve never seen a tool this powerful and easy to use. And the fact that we got this far with browser-based tests? Impressive.”

And Mike? Well, he’s already sketching out plans for how we might push our testing even further. “Next time,” he says with a glint in his eye, “we’ll shoot for the stars.”

To all my fellow devs out there facing similar challenges, especially those on small, nimble teams like ours, I say this: don’t be afraid to try something new, even when the pressure’s on. Sometimes, the right tool can turn an impossible task into an exciting opportunity – even if you don’t quite reach your stretch goals on the first try.

As for us, we’re already planning our next steps with K6. How can we optimize our tests to handle more users? What other features of K6 can we leverage? And most importantly, how can we translate these impressive numbers into real-world performance improvements for our users?

Stay curious, keep innovating, and remember: when in doubt, listen to your DevOps guy. They might just save your bacon.

Happy testing, folks! And stay tuned – something tells me our journey with K6 is just beginning.