Why I Broke the #SQLpass Rules (And How It Worked)

I like trying new ways of delivering presentations, pushing the boundaries of what’s normal at conferences. Over the last couple of years, I’ve tried a few different things.

This year at the PASS Summit in Charlotte, I wanted to try another new experiment – simulating what a DBA’s life is really like, onstage. I brought attendees in to learn about performance troubleshooting basics, just like they’d watch a “normal” session at home, but then I interrupted them and took them completely off track. My second presentation slide is always my About-Me slide, and while I explained a little about myself, a little Outlook toast popped up:

Ha ha, that dumb Brent left his Outlook open.

I pretended not to notice, and it had exactly the effect I wanted. A few chuckles rippled through the audience and people pointed at the screen. “Ha ha, Brent made a rookie mistake and left his Outlook open.” Then, a few seconds later, as I continued to blissfully elaborate on my career, another toast popped up:

Oh, wait – is he pranking us?

I turned around, looked at the screen, and sighed. I talked about what it’s like to be a DBA, to constantly get interrupted by people who want you to stop and fix things. I asked the audience what they do when emails like these arrive, and we built a slide together about how to do performance troubleshooting.

And then I told them they were all wrong, because there was a new sheriff in town – my brand-new sp_AskBrent™, an easier way to answer those emails.

After a few minutes of demos, I switched back into the slide deck to talk about the internals, and wouldn’t you know it, I got another email:

Dang it, they want help in production again?

I used that incoming email to talk about how to schedule the proc with a SQL Server Agent job, and then query past data with the @AsOf parameter. A few more slides and popups later, I demoed StackExchange’s new open source Opserver, a dashboard for real-time SQL Server performance troubleshooting.
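That scheduling-plus-history pattern can be sketched in T-SQL. The @AsOf parameter comes straight from the session; the output-table parameter names below are my assumptions about how the proc logs its history, so check the proc’s own documentation before relying on them:

```sql
-- Hedged sketch, not gospel: the output-table parameter names are
-- assumptions; @AsOf is the parameter described in the session.

-- Step 1: in a SQL Server Agent job step, log results on a schedule
-- (say, every 5 minutes) to a history table:
EXEC dbo.sp_AskBrent
    @OutputDatabaseName = 'DBAtools',
    @OutputSchemaName   = 'dbo',
    @OutputTableName    = 'AskBrentResults';

-- Step 2: when someone asks "what happened at 3:15 PM?", replay the
-- logged data from that point in time instead of guessing:
EXEC dbo.sp_AskBrent
    @OutputDatabaseName = 'DBAtools',
    @OutputSchemaName   = 'dbo',
    @OutputTableName    = 'AskBrentResults',
    @AsOf               = '2013-10-16 15:15';
```

The payoff is that the Agent job does the boring collection work, and the DBA only runs the @AsOf query when one of those “help, production is slow” emails shows up.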

The Rule I Broke and Why

PASS presenters are supposed to upload their slide decks ahead of time for open criticism. (Wait, did I word that right? I’m not too good at wording that kind of thing.) I uploaded a nearly-empty slide deck – a title slide, my bio, and the PASS-required slides. See, if you sit through 15 sessions at PASS, you should see the same slides 15 times explaining that PASS has a bunch of resources, including what appears to be a new pregnancy test:

The Session Recordings tested positive, apparently

The reviewers pointed out that my slide deck was completely devoid of content. I explained (repeatedly, lying each time) that my session would be all demos, and eventually they bought it. I feel bad for lying to the volunteers, because they’re doing the thankless, Herculean, and admirable task of trying to improve your Summit experience. They don’t want you to see crappy sessions.

I broke that rule because I wanted to surprise you with the Outlook toast popups and the two new goodies. Neither of them would be publicly available ahead of time – I unveiled sp_AskBrent™ onstage at PASS, and the Stack guys unveiled Opserver at the Velocity conference. It was a really fun week to be a SQL Server admin.

How The Evaluation Scores Turned Out

Spoiler: I didn’t make the Top 10 this time around. The rest of this post isn’t a justification of my lower scores – I really want to give you a peek at what it’s like to craft a presentation, and how I study and interpret feedback to keep raising my game.

I knew going in that I was going to surprise people with something slightly different (and hopefully better) than what they expected from the abstract. That can be a real kiss of death for your eval scores, and I knew I’d be in trouble on these three questions in particular. (I’m leaving out the comments that were all “BRENT IS A PRESENTING GOD” because you knew that already.)

How would you rate the accuracy of the session title, description, and experience level to the session presented?

Totally fair criticisms – I pulled a bait-and-switch, designing an abstract that made it sound like I was going to teach attendees the old, hard way of performance tuning, and instead I gave them an easy button. Some people want the old, hard way, and I totally knew they’d ding me. I’m comfortable with that.

How would you rate the quality of the presentation materials?

Exactly what I expected – I knew that the kinds of attendees who want to take the slides home and re-present my material weren’t going to like the session. It’s really hard to mimic what I do, and my slide decks have always had that challenge. I’m comfortable with that feedback too. This session was more of a performance than a book reading, and the presentation materials weren’t the point.

This is the tough part about working with a single set of eval questions standardized across an entire conference – when you want to break the rules, you’re going to pay the price on eval scores. I really wanted to make the Top 10 again this year, but I was afraid I wouldn’t because of the two questions above. Sure enough, I didn’t – I’m somewhere in the 30s-40s, depending on how they filter out low-response-count surveys. As a presenter, I judged this session’s success by the comments, and they were all good, so I’m happy there. Last question, and I knew the bait-and-switch would be particularly challenging here:

Did you learn what you expected from this session?

Attendees should have answered this one “Hell no,” but they were kind to me. I appreciated that, because really, the answer should have been no. This session was a real bait-and-switch that showed them a shortcut around the old way. Here are a couple of examples of how satisfied attendees can end up giving lower scores on those types of questions:

Eval 1
Eval 2

Note that “Did you learn what you expected” is on a 1-3 scale, with 1 being the best. Both attendees dinged me (rightfully) on the session title and description, and the second attendee gave me the worst possible score on “did you learn what you expected.” That’s absolutely and completely fair – it’s how I’d rate me too.

Would I Change Anything About the Session?

I talked to a few attendees privately afterwards and grilled them in detail about the session. I learned that I should mimic a handful of real-world troubleshooting scenarios – have a few of the emails pop up, and for each scenario, troubleshoot it down to a runaway query, outdated statistics, underpowered TempDB drives, etc – so people understand even more about how to use sp_AskBrent™. Genius – so I’ve adapted the session to use those additional scenarios.

I wouldn’t have changed the bait-and-switch and the Outlook-popup-surprise at all though – I know it penalized my eval scores, but the total effect was completely worth it.

I can’t always use that technique, though, just like I can’t always build all-new scripts for each presentation season, nor can I hand out paper demos every time. I’ve gotta know what works best for my style, the room size, and the subject matter I’m conquering. For example, at this week’s SQLRally pre-cons on hardware, storage, and virtualization, my slides are 100% bullet-fests because I want the attendees to take the ~500 slides home and use them for reference material. But the very next morning, for a 1-hour session on performance troubleshooting, I’m doing the sp_AskBrent™ surprise session with email popups again.

And besides, people who loved this session will be exactly the kind of people who love our training.

Exit mobile version