How I Use a Mac but Work on SQL Server

Kind of a convoluted title, I know, but I get questions about it every now and then.

Chris and I had a good conversation on Twitter, and I figured it’s time to post an updated version of how I work.

I have a “fake job” – I’m a consultant.

If you’re a full time employee somewhere, you probably manage the same servers every day. Your main applications probably consist of Outlook, SSMS, a team chat app, a monitoring tool, and a web browser, all open to the same stuff all day, just alt-tabbing around between different windows.

On the road in the Manchester airport

My job is a little different because:

  • I’m not the primary line of support for any SQL Servers
  • When I’m looking at SQL Server data, it’s through the lens of custom apps Richie wrote
  • I jump around to a different client every 3 days
  • A lot of my time is spent building & delivering training material

So there probably isn’t going to be a lot of actionable info in this post for you, but hey, y’all keep asking, so I’ll explain, hahaha.

My favorite thing: web apps.

Whenever practical, I try to use apps in a browser tab rather than a downloadable executable. Jeremiah Peschka motivated me to try this – years ago, he encouraged me to use GMail in a browser rather than Outlook or Apple Mail. I hated it at first, but now I adore it.

When I open Chrome, these tabs show up:

  • Email & calendar: GMail – try to learn one new keyboard shortcut a day, and you’ll be an unstoppable ninja in a month.
  • Tasks: RememberTheMilk.com – I still practice Inbox Zero with this approach. It involves a lot of replies that start with, “This sounds like a really interesting discussion, but I’m slammed right now, so here’s who you should talk to instead…”
  • Reading blogs: Feedly – and here are my subscriptions if you wanna follow along.
  • Amazon Music – even though I have a ton of music stored locally, I’ve been gravitating toward this because it’s included with Prime, and the web UI is pretty good.
  • Power BI – just last week, I was able to switch from mainly using Power BI Desktop in a Windows VM to the cloud-based version. I still have to open the desktop app to edit the report I use a lot, but I can consume the data and give client advice via a browser instead. (I was really, really happy the day I pulled that off.)
  • WebEx – for client work. Our SQL Critical Care® is about mentoring, so we’re walking people through their own servers, showing them what to look at while we investigate the root cause together. We don’t get VPN access or anything like that, so it keeps the client work simple and fast.

And then I try to use web services for as much as possible, like Expensify, Quickbooks, etc. If my laptop bites the dust when I’m on the road, or if I need to get a few minutes of work in on vacation, then any web browser will get me most of the way there.

Web app not good enough?
Then a Mac OS X app.

Here’s my dock with my most common apps:

From left to right:

  • Finder – the Mac OS X equivalent of Windows Explorer.
  • Chrome
  • VMware Fusion – to run virtual machines. I do a lot of work in the cloud, but 2 local VMs get heavy use: SQL2017 to build & show demos, and SQL2019 for R&D work. In very rare cases where a client wants me to VPN into their environment, I’ll build a separate VM for each client to avoid VPN client hassles. (Why VMware and not Parallels? I just started with it because I used to be a VMware admin, and it works fine.)
  • PowerPoint & Excel – yeah, technically there are online equivalents of these, but I haven’t been satisfied with them, especially with the amount of training classes I teach.
  • TweetBot – whenever I hear people complaining about ads in their Twitter feed, or out-of-order tweets, or likes showing up, I just shake my head. Pay $10 and make that garbage go away. Plus, powerful filters & muting rules keep you blissfully ignorant of the rants.
  • Slack – although now that I think about it, I might be able to switch over to the web version of that now.
  • Remote Desktop – because a ton of my work is in the cloud, and the official Microsoft RDP client makes it easy to move files around.
  • Postico – like SSMS for PostgreSQL.
  • TextMate – my favorite text editor. I try to write my T-SQL queries from scratch here, without IntelliSense, just to see if I can, hahaha.
  • Github – for source control of the First Responder Kit, SQL ConstantCare®, and internal apps.
  • Downloads folder – a shortcut, because when you work in GMail, you end up downloading a lot of attachments and editing them locally, like signing PDFs.
  • Trash can – I have no idea why I left that icon in the dock, now that I see it.

No other choice? Windows VM.

I don’t dislike Windows – it’s totally fine – I just don’t want to open a VM if I don’t have to. These days, I only start a VM in 2 scenarios:

When I need to build or deliver SQL Server demos – I fire up a VM with SQL Server 2017 or 2019 running. My training classes generally involve performance tuning servers, indexes, and queries. In theory, I could run SQL Server in a Docker container and run queries in Azure Data Studio. In practice, ADS’s execution plan experience isn’t quite there yet, and I want the students to see the same user interface they’re used to using every day (SSMS).

When I need to edit Power BI reports – despite thousands of votes, Microsoft doesn’t plan on bringing Power BI Desktop to the Mac. It’s far and away the top-voted not-planned item, and if it hadn’t been shelved as not-planned, it’d still be on the leaderboard for the top requested features overall:

Closed as “works on my Windows machine”

So I have to fire up a Windows VM when I want to edit Power BI files.

This would be different if I had a different job.

If I was a remote DBA contractor, like if I had to regularly jump in and fix broken Agent jobs, then I’d probably have a lot more VMs. I’d still aim for one VM per client just because VPN software can be so terrible. I wouldn’t want one client’s VPN update to hose someone else’s connection.

If I was a full time DBA for a company with only a handful of production servers, or a development DBA doing performance tuning work on just a handful of applications, I’d still have a Mac as my primary desktop, but I’d use a jump box to run SSMS and SentryOne Plan Explorer.

If I was a full time DBA with dozens or hundreds of SQL Servers, I’d probably switch back to Windows and focus on automating my work with PowerShell.

Building an Incoming Flights Display

We live on the flight path to San Diego airport, and we like watching planes as they go by.

One afternoon, we stumbled into the bar Nolita Hall. It’s also on the flight path, with skylights so you can watch the planes as they fly overhead, and a big split-flap display behind the bar whose bottom 3 lines show the next incoming flight:

We talked to the staff and found out that their custom split-flap display was made by Oat Foundry. There’s a similar consumer equivalent called Vestaboard, but there are two little problems with that: it’s a couple thousand bucks, and it’s not coming out until fall 2019.

Yeah, so, no.

So we built a low-budget version instead.

FlightAware.com is a flight monitoring site with a really simple API. You can call the AirportBoards method, filter for a specific airport and either inbound or departing flights, and get back a lot of data about the next 15 flights. At the $25/mo tier, you can call it about every 20 minutes, and at the $100/mo tier, about every 2 minutes. (You could also get fancy and do different refresh rates depending on the time of day.)

To mimic Oat Foundry’s beautiful split-flap display, I found Flapper, a jQuery plugin.

My days of web development are over, so once I put the list of tools together, I hired David R. on Upwork to build a PHP page that would call FlightAware’s API, then show the next 3 incoming flights with Flapper.

This is just a picture – click on it to see the actual animation:

Chucka-chucka-chucka sound not included (yet)

Next, to display it on the wall, I got a SmartIdea projector – a phone-sized device with Android 7 (Nougat) and WiFi, so I could just open a web browser and let the page refresh itself. The product listing says it’s 1080p, but it’s only 480p – it just accepts 1080p inputs if you want to get ambitious. It runs off a 5W USB power supply and has a built-in tripod socket at the bottom. It also has automatic keystone correction, which means it doesn’t have to be centered in front of the wall (or in our case, ceiling) where you want to project.

Incoming flights projected on the ceiling

The SmartIdea projector isn’t bright, and our apartment has floor-to-ceiling windows. Fortunately, we usually leave the blinds closed behind the TV, and we wanted to project above the TV. It works well enough here, but the instant we open the blinds, the projector’s display disappears. It just can’t compete with sunlight, especially when the projector and surface are more than several feet apart.

It’s great for this use though – just a subtle listing of flights on the ceiling:

Flight details

The projector also has a remote, and I’m so happy that the up/down buttons scroll Chrome too! So I can scroll down to see what flights are coming up later.

The projector has one serious drawback (apart from the lack of brightness): a slow CPU. The split-flap display animations look amazing in your desktop’s browser, but they’re dead slow on the projector when it’s animating 15 flights on a page. It takes a good 30-40 seconds for it to finish flapping through the alphabet! I gotta tweak those settings.

For the code, here’s the incoming-flights Github repo.

What I’d like to add in vNext

San Diego is busy – really busy, as you can see in FlightAware’s live map of arrivals and departures. It’s the busiest single-runway airport in the US, which means all flights are coming in on the exact same approach path all day. During peak times, we see a plane go by our windows every 2 minutes.

Plus, air traffic controllers frequently reroute planes, having them do a quick circle to get into a different spot in the lineup. You can’t have a big, heavy jumbo jet flying right up the tail of a tiny Cessna and running it over.

The approach lineup shuffles around from time to time – which means if we’re calling FlightAware’s API every 10 minutes, it can get out of date. The answer isn’t to just blindly call their API more frequently, because:

  • Bronze API: $25/mo for 2,500 API calls
    (about 3 per hour, or every 20 minutes)
  • Silver API: $100/mo for 20,000 API calls
    (about 27 per hour, or every 2 minutes)

I’d really rather not pay $100/mo for this. (Especially when Erika points out, “We could just have it show the FlightAware KSAN page for free,” but that doesn’t look nearly as cool.)

So in vNext, I want to disconnect the API calls from the page refreshes. It’ll probably look like a Lambda function that runs every minute and decides whether to call the FlightAware API. I don’t want to call it from sunset to sunrise, for example, and I wanna call it less often outside of our peak viewing hours, or when there’s low traffic. Then, write the FlightAware API response to DynamoDB (or maybe ElastiCache).

That way, the web page can refresh every minute, and just fetch the most recent response from DynamoDB without hitting the expensive FlightAware API every time. (Pricing-wise, all of that would fit into AWS’s free tier, which is nice – so my only cost is the labor of writing the code, and the FlightAware $25/mo price.)

My Home Office Setup, 2018

We moved to San Diego and I refreshed my computer hardware, so it’s time for another update in my home office blog post series. Here’s what I’m using:

Home office 2018

Standing Desk: Xdesk Terra Pro – Expensive, but works wonderfully and is built to last. I love the 3 height presets, and I’ve got them set for sitting, standing, and standing on top of my Fluidstance balance board. (There’s also an Aeron chair out of frame – I wheel it around behind my desk when I’m not using it.)

Computer: Apple MacBook Pro 15″ (2018) – 6-core Intel Core i9, 32GB RAM, 2TB SSD, 4 pounds. CPU & storage advancements finally got the new MBP to the level where I could use it as my primary desktop. This new i9 is 15% faster on video exports than my old Mac Pro desktop. It sits on a Twelve South Curve stand.

Display: BenQ 32″ – I’ve had one since 2014, and it still works great, so I can’t bring myself to replace it. Mine is an older 2560×1440 version, but I’ve linked to its replacement, a 4K one that does 3840×2160. I only use one display – the laptop stays open only because I need a second monitor when I’m doing live webcasts. When I’m showing the students a PowerPoint deck, I like broadcasting a second smaller (like 1080p) monitor, and using my main monitor for the PowerPoint presenter view and the attendees’ Slack channel.

Around the back of the laptop:

Behind the Music

Dock: CalDigit TS3 Plus – the MacBook Pro plugs into here with a USB-C cable, and then my video, audio, Ethernet, and USB stuff plug into the CalDigit. The dock is supposed to also be able to charge the laptop at the same time, but I ran into problems with that: whenever the laptop went to sleep, the dock wouldn’t work again normally until it was unplugged from power and then plugged back in. I ended up using a separate USB-C power cable.

Audio input: Focusrite Scarlett 2i2 – inputs for pro-grade XLR microphones, output to USB. My headphones are plugged into the Scarlett, and the Scarlett also outputs a monitor feed into the headphones. This means I hear everything: my own outgoing audio (to check my levels), plus the incoming audio from other co-presenters. I use B&W P5 headphones because they’re really comfortable for extended periods, but I only use them for presenting, not music.

Microphone: Electro-Voice RE-20 on a Rode PSA1 Stand – I chose this after reading Marco Arment’s microphone comparison. I’d tried a few expensive shotgun microphones mounted on stands outside of the audience’s view, and I was just never happy with the sound. Marco writes that the RE-20 is “very forgiving of amateur mic technique,” and wow, has that been right for me. I don’t need a pop filter, and it hardly picks up the echoes in my concrete & glass rooms. I do have to stay right on top of the microphone, though.

With the microphone stand mounted on the desk, and the webcam mounted on top of the monitor, it means I don’t have to adjust anything when I raise or lower the standing desk. Everything moves along with me. The studio lights stay in one position, but they cover so much area that it doesn’t really matter.

Speakers: Audioengine A2+ – No bass whatsoever, but that’s fine for high rise life.

Input devices: Apple Keyboard and Trackpad – Apple’s input devices are polarizing: either you love huge glass trackpads, or you hate them.

Half-hour hourglass: Esington Glass – When I’m working on something, it’s easy to lose myself in focus. I’ll start digging into a client’s indexes, and next thing you know, 4 hours have gone by and I haven’t even looked at their queries. This helps me divide time in a non-intrusive way. As I’m working, I’ll glance over at the hourglass to see if I’ve run out of time on that task yet. I like this better than setting an alarm because alarms are intrusive, popping in when you’re not quite at a break point. This lets me check myself at more natural break points. Plus I like having something analog on my desk.

Video and lighting setup

Another change here this year: I’ve switched from a pair of big light box diffusers to a single ring light:

One ring to light them all

Light: Neewer LED Ring Light – video bloggers rave about these for their flattering light. I didn’t really care about that – I just wanted something more compact than my old light box setup because my San Diego home office is much smaller than my Chicago one. (I have the 18″ kit, but I linked to the 12″ kit – it makes more sense for most folks.) It’s plugged into a WeMo smart plug so I can turn it on/off with Siri.

Webcam: Logitech Brio 4K – the gold standard right now. There is a huge difference between this webcam and the rest of ’em out there. The webcam is mounted in the middle of the ring light, not on top of the monitor – but for me, that’s not about making the picture look good. It’s because monitor-mounted webcams shake a little when I’m typing, leaning up against the desk. I want the picture to be rock-solid steady, so mounting the webcam on the lighting tripod does the trick. It does mean I have to raise/lower the tripod separately whenever I raise/lower the desk, though.

Here’s what it looks like in the daytime:

Would you buy a used clustered index from this man?

And here’s a webcam shot from the wee hours of the morning, with only the ring light on, and no light coming in from outside:

Logitech Brio with ring light only

The results were better with the two light boxes because they filled the room more evenly, leaving no dark shadows at the bottom right. Made it look like it was normal daylight even when I was up at 3AM. I’m fine with that tradeoff to have more space in my office though.

(No, my desk isn’t centered on the concrete, and I haven’t decided whether that bothers me enough to fix or not, hahaha. I like this desk position because it leaves me enough room in my office for a lounge chair next to the windows.)

Under the desk

I’ve mounted a few things under the desk to keep things nice and clean up above:

Yes, yes, I’ll clean up the cables later

RAID array: Blackmagic Multidock – a Thunderbolt enclosure with four 1TB SSDs in RAID 0. I use it for temporary VMs and encoding/uploading training videos. I’ve had a lot of consumer-grade external arrays over the years that kept mysteriously dropping offline under heavy load, and I finally said screw it, lemme get something seriously studio-grade. It’s wonderful and silent.

Audio gain: Cloudlifter CL-1 – basically, it amplifies microphones. The microphone plugs into this, and then this plugs into the Focusrite. The Focusrite provides phantom power, and the Cloudlifter uses that power to boost the Electro-Voice’s signal.

I still have my old Mac Pro mounted under the desk – need to unmount that this week and eBay it.

A Day in the Life of Brent Ozar: July 23, 2018 #SQLCareer

Steve Jones asked data professionals to cover four days in our lives, so I blogged about a normal non-client-facing day, and then an unproductive day.

Today is different: I’m client-facing. We specialize in a 3-day SQL Critical Care®:

  • Day 1 – we meet with the client to dig through their SQL Server together while we ask them questions about their database, indexes, queries, designs, RPO/RTO goals, hardware, and more.
  • Day 2 – we split up. The client goes back to doing their thing, and we write the findings for them. (You can see examples of the findings on that page above.)
  • Day 3 – we meet again to deliver the findings, which are a mix of consulting and training. We tell you the fastest route to relief for the pains your SQL Server is facing, and make sure you’re confident moving forward. (Sometimes clients hire us to fix the problems directly, too, but ideally we’d rather show you how to fix the pain permanently than get you hooked on the expensive narcotics of pain-relief consulting.)

Today is day 1. Let’s make the magic happen.

6:00AM-6:30 – Emails. Small catch-up stuff.

6:30-7:30 – Reading and learning. Not a lot of RSS stuff from overnight (no surprise, since it’s Monday morning) but Hacker News has a really interesting discussion: The Secret Life of an Autistic Stripper. Forget the article and the inflammatory title – the comments are just interesting to read because there’s such a wide spectrum of personalities in the IT community. Answered a DBA.se question about reading the MySQL transaction log. Happy to see the photos coming in from the Chicago-Mac race, thinking back to when I did it. Saddened to read of a sailor who fell overboard and went missing, and his life vest didn’t inflate. That race is no joke.

7:30-8:00 – Breakfast. Coffee and yogurt. While eating, drool over a cocaine-tastic 1986 911.

8:00-8:15 – Email Richie. He was out last week, so I catch him up to speed on the Github issues I filed over the last week. We have 117 open issues at the moment with a variety of strategic and tactical stuff, pull requests, etc., and I’m sure he’s got a ton of notification emails, so I wanted to help prioritize stuff.

8:15-8:45 – Review client DMV data. Clients run an app that sends us an Excel spreadsheet with data from sp_Blitz, sp_BlitzCache, sp_BlitzIndex, etc. sliced and diced a few different ways – like their plan cache sorted by different metrics – plus a lot of their query plans. I spent some time with it last week, but I want to refresh my memory.

9:00-10:00 – Call starts, talk strategy. We talk about their pain points – the most important issues they want solved during the engagement – and I talk about a few big-picture challenges. For example, this client has a ~10TB database, and in the event that someone runs an “oops” query like dropping a table, they want to be able to recover to within 1 minute of data loss, with under 1 hour of downtime.
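
If you’re wondering how we sanity-check the data-loss half of a goal like that, here’s a rough sketch of the kind of query you could run against msdb’s backup history – illustrative only, not the client’s actual process:

    /* How far apart are the log backups? The biggest gap over the last
       week is your worst-case data loss if someone runs an "oops" query. */
    SELECT database_name,
           MAX(DATEDIFF(MINUTE, prev_finish, backup_finish_date)) AS worst_gap_minutes
    FROM (
        SELECT database_name, backup_finish_date,
               LAG(backup_finish_date) OVER (PARTITION BY database_name
                                             ORDER BY backup_finish_date) AS prev_finish
        FROM msdb.dbo.backupset
        WHERE type = 'L'  /* log backups only */
          AND backup_finish_date > DATEADD(DAY, -7, GETDATE())
    ) AS log_backups
    GROUP BY database_name
    ORDER BY worst_gap_minutes DESC;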

10:15-12:00 – Analyzing query memory grants and query plans. We were called in for unpredictable, random performance slowdowns. Thanks to my research into the client’s data ahead of time, I know someone’s running queries that get a really big memory grant, but then don’t use it, and finish quickly. We run sp_BlitzCache @SortOrder = ‘memory grant’ and catch a few – including this gem where SQL Server’s cardinality estimation has gone haywire:

Microsoft, would it kill you to use commas

To be clear, it’s not really a bad query: SQL Server’s just making some really questionable guesses about how many rows will come back. In actuality, only a couple hundred thousand rows come back. We talk about the source of those queries, how to tune them, and whether we can offload them to a different server. The server’s other bottleneck is queries going parallel and burning CPU power, so we start with sp_BlitzCache @SortOrder = ‘cpu’ and walk through some of the top offenders.
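
Both of those are plain ol’ calls to sp_BlitzCache from our open source First Responder Kit, so if you wanna try the same approach on your own server, it’s just:

    EXEC dbo.sp_BlitzCache @SortOrder = 'memory grant';  /* queries asking for big grants */
    EXEC dbo.sp_BlitzCache @SortOrder = 'cpu';           /* the top CPU burners */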

12:00-1:00 – Lunch break. Erika made chicken chili.

1:00-2:00 – Analyze VMware configuration & performance. There are multiple ways you can fix a CPU bottleneck, and the goal of the SQL Critical Care® is to find the right one for a given client. This VM’s wide NUMA configuration made things tough for VMware’s CPU scheduling, and I wanted to find out why it was built that way, and if the team was open to changing it. To learn more about this topic, check out Frank Denneman’s excellent NUMA Deep Dive series.

2:15-3:00 – Test trace flag 2335 behavior. The current production server was running this trace flag, and nobody knew why. The resulting query plans showed some really odd memory grants. We took one of the queries to a QA server where we could flip the trace flag on & off. The devil’s always in the details on that kind of thing. I’ve got the data I need to build their findings, so I bid them adieu.
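
If you wanna run that kind of before-and-after test yourself, the general shape looks like this – the demo query here is hypothetical, and QUERYTRACEON needs elevated permissions, so this is QA-server stuff only:

    /* Globally, the way the client's production server had it: */
    DBCC TRACEON(2335, -1);
    /* ...run the query, save the plan & memory grant, then turn it back off: */
    DBCC TRACEOFF(2335, -1);

    /* Or scope the flag to one test query at a time (hypothetical query): */
    SELECT COUNT(*)
    FROM dbo.Posts
    WHERE LastActivityDate >= '2018-01-01'
    OPTION (QUERYTRACEON 2335);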

After client calls finish, I need to walk away from the data for a while before I start assembling findings. I find that if I just start writing up their deliverables right away, I can’t see the forest for the trees. I do one final set of notes in the client’s file, then close it all down for the day.

3:00-3:30 – Emails & reading. I start by getting back to inbox zero, then check Feedly for updated blog posts.

3:30-4:00 – Testing ConstantCare.exe. Richie being the development machine that he is, he’s already implemented a few of my notes from this morning, and needs me to test an updated build in preparation for an early access build for clients.

And I’m out! Very productive day. Tomorrow I’ll work on their findings, and then deliver ’em on Wednesday. Thursday & Friday, I’m proctoring Drew Furgiuele’s PowerShell class while updating my slides for my upcoming Mastering Server Tuning class. Next week, I’ll be out of the office as we move to San Diego.

I do have a few other styles of days, and I’ll blog a couple of those in August:

  • Mentoring days – when I analyze SQL ConstantCare® client data in Power BI and send them advice emails
  • Onsite days – when I fly to a client’s office, assess their servers, and then teach training classes relevant to the problems they’re facing
  • Teaching days – when I host a class (but that one doesn’t lend itself well to blogging)

A Very Unproductive Day in the Life of Brent Ozar: July 18, 2018 #SQLCareer

Yesterday, I blogged what I did in a random work day per Steve Jones’ suggestion. And yesterday, I said I’d blog today as well because it was going to be a different schedule.

You shouldn’t read this. It’s useless.

It’s useless because as I look back on it, I didn’t accomplish much of anything on Wednesday, July 18th. That’s not rare – there are plenty of days where I feel like I don’t make enough progress. Thing is, Steve wanted us to share the tools we’re using and what problems we’re solving, and this post is pretty well devoid of that information because I didn’t solve problems yesterday.

I didn’t know that when I started the day, obviously.

I was going to delete this, but screw it – it’s already written, so presented for your amusement: a day when I don’t solve any big valuable problems, just tread productivity water:


4:15AM-4:30 – Emails. A quick round of emails before showering and going out for coffee.

5:15-6:00 – Reading blogs. Not a lot of new stuff posted yesterday. I answer a DBA.StackExchange.com question about setting up a readable replica and one about cross-database queries in Azure SQL DB. I refresh the tracking page for my new MacBook Pro – it’s made it from Shanghai to Chicago. C’mon, little buddy. (Sitting)

6:00-7:30 – Learning about Lambda event sources and limits. Looking at my calendar, today is an appointment-free focus day that I’ve blocked out for design work on SQL ConstantCare®’s index and query recommendations. With that in mind, I wanna think about how we do that processing. So far, most of the data is hosted in AWS Aurora PostgreSQL, and the application code runs in AWS Lambda using the Serverless framework. However, as we start to analyze query plans, I don’t want those big XML plans stored in a relational database, nor do I want the advice stored there – the relational database is just too much of a bottleneck if we wanna process a lot of plans fast. To scale the new process, could we:

  • Store each incoming execution plan as a file in S3
  • Have the new file trigger an AWS Lambda function to analyze that plan and make recommendations (and will the function finish in time, given the 300-second limit per function)
  • Store the recommendations somewhere other than a database (say, in DynamoDB or in another S3 file)

To pull that off, I need to think about whether the plan analysis functions will need any data from the database (like data about the client’s server, or rule configuration data). I don’t wanna do the architecture design, mind you – Richie’s the architect here – but I just wanna learn enough to have a good discussion with him about how we do it. I finish up my learning by searching for related terms, plus phrases like “problem” or “bug” to see issues people are running into, like this one.

7:30-8:00 – Emails. A sales prospect from yesterday signed their contract, so I did some logistics with that. A training class student sent in DMV data about their SQL Server, so I analyzed their plan cache and explained the issue they’re facing. FedEx emails to say my new laptop is on the truck for delivery by 10:30AM today. Woohoo!

8:00-8:30 – Break. Make a pot of coffee, down a yogurt, start the laundry.

8:30-9:00 – Emails. More sales inquiries, and an attendee question from my user group presentation last night.

9:00-10:00 – Postgres query writing. In our development environment, Richie’s already got ConstantCare.exe sending in index and query plan data. I open up Postico – it’s a Postgres client for the Mac, like the equivalent of SSMS – and start spelunking around in the tables.

SQL Server DMVs stored in Postgres

This takes a second to wrap your head around, but:

  • ConstantCare.exe runs on the clients’ machines
  • It queries the clients’ SQL Servers, getting data from DMVs
  • ConstantCare.exe exports that data to JSON format, and uploads it to the cloud
  • In the cloud, AWS Lambda functions import those JSON files, and insert them into a database (AWS Aurora Postgres)

So I’m querying SQL Server DMVs, but in Postgres, and with ever-so-slightly different names (note the underscore-style table names, like index_usage_stats rather than sys.dm_db_index_usage_stats). At first glance, you might think, “Ah, so he’s just editing sp_BlitzIndex so it works off these new table names!” But no, we’re starting again from scratch up in the cloud, because up here, we have the benefit of historical data.

For example, the first easy query I start with is warning folks when they have a heap with forwarded fetches, something sp_BlitzIndex already alerts you about. However, in the cloud, I wanna compare today’s forwarded fetches to yesterday’s. You might have forwarded fetches reported on a table that isn’t in active use anymore, simply because SQL Server reports forwarded fetches since startup. No sense in pestering folks about something that isn’t a problem.
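
Here’s roughly what that looks like – the table and column names below are hypothetical stand-ins for our real schema, but the day-over-day logic is the point: only warn when the count actually grew since yesterday’s collection:

    /* Hypothetical schema: one row per heap, per server, per daily collection. */
    SELECT today.server_name,
           today.table_name,
           today.forwarded_fetch_count - yesterday.forwarded_fetch_count AS new_forwarded_fetches
    FROM index_operational_stats AS today
    JOIN index_operational_stats AS yesterday
      ON yesterday.server_name     = today.server_name
     AND yesterday.table_name      = today.table_name
     AND yesterday.collection_date = today.collection_date - 1  /* Postgres: date minus one day */
    WHERE today.collection_date = CURRENT_DATE
      AND today.forwarded_fetch_count > yesterday.forwarded_fetch_count
    ORDER BY new_forwarded_fetches DESC;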

10:00-10:30 – Breakfast, and the new laptop arrives. Erika’s up, makes eggs. It takes a special relationship to be able to live together, work together from home, and even work for the same company. We’re pretty much around each other 24/7, and it works. If she passes away before I do, I’m gonna be single for the rest of my life.

Unpack the new laptop, put on the new protective case, boot it up, and point it to my Time Machine backups on the network. Apple makes it delightfully easy to buy a new machine. (We’ll see how the Core i9 throttling issue goes, although I rarely run my laptop at 100% CPU for 20-30 minutes as shown in the video.)

10:30-11:00 – Communication. I tweet a photo of the new laptop, which leads to some discussions on Twitter and LinkedIn, and then I knock out some emails.

11:00-11:45 – Office Hours podcast. Every Wednesday, we get together in GoToWebinar, take live questions, and fumble our way through answers. This podcast would take a lot more out of my calendar were it not for the wizardry of DigitalFreedomProductions.com. They record it, upload it to our YouTube channel, make a podcast out of it, transcribe the audio, and create a blog post with the transcription.

11:45-12:15 – Emails. Customer emails, contract negotiation with a prospect.

12:15-1:30 – Lunch. Erika and I head to Wishbone where we both get the same thing every single time: blackened chicken with sides of spinach and red beans (no rice.) We started doing Weight Watchers quite a while back, and I’ve just about hit my target weight (185), but that food is good enough (and low points enough) that I’d eat it even if I wasn’t watching my delicate figure.

1:30-2:00 – Erik’s new book arrives. All productivity stops. So much for this being a focus day.

Great Book, Erik!

2:00-3:30 – Back to Postgres query writing. Building my first proof-of-concept for the rebuild-your-heaps script. This is the first index analysis script we’ve done in Postgres, and the first one is always the hardest, putting together the right joins between the right tables, comparing to yesterday’s forwarded fetches, and looking for edge cases that would break it. I’m by no means done – I’ll need to keep going on this tomorrow, but once I’ve got the first index analysis script done, the next one will be a lot easier.

3:30-4:00 – Emails. Signing an NDA for a prospective client, answering a few mentoring questions.

4:00PM – Done. Calling it a day. Not unusual for me – I tend to get a lot done in the mornings, and then coast to a stop in the afternoons. I didn’t make as much progress as I’d like today, but that’s life. Thankfully, I have the next 2 days on my calendar blocked out for the same task, and I’ll likely do better tomorrow. I won’t blog my day again tomorrow though – I’ll hold off for a few days until next week when I’m working with a client. That’ll show a different kind of work day.

5:40-5:45 – Update SQLServerUpdates.com. I get notified that Microsoft published SQL Server 2017 CU9, so I hop into the home office to update the site.


So there you have it. I have no idea what you’ll think about that day, but if you enjoyed this (or if you think *I* work a lot), read Jen Stirrup’s Tuesday.

A Day in the Life of Brent Ozar: July 17, 2018 #SQLCareer

Steve Jones asked data professionals to cover four days in our lives, so here goes the first post: what I did on Tuesday, July 17th, 2018. Nothing special about this day, just the day that Steve prodded me to take part, heh. My days are radically different, so I’m glad he said to do this four times – otherwise I’d feel guilty about just posting this one without more explanation. (I’ll do this again for July 18th because it’ll look totally different, but then hold off for a few days before posting another.)

4:07AM – I wake up without an alarm and do a round of emails before I hop in the shower. (I looked at the first email-sent time to figure out when this happened, hahaha.) I answer a few client and personal emails – things that are easy to tackle from the phone. After a shower, I walk to my nearest coffee shop, grab a big cup, and come back to my home office to work.

4:45-6:45 – Learning. Read the blogs from overnight – in addition to the 362 SQL Server blogs I subscribe to (OPML / Feedly), I also read HackerNews and a lot of non-technical stuff. I tend to bookmark the very best (or most interesting) stuff on my Pinboard feed, and some of those end up in our Monday newsletters. Microsoft published a couple of cumulative updates, so I post them on SQLServerUpdates.com.

(As I mentioned in my interview with Kenneth Fisher, I try to reserve the pre-7AM time for learning. Today it was reading, but sometimes it’s opening up SSMS and trying to understand something that has been stumping me for a while, or learning how PostgreSQL execution plans work, or whatever.)

6:45-7:00 – Breakfast. A fresh pot of coffee and Stonyfield yogurt. Michael Weston, look out.

7:00-9:00 – T-SQL coding. I need to do design work on SQL ConstantCare®’s new index functionality, so to get in the frame of mind, I design & code an sp_BlitzIndex change I’ve been wanting to make for a while. In building the script for Erik to test the changes, I also update my Mastering Index Tuning class demos about the problems with heaps.

So are the hours of my day

9:00-10:00 – Communication time. I answer a few mentoring emails from SQL ConstantCare® clients. I break some bad news to an upcoming client: they hired us for performance tuning, but while analyzing their data for that engagement, Erik found out they have database corruption across multiple AG replicas. In more enjoyable news, I answer a DBA.se question that happens to pop up in our company Slack room. (We get notifications for new SQL Server questions, so if they look fun, we jump in. That one involved SQL Server on a forklift, so yes, I was intrigued.)

10:00-10:30 – Break. Walk around, make coffee, read blogs. I see my first Minerva Blue Porsche 911 and then wipe the drool off my keyboard and monitor. The audio of it driving around is wonderful.

10:30-12:00 – SQL ConstantCare® design. We’ve got a new version of ConstantCare.exe that collects index and query metrics, then tells you what index and query changes to make in order to get better performance. Started designing the business logic for modifying SQL Server’s missing index recommendations and building better ones. While designing it, I review the past sp_BlitzIndex data for several clients, thinking about how I would have built rules to handle their index recommendations. I store my thoughts in a Github issue.

12:00-12:30 – Client call. I’m doing a 3-day SQL Critical Care® with a company next week, but looking at their DMV data, the cause of their really nasty RESOURCE_SEMAPHORE issue was immediately obvious. We talked through it so they could start researching the source of those queries before our engagement.

12:30-1:30 – Lunch. Erika & I headed out to Falafel & Grill. When you work at home, it’s nice to get out and enjoy some fresh air.

1:30-2:00 – Order taking. A few sales prospects emailed in about arranging future work, so I coordinated dates & contracts. I hesitate to call this sales because I don’t do a lot of salesing – just responding and going, “Yep, that’s a good fit, here’s a contract with our next available dates.” The vast majority of our work involves urgent performance emergencies, so we’re usually only booked about 2-3 weeks in advance – really short for a consulting firm. As a result, I’m totally at peace with only having 2-3 weeks of work on the books at any given time. In the past, our August drop-offs scared the hell out of me, but now at least I know to expect them somewhere in here. That’s why I’m doing the foundational design work for SQL ConstantCare®’s next set of recommendation rules now – so if Erik & Tara aren’t billable for a couple/few weeks in August, they can still be productive working on the rule design & execution.

2:00-3:15 – Design continues. Thinking about how I want to deliver index recommendations, I decide that we need to start including a text file attachment in the SQL ConstantCare® emails. I spend more time in sp_BlitzIndex and Github.

3:15-4:00 – T-SQL R&D teamwork. Erik’s working on a new training class module about parallelism, and he’s got a query that exhibits great demo-friendly behavior. I run a few tests with it on the Stack Overflow database and use sp_BlitzFirst to check wait stats, and we have a fun discussion about CXCONSUMER behavior, and how it might not be that harmless after all. We’re seeing some really odd, unexpected stuff on 2017, and I would love to dig deeper by trying earlier versions. (Later, looking at our Slack channel, I can see that he kept going, and I’m looking forward to seeing what he found.)

At the same time, Tara’s examining the data for her client tomorrow, and she spots a T-SQL anti-pattern in their queries. We all talk about a possible fix in Slack, and she starts working on a demo for their findings.

(At this point, you’ll notice that we’ve done prep work for clients ahead of their engagements – that’s because Richie wrote an app for us that sends us the results of diagnostic queries like sp_Blitz, sp_BlitzIndex, sp_BlitzCache, query plans, etc. That way, when we meet face-to-face with the clients, we’re already well-equipped to discuss the problems they’re facing. Steve asked us to write these blog posts so you could see what tools & techniques we use in our day jobs, and while you can’t get that app, you can get the exact same scripts we use every single day.)

Normally I would end my day here – I try to quit work around 4PM – but today is unusual in that it keeps going:

4:00-5:45 – Drive out to the burbs. I live in downtown Chicago, and traffic going out to the burbs is legendarily bad. It takes me 95 minutes to go 20 miles. There are only 2 reasons I head away from downtown: to go to the airport, or to speak at the…

5:30-8:15 – Chicago Suburban SQL Server user group. I was the first speaker to open the new group, and I present there one last time before moving to San Diego in a couple of weeks. Dinner was pizza at the user group.

8:15-8:45 – Drive home. Atypically long day for me – if it wasn’t for the user group, I’d have been done at 4PM. See you tomorrow!

WordPress 4.9.6 GDPR Compliance Demo Videos

In Tuesday’s upcoming WordPress 4.9.6, there are a couple of new features to help with GDPR compliance.

First, you can create a privacy policy page:

Second, you can export a user’s data and erase it:

The current schedule is for 4.9.6 to go out this Tuesday, May 15th. If you’d like to take part in the discussion about these features, here are a few useful links:

How the Company-Startup Thing Worked Out For Me, Year 6

It’s time for my annual update on the wild ride. A brief recap of what’s happened so far:

When last we met, finishing up year five, I was excited because we’d finally assembled a profitable product fulfilled by a fun, knowledgeable team, sold by a super-savvy salesperson. It was time to hit the gas and really start scaling the consulting business.

Brent Ozar Unlimited, early 2016 at Cliff Lede in Napa

We hit the gas –
and nothing happened.

It seems obvious now in retrospect, but emergency rooms can’t drum up more business by themselves.

Once you scale up large enough to handle the incoming emergencies, you’re kinda done. Even a great salesperson couldn’t go out and find new emergencies for us to handle, at least not without violating some kind of law. We realized that our widespread reputation & marketing efforts meant that people already called us when they had expensive emergencies – new leads weren’t the problem.

Oh, sure, we could have lowered our billable rates – we were effectively $300/hour – and started to handle non-emergencies. Our clients would have loved to be able to pay, say, $150/hour and have us tackle larger ongoing projects – just as they’d love to have a trauma surgeon take care of their sore throat.

But we weren’t cheap amateurs.

We had a team of trauma surgeons – top-notch people who got 6 weeks paid vacation per year, 10 federal holidays, health insurance, training, home office funds, time to tackle fun projects, and a swank annual retreat. If we wanted to lower our prices, we would need to lower our benefits, lower our staffing quality, and/or make our staff work longer hours on crappy projects. Those just weren’t interesting options to me. I didn’t wanna build that kind of company. (Nothing against those who do – gotta pay the bills – but I wanted to continue to build the best company to work for.)

Without enough business coming in to keep all the trauma surgeons busy, the consulting employee side of the business started losing about $5k per week. When we hit our normal summer slump, it got worse, and I didn’t see any signs of being able to turn that around. Jessica (our sales pro) and I had tried everything we could think of to bring in more emergency room business, and it just didn’t work.

If I continued to try to make that dream happen, I ran the risk of running the business completely out of money.

I couldn’t care less about me personally running out of money – if the business doesn’t work, I can just throw in the towel, go back to being a full time DBA, and I’ll be fine. But I absolutely, positively couldn’t run the business out of money and run the risk of the employees, Jeremiah, and Kendra not getting paid. I’ve worked for companies that skipped paychecks, and I’ve always sworn I would never let that happen to my own employees.

So in July, it was time to hit the brakes.

I had to lay off 3 of my best friends to save the rest. Letting go of Angie, Doug, and Jessica was tough because they were perfect employees. I couldn’t have asked for better teammates and friends. It’s really hard to tell people, “This has nothing to do with you, and is solely due to my inability to make this new business model work.”

I told the team members, then I published a blog post about what’d happened. Being transparent about it made the whole thing much easier because we got a lot of support from the community. My #1 mission was helping my friends find great jobs fast, and of course they did.

I wanted to make sure the remaining team members knew exactly where we stood as a company, so in team meetings, I shared the P&L statements with ’em. I didn’t want them losing sleep over how the company was doing. We’d taken corrective action early enough that we were still okay financially.

Erik, Richie, and Tara made it clear that they’d do whatever it took to make sure the company stayed on positive ground. They were a huge help getting through some tough times.

I re-calculated my personal & business goals.

The 2016 business was like a stool with 4 legs – 3 of which were losing money:

  • Training – which had always been the strongest leg of the stool, a profitable business that funded everything else (but that meant I was teaching classes without getting paid for it)
  • Consulting – which we’d been trying to grow, but were losing money in the process.
  • Online Services – we hired Richie to build stuff, and I knew that stuff would take a long term investment and vision. This leg of the stool wouldn’t make money until at least 2018-2019, and I was fine with that.
  • Community – no incoming revenue here, just giving back in the form of webcasts, blogging, scripts, and podcasts, with the hope that folks will remember us when they need SQL Server help. This leg of the stool would never be profitable, but spending money & time here was just something that mattered a lot to me.

I needed to reprioritize to get back to 2 profitable legs of the stool (training AND consulting) – still not stable by any means, but a little more doable than 1 profitable leg. I didn’t want to close the consulting altogether – it was perfectly profitable with just 2 trauma surgeons – and I definitely didn’t want to close the services leg. We had enough incoming work to keep Erik & Tara comfortably busy at good billable rates.

(Yes, that means 2 stable legs out of 4: community & online services would still lose money for the near future. There would still be one hell of a lot of risk. That’s life when you start a company – the more reward you want, the more risks you have to be comfortable taking. Either take fewer risks, or get more comfortable managing risk.)

I did some soul-searching to figure out what I needed to let go of, and what I needed to embrace.

One of my last classes in 2017, San Diego

Training changes:

  • I tried teaching live 3-4-day classes online – I love teaching, but in-person classes are expensive and the travel sucks. I ran experiments to see if I could run the same classes online, and they worked. They worked so well that in Year 7, I gave up in-person classes altogether to spend more time with my wife Erika, and my dog Ernie. (When Ernie was later diagnosed with cancer and given about six months to live, I was so grateful that I’d cleared out my schedule.)
  • I changed our online training to a subscription model – aiming to further tune my passive income. (This didn’t work as well as I’d like, but I spent less time running experiments on that as the live online training did so well, and I focused on that instead.)

Community changes:

  • I open sourced sp_Blitz and friends – I’ve always admired the open source community, and figured it was time to start an MIT-licensed open source project in the SQL Server space.
  • I started GroupBy.org – I wanted to figure out how I could bring an open source mindset to the database training community. Open source works because ideas and execution succeed on their own merits, with less hidden politics. I figured I could make that work in online events, too.
  • I turned in my Microsoft MVP award – there are a very limited number of spots, and a lot of people who want to enjoy the experience. I’d been blessed to have it for several years, had a lot of fun, but I felt that someone else would probably get more value out of it than I was getting. I slipped out the side door quietly.

(Briefly) My Audi RS6

Personal changes:

  • We downsized, moving from a gorgeous downtown penthouse into a much smaller mid-rise, and cut some planned vacations.
  • I sold my beloved Audi RS6 after only a short ownership. That taught me a lot, though: I couldn’t justify a $300/mo parking spot for a car I drove so rarely that the battery was dead whenever I went to go have some fun. I resolved not to buy another fun car until I either retired, or had to start driving back into an office for work again. I really, really miss that car, but I still wouldn’t drive it more than once every couple of months.

Overall, the changes worked. We were smaller, but we were profitable and I had way less stress again.

The business turned around in fall 2016
and I could invest again.

We signed some really fun clients – the public highlight being a series of projects with Google. I was really proud of the work we did, and we kept increasing the quality of our consulting and our open source scripts.

Black Friday 2016

In November 2016, during our Black Friday sales, I made more in one month than I’d ever made in a year as a DBA. That made me feel a lot better, although I still wasn’t able to take all of that money off the table – it kept getting plowed back into building a future leg of the business, services.

Time for a quick history lesson of my scripts/tools/services:

Version 1, late 2000s: My Blitz script was one big long batch of queries (think Glenn Berry’s diagnostic scripts) that told you all kinds of things about your SQL Server. The idea was that you’d highlight each query, run it, and then interpret the results.

One of the queries was a database mail test, and the comment said, “Make sure to change the below email address to be yours so you know if email is working.”

You can guess what happened: I got test emails every day.

I could tell by the server names & metadata that database professionals all over the world, at companies big and small, were just banging F5 on the entire Blitz script, hoping it would somehow tell them what was wrong with their servers. They just wanted a simple answer.

Version 2, 2011: With that lesson learned, I built sp_Blitz: a stored procedure that you could just run, and it would give you a prioritized list of what was wrong with your server. That became massively popular.
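
The whole interface is one call (plus a few optional parameters if you wanna dig deeper):

    EXEC dbo.sp_Blitz;                       /* prioritized list of what's wrong */
    EXEC dbo.sp_Blitz @CheckServerInfo = 1;  /* also include server inventory details */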

Version 3, 2013: Jeremiah built a Windows app that would run sp_Blitz and give you a PDF report. We ended up abandoning it because the support was painful, and we couldn’t get it into the Windows Store easily. But something strange happened here too: we kept getting requests for it, and requests for updates. At least once a month, I got emails from people asking when we would update it. That never left my mind.

I had a vision of something a lot bigger that I wanted to build, but first, I wanted a proof of concept of the architecture I was dying to use.

First, Richie built PasteThePlan.

In Year 6, Richie built & launched PasteThePlan, a page where users could copy/paste in execution plans, get a shareable link, and post those plans on Q&A sites. PasteThePlan represented the first publicly visible fruit of Richie’s labor.

PasteThePlan let us dip our toes into serverless application design. I wanted to build something in serverless that didn’t represent a mission-critical app: if it went down, or if somebody lost some of their data, life would go on. We used AWS Lambda because it was the most complete product when we started work in mid-2016.

By the end of 2016, I was a huge, huge fan of serverless. @Cloud_Opinion sometimes writes parody, but their Feb 2017 article on the upcoming SaaSocalypse rang really true for me. I believed that with the right product, the right architect (and Richie definitely was one), and easy scalability in the form of serverless design, I could build a really meaningful application with a very low capital cost of entry.

Next, he started work on SQL ConstantCare®.

There are a lot of people out there who don’t really want a monitoring tool. Monitoring tools have dashboards that devolve into a constant @Swear_Trek alarm, and they continuously send out non-actionable alert emails. PAGE LIFE EXPECTANCY IS LOW! DISK QUEUE LENGTH IS HIGH!

A lot – not all, but a lot – of admins don’t want dials and charts. They just want to know what to do.

So I wanted to build something that:

  • Polled your SQL Servers and gathered data you were comfortable sharing
  • Sent that data up to our servers in the cloud
  • Let us run analytics to generate recommendations – no artificial intelligence, just plain ol’ real intelligence
  • Later, sent you just one email with a prioritized list of actions to take (and if you don’t need to do anything urgent, tell you that, and let you get on with your job)

Richie busted his hump on that for the second half of Year 6. It didn’t end up launching for quite a while, but you can hop over to my SQL ConstantCare® posts on BrentOzar.com to see how that launch is going.

Alaska with Dad, Aug 2016

Year 6 was a big turning point for me personally.

I like writing these posts with a good year of distance between me and what happened. It helps me look back with a lot more perspective and think about what were the most important moments.

The biggest moment by far was coming to the realization that I personally didn’t have the talent to build the 10-15-employee consulting division that I wanted to build.

At the moment it hit me, I put at the top of my to-do list, “I am overcommitted and under-equipped.” It’s still there right now:

What I see when I try to add a new task for myself

That’s proven to be the most valuable realization of year 6, if not my entire adult life. I still want to accomplish a bajillion things, but I’m no Elon Musk, nor do I have that level of work ethic. I love taking several vacations a year with my friends and family. I have to balance the tasks I add against the time I have. Focus means saying no.

<sigh>

I don’t like saying no. I like saying sure, I can do that, why yes, I’d love to tackle that problem, indeed, yes, that looks like a possible mission.

It’s a lesson I keep learning, and it’s timed perfectly with the emptying-out of my blog post queue here. I started this site to say things that didn’t feel like a good fit at the company blog, and for the last several years, I’ve published a post here every week.

This week marks the end of that streak.

I still have a ton of things I love writing about, but I’m juggling my focus a little. On my last personal career-planning retreat, I realized that while I love doing this, I need to take some of this time and allocate it to something else. I’ll still do my annual updates in this series, and I’ll keep my Epic Life Quest up to date. I’m just going to try to discipline myself into not writing here for the rest of 2018, and rebalance that time elsewhere.

Engagement Startup Costs: You Don’t Hire Consultants to Load Dishes

“Whaddya mean you won’t do ___?”

It’s a question I get every now and then from a prospective client. Most folks see our marketing, see that our consulting services page only has one thing on it, and understand that we’re very specialized.

But every now and then, someone contacts us after seeing our pages in their Google results over and over again, and they figure we’re up for anything. They say something along the lines of, “I just need you to look at this one query and fix it.”

And sometimes – not often, but usually when the prospect has been struggling to find someone willing to take the gig – the conversation becomes a little insulting, along the lines of:

“Whaddya mean you won’t fix one query for me? I thought you guys were experts. Are you telling me you don’t know how to do something that basic?”

Here’s the deal.

Our dishwasher

I load my own dishwasher. Erika does the cooking (when we’re not at a restaurant), and I load the dishwasher. I somehow find it relaxing – I’d rather load the dishwasher right now than walk past the sink and see dirty dishes in it. I like the task. It’s kinda zen-like, just me and the dishes.

But no, you can’t pay me to load yours. You can’t call me over to your house to load the dishwasher. Even if you happened to live near me, I’m not putting on shoes, a jacket, and dealing with the security of getting into your house – all just for a 5-minute task.

The startup costs of that engagement are just too high.

Now, if I happened to already be at your house – like if we were hanging out after you threw an excellent dinner party – then sure, you’d find me loading the dishwasher. Payment would be simply out of the question – it’d just be something you’d find me doing absentmindedly.

If you have what you think is a small task, then post it.

If all you need is a very small task that’s self-contained and well-defined, go to:

I help out at all three sites too – because there, the startup costs are extremely low. I can jump in there between calls, whenever I see something that looks interesting, and help out in seconds. When the questioner has put the right work in to define their question, I find it peaceful, zen-like, transferring knowledge from one person to another like moving dishes from the sink to the dishwasher.

But often, in the process of trying to write a clear, simple, well-defined question, the asker discovers that it’s not as simple as they thought. They start saying things like, “Well, to explain it, I need to talk about a few other things, bring in some context, get you security access – and it all has to be kept private.” At that point, it’s not really a simple question anymore, is it?

That’s where consulting engagements come in. Those requirements have a startup cost, and it’s not 15 minutes and $50.


Databases Eight Years from Today, 2018 Edition #TSQL2sday

Blog Posts
11 Comments

Almost exactly five years ago today, back in March of 2013, I wrote a post called Databases Five Years from Today. In it, I predicted:

  • You’d still be supporting 2005 and 2008 – while 7% of servers still running 2005 and 2008 might seem like a small number, it means that on average, every shop with 14 servers still has one of these boat anchors. (The folks at Quest recently told me the number of 2000 servers is still big too, but their monitoring app just stopped supporting 2000, so that was that.) I give myself a point on that.
  • There’d be no widespread adoption of Hekaton, columnstore indexes, or AGs – and I’d concur here. I don’t have solid numbers to back these up, but I bet less than 1 in 5 production servers has any of these 3 features. I give myself a point here.
  • We wouldn’t see massive widespread migrations to Azure SQL DB – but I was careful to point out that I was only talking about existing apps, whereas new development would likely start in non-SQL-Server places. I know Azure SQL DB gets a lot of press, but as of March 2018, the prediction date, the widespread migrations aren’t happening. The painful implementation of cross-database queries alone made it a non-starter (see the sketch after this list). I give myself a point here too.
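
To make that cross-database pain concrete: on a regular SQL Server, querying another database is a one-liner, but on Azure SQL DB circa 2018, the same query needed the elastic query feature set up first. Here’s a rough sketch – every server name, database, table, and credential below is made up purely for illustration:

```sql
-- On a regular SQL Server, a cross-database query is one line:
SELECT * FROM SalesDb.dbo.Orders;

-- On Azure SQL DB, the same query needs elastic query plumbing first.
-- (Hypothetical names throughout; requires a database master key.)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPasswordHere!1';

CREATE DATABASE SCOPED CREDENTIAL ElasticCred
    WITH IDENTITY = 'sqluser', SECRET = 'StrongPasswordHere!1';

CREATE EXTERNAL DATA SOURCE SalesDbSource WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'SalesDb',
    CREDENTIAL = ElasticCred
);

-- A local stand-in for the remote table:
CREATE EXTERNAL TABLE dbo.Orders (
    OrderId int,
    OrderDate datetime2
) WITH (DATA_SOURCE = SalesDbSource);

-- Only now does the "cross-database" query work:
SELECT * FROM dbo.Orders;
```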

3 for 3, that’s pretty good – especially given how far out on the lonely island I felt when making those predictions. But you know what’s interesting? Fast forward just one or two more years, and my accuracy would probably drop by a lot. Predicting more than five years out is really hard. Surely nobody could get that right.

The 100th T-SQL Tuesday says, “Predict 100 months out.”

Adam Machanic asked us to look out 100 months, or about 8 years. To give you some idea, here’s a sampling of my blog posts from 8 years ago:

  • MCM Prep Week: Interview with Joe Sack – I was just starting to go through the MCM program, and Joe was running it. Today, I run a consulting company, and Joe designs adaptive query processing, and we’ve also both changed employers at least twice since that post.
  • SQL Azure FAQ – the product was just launching. Since the post, it’s gone through more name changes than Joe & I have had jobs. It grew in ways you would expect (max database size is up to 4TB), and had some odd head fakes along the way (remember Federations?)
  • SQL Server Magazine Bankruptcy – I’m old enough to remember a time when this was a physical print magazine that you could hold in your hands.

So with that in mind, some of these predictions are going to seem a little wacko, but I’m aiming way far out. Let’s go from safest to riskiest bets.

In 2023, DBAs will still be a thing.

The safest bet in this post – this seems so incredibly obvious to me, but there are still “thought leaders” out there saying the opposite, so I have to put this down in writing.

There’s a chance we’ll be called Database Reliability Engineers, but the core parts of the job – designing, building, securing, and scaling data storage – are still going to be a lucrative career in 2023. In fact, it’s going to be even bigger because…

The data safety business will look like the car safety business.

Car manufacturers struggle with safety regulations: every country insists on having their own slightly different standards. For example, over in Europe, you can get adaptive headlights that automatically point out things you should be seeing, and dim specific areas of the light so oncoming drivers aren’t blinded.

Last minute add-on to the deployment

In the US? Nope – a 1968 law prohibits them.

Turn signals in the US: amber. In the EU: clear. Fog lights in the back of the car? Required in the EU, never seen in the US.

But car makers have to build vehicles that are sold everywhere, and countries refuse to agree on what’s “safe.” Manufacturers end up with fleets of lawyers, designers, and engineers building all kinds of regional variations depending on where a particular car is sold.

We don’t have that luxury in the database business: the same web site code base has to serve customers everywhere, and meet different regulations based not just on where the customer is surfing from, but on a combination of their location and what country issued their passport.

This is gonna suck, bad, and governments simply aren’t going to suddenly start cooperating on a single data standard that works for everyone. That’s not how governments work.

Update 2018/09/23: California passed its own privacy act (the CCPA), which doesn’t line up with the GDPR. There’s still no United States standard.

<= 5% of SQL Servers will run on Linux.

(To be clear, I’m counting the OS that SQL Server itself runs on – the other 95%+ will still have Windows as their primary OS. Yes, a lot of SQL Servers run in virtual environments on VMware, which technically makes them SQL-in-Windows-in-Linux, but that doesn’t count as running SQL Server on Linux. I’m also not talking about Azure SQL DB – I wouldn’t be surprised at all if Microsoft switched that over to Linux in the 8-year time span.)

Where’d I get the number from? Well, today in 2018, 72% of installs are from the last 8 years: SQL 2012/2014/2016/2017. That means 8 years from now, maybe 72% of installs will be SQL Server 2017 or newer – and 2017 is the minimum version for Linux support. Realistically, then, Linux installs can only land somewhere between 0% and 72% of the base.

Even if 1 in 10 new SQL Server 2017s were installed on Linux, that’d still only be 7% of the install base. This prediction is way safer than it looks. (I almost said 1%, but I think there’s a decent chance that truly large shops – like shops with over 1,000 instances – will use Linux, and even small adoptions there make a big difference in the numbers.)
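
For the arithmetic-inclined, the whole estimate boils down to one multiplication – a quick sketch using the rough 72% and 1-in-10 guesses from above, not hard survey data:

```sql
-- If ~72% of the future install base is SQL Server 2017 or newer,
-- and 1 in 10 of those runs on Linux, then Linux's share of the
-- whole install base is about 7%:
SELECT 0.72 * 0.10 AS LinuxShareOfInstallBase;  -- returns 0.0720
```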

Your developers will have several projects built with serverless architectures.

Right now, when I talk to data professionals about serverless architecture, I can almost hear them tuning out. I understand – until you’ve used it, it just seems so farfetched. But judging from our experience with PasteThePlan and SQL ConstantCare®, it’s utterly phenomenal.

Serverless is going to mean way more to you in 8 years than Docker containers, SQL Server on Linux, graph databases, or Hadoop. Your developers are going to be all about building apps on function-as-a-service platforms, and they’re going to wonder why databases are so far behind.

Your default new database will be in a cloud PaaS.

Five years ago, when someone asked for a new SQL Server database, you might have created it either on a shared physical server, or a shared (or rarely dedicated) VM.

Today, you probably default to creating a new database on an existing virtual machine. Most of those virtual machines live on-premises, but a significant percentage live in Azure VMs, Amazon EC2, and Google Compute Engine. You wouldn’t dream of deploying a new physical server by default.

Today, you likely wouldn’t respond with, “Sure, I’ve created you an Azure SQL DB. Here’s how you connect.” For SQL Server, your only two PaaS options today are Microsoft Azure SQL DB and Amazon RDS SQL Server. Microsoft’s on the cusp of releasing a 3rd option, Azure SQL DB Managed Instances. Their marketing site says it’s in public preview, but it’s not – you can sign up, but they don’t have enough staff to support new users.

By 2026, I think the next shift will already be over and done – just as we switched from physical boxes to VMs, we’re going to shift – but not to VMs in the cloud. In 2026, I bet your default new database will be in a Platform-as-a-Service option. It might be Azure SQL DB Managed Instances, or something else entirely.

Which brings me to the next prediction…

2 big clouds will offer an MSSQL-compatible serverless database.

Amazon’s got a head start on this, in preview now: Amazon Aurora Serverless is an on-demand, auto-on, auto-scale, auto-off database server with MySQL compatibility. You don’t pay for instances, availability zones, or regions – you just pay for the hours in which you’re actually running queries.

If you haven’t seen Aurora yet, the intro video does a great job of explaining why businesses hate databases:

I’m predicting a couple of very big leaps here:

  • Google and/or Microsoft are going to follow suit on Aurora Serverless’s pricing, and
  • Amazon and/or Google are going to offer their own Platform as a Service implementation of Microsoft SQL Server (like Azure SQL DB, but different) – or Microsoft is going to license it to them, or who knows, even open source some part of MSSQL that enables this to happen

Either of these bets is risky on an 8-year horizon, but I’m going out on a limb and making both. I’m going to hedge my bets a little, though:

  • They may not be compatible with the latest version of SQL Server – for example, if it came out today, I think it’d get serious adoption even with just SQL 2012 compatibility. (That’d be 61% of the market, remember – and old apps are often on autopilot with low performance requirements, a great fit for a serverless database.)
  • They may not get any quick adoption – it takes years for a service like this to catch on. (Azure SQL DB is a great example – it’s 8 years old now.)

Update 2019/05/06: Azure SQL DB Serverless is out.

Microsoft will fix the “String or binary data would be truncated” error.

This is the riskiest prediction out of all of ’em.

Oh I know what you’re thinking: it’s been the top-voted user request for over a decade, and as soon as Microsoft dumped Connect and switched to a new user feedback system, this request immediately bulleted to the top of the list again, getting almost 3x more votes than the #2 issue!

And yes, someone from Microsoft recently commented on it:

Much better than passive-aggressively looking at it

All I can say is:

Update 2019/03/20: this error is fixed in SQL Server 2016 SP2 and SQL Server 2017 with trace flag 460.
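
If you want to see the fix in action, here’s a minimal repro – assuming a build with the fix (SQL Server 2016 SP2 / 2017 with trace flag 460; newer versions turn the improved message on by default):

```sql
-- A table with a deliberately-too-small column:
CREATE TABLE #TruncDemo (Name varchar(5));

-- The classic, unhelpful Msg 8152:
-- "String or binary data would be truncated."
INSERT INTO #TruncDemo (Name) VALUES ('Brent Ozar');

-- With trace flag 460 on for this session, you get Msg 2628 instead,
-- which names the table, the column, and the offending value:
DBCC TRACEON (460);
INSERT INTO #TruncDemo (Name) VALUES ('Brent Ozar');

DROP TABLE #TruncDemo;
```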
