In this article, I’m going to delve into the topics of MAXDOP and cost threshold for parallelism. 

I’m writing it for two reasons:

1. There was a recent thread on SQL Server settings on the NAVUG list. Some of the information was good. But much of it was expressed in terms of “Well, I heard…,” which makes me nervous. You should have a basic understanding of what various settings do before you start taking anyone’s advice. (This isn’t to make me look superior. I’ve had plenty of times in my life where I (or my team) have been caught in the “let’s give this setting a try” loop — and it’s not pretty.)

2. I’m preparing a presentation on SQL performance for NAV. While I’ll be focusing mostly on improving BI query performance (which is where I spend a lot of my time), I do get asked about server settings, so I think it’s a worthwhile topic.

In writing this post, I’m standing on the shoulders of, if not giants, then many SQL Server MVPs. 

I can only justify “one more post” because I know that the folks who are responsible for SQL Server are often responsible for many other things. So, if I can provide an explanation that helps these folks (and helps me when I need to explain it again), it’s a useful exercise. 

What is MAXDOP?

MAXDOP stands for Maximum Degree of Parallelism. 

In short, it tells SQL Server how many cores of your processors any given task can use.

Let me explain it this way: Once upon a time, every computer had one CPU. As a result, it could only run one process at a time and every other process had to wait its turn.

Today, modern computers can perform multiple calculations at the same time. Even my somewhat decent laptop has multiple CPUs/cores.*

SQL Server is no exception. Certain kinds of work performed by SQL Server can benefit from being split into parts, run simultaneously (in parallel), and then reassembled at the end for the user.

MAXDOP controls how this works. It tells SQL Server whether it is “allowed” to split tasks into different pieces, use multiple cores at the same time and then reassemble the results.   

If you set MAXDOP to 0, you’re telling SQL Server that it can do whatever it wants. If a query comes along and SQL thinks it would benefit from using all 32 cores on a system, then that’s its prerogative. This is both the default setting and a bad idea.

You can imagine why that setting might not be a good idea. What if all your users go to lunch and one guy sneaks in with a query from hell? A query that reads the entire database over and over again?

And because you set MAXDOP to 0, SQL Server decides, “Sure, I’m not doing anything right now, go ahead — have all my cores!”  

When your users return from lunch, SQL Server has nothing left to give them.

I’ve exaggerated this case to make my point. Like ice cream, you can have too much of a good thing. Adding lots of cores to a given query can actually slow things down, which is kind of like real life. As a colleague of mine once said, “If I get any more help, I’m not going to get anything done.”

In contrast, if you set MAXDOP to 1, you’re telling SQL Server that you don’t care how much a query could benefit from multiple cores. SQL will use one core, and only one core, and be happy with it. You get what you get and you don’t get upset.  
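Incidentally, if you want to see where your server currently stands, both of the settings discussed in this post (MAXDOP and the cost threshold we’ll get to shortly) are visible in the sys.configurations catalog view. A quick check looks like this:

SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');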

MAXDOP and NAV

As I’ve already discussed, the default setting of 0 is generally bad. (Indeed, Microsoft’s own advice tells us not to leave it set at 0.) 

For NAV, you want to set it to 1, meaning that you aren’t going to let any query use more than one core. This, again, follows Microsoft’s recommendation.
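For reference, here’s what that change looks like with sp_configure. (MAXDOP is an advanced option, so you have to show advanced options first. This is the standard pattern, but as with any server setting, try it somewhere safe before production.)

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;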

Now why would Microsoft make that recommendation?

It has to do with the nature of the NAV workload. In general, NAV makes lots of small requests to the database — whether it’s a user entering data one piece at a time or updating a set of posted ledger entries. In both of these circumstances, SQL simply doesn’t benefit from parallelism. 

Keep in mind that what’s happening on the screen in NAV doesn’t exactly correspond to how NAV is querying SQL Server. No matter what it looks like, NAV still issues many, many small transactions. That’s just the nature of an OLTP system. And because the system can’t split these transactions into multiple pieces effectively, it’s better not to try.  

Cost Threshold for Parallelism

But what if you set MAXDOP to something other than 1?

How does the engine decide what kinds of queries are eligible for parallelism?

I mean, if you just want to look up a vendor address by vendor number, SQL Server shouldn’t even think about going parallel. But where’s the cutoff?

Here’s how it works: 

Every query in SQL Server is assigned a cost. The cost is the engine’s guess at the amount of effort a query will take to execute. This is just a number — and the bigger the number, the harder the query is expected to be on the system. The tipping point from NOT using parallelism to USING parallelism is the cost threshold. 
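You don’t have to take the engine’s word for it, either. One way to see the number (a quick sketch to run in SSMS; the table name is just a placeholder) is SHOWPLAN_ALL, which returns the estimated plan, including a TotalSubtreeCost column, without executing the query:

SET SHOWPLAN_ALL ON;
GO
-- Nothing here actually runs; SQL Server returns the estimated plan,
-- and the first row's TotalSubtreeCost is the query's total cost.
SELECT TOP 10 * FROM [dbo].[YourTable];
GO
SET SHOWPLAN_ALL OFF;
GO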

I like to think of cost threshold as SQL Server’s tolerance for queries. Kind of like my tolerance for certain kinds of complaining users.

If you set the cost threshold low, it takes very little for the engine to start splitting queries into pieces to run in parallel. If the query would benefit even a little bit from running in parallel, then it will start using an extra core. 

If you set the cost threshold high, the query would have to really, really benefit before it gets another core.

Let’s illustrate with a quick example. Say I’ve set my MAXDOP to 4 and my cost threshold to 10. (Generally, you’d go higher, but I didn’t want to spend a lot of time building queries to prove the point of this example.)
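If you want to follow along, that setup is two sp_configure calls (assuming advanced options are already on, as shown earlier; and again, this is a sandbox experiment, not a production recommendation):

EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 10;
RECONFIGURE;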

We start with the super simple query of selecting a vendor from the vendor table. 
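Something like this, say, where the table and column names assume the standard CRONUS demo database (NAV prefixes each table with the company name, so adjust for your own):

SELECT [No_], [Name], [City]
FROM [dbo].[CRONUS International Ltd_$Vendor]
WHERE [No_] = '10000';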

We look at the execution plan and see that the cost is negligible, well below our threshold of 10, so the degree of parallelism stays at 1.

With a more complex query, the cost goes up, and once it exceeds the cost threshold, the degree of parallelism goes up with it.
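For instance, an aggregation that scans a big ledger table is the kind of query that can cross the line (again using the hypothetical demo-company table names):

SELECT [G_L Account No_], SUM([Amount]) AS [Total Amount]
FROM [dbo].[CRONUS International Ltd_$G_L Entry]
GROUP BY [G_L Account No_];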

If I change my cost threshold to something more reasonable, such as 40, and run the same query, parallelism is no longer triggered. The query’s cost hasn’t changed; it just no longer clears the bar.

Again, you can think of cost threshold as a tolerance for complaints. 

In some offices, as soon as someone complains they get attention. In others, no one pays any attention until someone’s in real pain.

It all depends on the office’s threshold for complaints. 

Administrative Jobs – DBCC CHECKDB, Rebuilding Statistics

With every rule, there’s an exception — even for NAV. 

If you’re not running 24/7 and have an overnight or weekend maintenance window when you can run administrative jobs, those jobs can benefit from using multiple cores. (DBCC CHECKDB, which checks database integrity, is a classic example.)

And here’s the good news: you don’t have to change server parameters to do it. You can specify MAXDOP within the command itself:

DBCC CHECKDB (DynamicsNAV90) WITH MAXDOP=4

Check out this Microsoft post for more info. Setting MAXDOP to a larger number can really help your job complete faster. 
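Index rebuilds accept the same kind of per-statement override. A sketch, once more with a hypothetical demo-company table name:

ALTER INDEX ALL ON [dbo].[CRONUS International Ltd_$G_L Entry]
REBUILD WITH (MAXDOP = 4);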

Final Takeaways

In summary: 

  1. Don’t accept the defaults that come with SQL Server installation.
  2. The NAV workload is OLTP (lots of small transactions), so set MAXDOP to 1, as Microsoft recommends.
  3. If your administrative jobs would benefit from running on multiple cores, you can fix those without changing the overall server setting.

* In my first job, in addition to programming, I had to fix hardware. I’ve hated hardware configuration ever since. So I’m not going to go into a deep discussion of CPUs, cores or NUMA nodes. I just need to know enough to make my queries run faster.
