Be careful about Takeaways

Late last week, a coworker sent me a blog post on Clustered Indexes and Heaps, titled “Unreasonable Defaults: Primary Key as the Clustering Key.”  It’s a blog post that, well… just go ahead and read it, and come back. I’ll wait.

What did you take away from that blog post?  What really jumped out at you?

For the first three reads (yes, I read it more than three times), here’s what jumped out:

  • Primary Keys are bad at being clustering keys because the data that we need is hardly ever sequential by ID (I assume doubly so if the primary key is a GUID).
  • It’s better to make the clustering key something that actually puts the data that would most likely be accessed together on the same page.
  • Heaps are better than having a Clustered Index if you don’t have a covering (Non Clustered) index.

Since I went to the crash course on SQL Server Performance Troubleshooting last May (H/T to the Brent Ozar Unlimited folks), I know a lot more than I did before about SQL Server (I didn’t know very much before, and the 3-day intensive class traded money for time; it was really worth it), so I’ve got an understanding of what this blog post is really about, but it clashed with what I know about SQL Server.

The post didn’t really spell out those things that it probably should have, and that worried me:

  • A covering Non-Clustered Index is usually the best approach to get data quickly.  The smaller this index (e.g., fewer columns, or smaller data types), the better performance you’ll see.  If you’re retrieving lots of columns with large or unknown sizes (nvarchar(max)) and you’re filtering by your Clustering Key alone, a Clustered Index is going to be fast.
  • Heaps are bad, MMkay?  Well, that’s not entirely accurate.  They’re great when you just need data any old place and don’t really care about retrieval in an orderly (or fast) fashion.
  • SELECT * is bad because it doesn’t allow you to use your beautiful non-clustered index alone; it forces use of the Clustered Index (if one exists) and does not allow for the fastest possible data retrieval (Unless you really need everything, in which case, does your App look like this?)
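To make the first and third bullets concrete, here’s a hypothetical T-SQL sketch. The table, index, and queries are made up for illustration; they aren’t from the post being discussed:

```sql
-- A table clustered on its primary key (the default the post argues against).
CREATE TABLE dbo.Orders (
    OrderId    INT IDENTITY PRIMARY KEY CLUSTERED,
    CustomerId INT            NOT NULL,
    OrderDate  DATE           NOT NULL,
    Total      DECIMAL(10, 2) NOT NULL,
    Notes      NVARCHAR(MAX)  NULL
);

-- A covering non-clustered index: it contains everything the query
-- below needs, so SQL Server never has to touch the clustered index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (Total);

-- Served entirely from the small covering index:
SELECT CustomerId, OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;

-- SELECT * drags in Notes, which isn't in the covering index,
-- forcing a lookup back into the clustered index:
SELECT *
FROM dbo.Orders
WHERE CustomerId = 42;
```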

It mentioned these things, to be sure (well, the first and third bullet), but it didn’t give them the boldface treatment they deserved, and it didn’t explain them in a way that a non-DBA would come away with the same understanding as someone who spends their days tuning SQL.

The takeaway is this: whenever you’re reading a blog post on performance, context matters.  Authors will assume knowledge you may or may not have.  They will assume context that may not fit your business needs.  The most important takeaway you can have is this: never make a change on the basis of a demo alone.

Note: the Brent Ozar Unlimited team is giving the SQL Server Performance Troubleshooting class again this year. I highly recommend it. It doesn’t matter if you’re a fifth year DBA or a Developer trying to level up your SQL Server game — it’s more than worth your time. If you have a conference budget this year, forget a conference — pick a training class and do it.

Full Disclosure: Brent Ozar took a look over this post to make sure I wasn’t talking out of my ass. I thank him for that. I endorsed their class before he agreed to fact-check this post for me.

My Epic Life Quest Goals for 2015

Brent Ozar has an Epic Life Quest.  He’s on Level 5.

I’ve spent most of my professional career wandering aimlessly, working on whatever excited me, going from point A to point B, but never actually planning out my path. I’ve always been a “Yea, that sounds good, let’s go” sort of developer.

I’m not at a point where I can do that anymore.  I could coast if I wanted to — but I don’t. I want to double down and be the best I can be.

They say that up until 30, you make your habits; after 30, your habits make you.  I’m challenging that.

To that end, here are my Level 1 Goals:

  • Finally launch Stack Stats (seriously, I mean it this time)
  • Give a User Group Talk on SQL Server Disaster Recovery in AWS
  • Give a User Group Talk on ASP.NET vNext
  • Give 3+ Dev talks at work
  • Complete and Launch my first IoT project
  • Blog (at least) once a Week for 52 weeks
  • Get back down to 160 lbs (currently 179) and stay there.

Challenge Accepted.

Gotchas with Windows Services

In trying to deploy a Windows Service using MSBuild, I learned a few neat things about them.

You can stop, start, create, or delete a Windows Service through the administrator prompt command line:


sc create "AWindowsService" binpath= "C:\Weee\Service.exe"
sc delete "AWindowsService"
sc start "AWindowsService"
sc stop "AWindowsService"
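You can also inspect a service from the same prompt: sc query shows its current state, and sc qc shows its configuration, including the binpath (both are standard sc subcommands; like everything above, they’re Windows-only):

```
sc query "AWindowsService"
sc qc "AWindowsService"
```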

You can also do this for another server in your domain by prefixing the command with the server’s name in UNC form:


sc \\ServerName create "AWindowsService" binpath= "C:\Weee\Service.exe"
sc \\ServerName delete "AWindowsService"
sc \\ServerName start "AWindowsService"
sc \\ServerName stop "AWindowsService"

When creating a Windows Service, you can decide how it should start up; for an existing service, sc config changes its start type.

Common options:

sc config AWindowsService start= delayed-auto
sc config AWindowsService start= auto
sc config AWindowsService start= demand

Much like other command prompt commands, the name of the service needs to be in quotes if it has spaces in it:


sc start "A Windows Service"
sc start AWindowsService

The space after each option’s equals sign is required; write binpath= and start= with a trailing space before the value:


sc \\ServerName create "AWindowsService" binpath= "C:\Weee\Service.exe"
sc config AWindowsService start= demand

If you’re trying to delete a service, you can’t have services.msc running, you can’t have the service’s properties window open, and you can’t have Process Explorer running either (h/t to “StingyJack” in this Stack Overflow answer).

If you create a service with the wrong binpath, it won’t be able to start; you’ll have to correct the binpath or delete and re-create the service.
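If the service already exists, you can correct the binpath in place rather than re-creating it (the path here reuses the example from above, and the same space-after-equals rule applies):

```
sc config AWindowsService binpath= "C:\Weee\Service.exe"
```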


The Time I Tried to Deploy a Windows Service using TFS and MSBuild

Over the course of the past week(!), I’ve been attempting to deploy and install a Windows Service using MSBuild (Specifically as driven by TFS, but it’s all MSBuild under the covers). This blog post details what I’ve tried, what didn’t work, and what the problem is (and how I solved it). It also details all of my research on the topic.

If you can see something I’m missing, please leave a comment. For the rest of you, consider this guilt-free schadenfreude.

Here are the constraints:

  • Deploy a Windows Service to a Windows Server joined to the Domain (Same domain as the Build Server)
  • Ensure that the service is created (if it doesn’t exist), and started after the deployment process is complete.
  • Able to deploy through TFS using MSBuild
  • The Windows Service is kept in a standard directory on the target server (for our purposes, let’s say it’s E:\Services, or UNC path: \\DeployedServer01\E$\Services)
  • Do not change the existing code for the Windows Service (e.g., put it in an installer, or make it self-installing).

The problem I’m running into is “destination directory is null or cannot be accessed.”

Here’s what I’ve done so far.

I’ve created the MSBuild project (called deploy.proj) as laid out in this blog post.

I also have the .bat files necessary to create, delete, stop, and start the service that are listed in this gist.

Aside: Neat Things I learned about Windows Services

I used the following MSBuild command to run the build (actually, I ran it through TFS, but if I were running it from the command line, it’d look like this):

msbuild deploy.proj /p:DeploymentServer="DeployedServer01" /p:DeploymentServerFolder="\\DeployedServer01\E$\Services"

Unfortunately, this did not work. The files simply would not copy to the output folder. My next step was to determine why; to do so, I needed to know which user MSBuild was running the build as.

To debug this issue, I found this neat set of debugging targets for MSBuild:

<Target Name="DebuggingTFSBuild">
    <Message Text=" MSBuildProjectDirectory = $(MSBuildProjectDirectory)" />
    <Message Text=" MSBuildProjectFile = $(MSBuildProjectFile)" />
    <Message Text=" MSBuildProjectExtension = $(MSBuildProjectExtension)" />
    <Message Text=" MSBuildProjectFullPath = $(MSBuildProjectFullPath)" />
    <Message Text=" MSBuildProjectName = $(MSBuildProjectName)" />
    <Message Text=" MSBuildBinPath = $(MSBuildBinPath)" />
    <Message Text=" MSBuildProjectDefaultTargets = $(MSBuildProjectDefaultTargets)" />
    <Message Text=" MSBuildExtensionsPath = $(MSBuildExtensionsPath)" />
    <Message Text=" MSBuildStartupDirectory = $(MSBuildStartupDirectory)" />
    <Message Text=" " />
    <Message Text=" Environment (SET) Variables* " />
    <Message Text=" --------------------------- " />
    <Message Text=" COMPUTERNAME = *$(COMPUTERNAME)* " />
    <Message Text=" USERDNSDOMAIN = *$(USERDNSDOMAIN)* " />
    <Message Text=" USERDOMAIN = *$(USERDOMAIN)* " />
    <Message Text=" USERNAME = *$(USERNAME)* " />
</Target>

I invoked the target using:

msbuild deploy.proj /t:DebuggingTFSBuild

From this, I found out that a Domain user account had been set up to run the build. I then gave that user “Full Control” permissions on the share.

Running the build again, it again failed.

So I did what anyone would do: I escalated. I added that account to the “Local Administrators” group for that computer, re-ran the build, and it failed again.

I spent some more time trying to debug this, specifically trying to use RoboCopy and XCopy in conjunction with that solution; both failed with permissions issues.

To be clear, here’s where I was at this stage:

  • Was able to successfully Delete, Create, Stop, or Start the service during the course of the build (as outlined in the above solution)
  • Was able to have TFS ‘drop’ the executable and related files to the target server (a drop location in the same share, but different directory)
  • Was not able to copy the ‘drop’ folder’s files to the static service location

On to Solution #2 (or Solution #4, depending on how you count the permutations): copy the files to the fixed location using TFS.

Of course, this didn’t work either. I received the error “TF270001: Failed to copy. The destination directory is null or cannot be accessed.” Have you ever tried a Google search for this issue?

Of course, TFS doesn’t provide any easy way to check the directories, so I had to output the values of the variables into the MSBuild log (this can also be done through an Activity Message in the TFS Template builder).

After double (and triple, and quadruple) checking the source and destination directories, I scrapped this approach, and followed the advice in this Stack Overflow post. At this point, I just wanted to get it to work, enter Solution #5: Hack it.

I used pieces of the original solution, but instead of relying on the agent’s account, I created a Domain Account that had permissions, and then used naked <Exec Command="" /> tasks to run the commands needed to copy the files:

<Exec Command="IF NOT EXIST P: net use P: \\DeployedServer01\E$\Services /user:domain\deployUser passw0rd" />
<Exec Command="xcopy &quot;$(OutDir)*.*&quot; P:\* /y" />
<Exec Command="net use P: /delete /y" />

This solution works: It bypasses whatever Windows/Domain authentication issue the build agent user was having, and it ensures the files are copied to the fixed location.
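For reference, here’s roughly how those Exec commands might sit inside a deploy target. This is a sketch, not my exact project file: the CopyToFixedLocation name and the DeployUser/DeployPassword properties are illustrative additions (the idea being that credentials get passed in via /p: switches rather than hard-coded into the project):

```xml
<!-- Sketch: the net use / xcopy hack wrapped in a target. -->
<Target Name="CopyToFixedLocation">
  <!-- Map the share to a throwaway drive letter using the deploy account. -->
  <Exec Command="IF NOT EXIST P: net use P: \\DeployedServer01\E$\Services /user:$(DeployUser) $(DeployPassword)" />
  <!-- Copy the build output to the fixed service location. -->
  <Exec Command="xcopy &quot;$(OutDir)*.*&quot; P:\* /y" />
  <!-- Always unmap, even if the copy failed. -->
  <Exec Command="net use P: /delete /y" ContinueOnError="true" />
</Target>
```

Invocation would then look something like: msbuild deploy.proj /t:CopyToFixedLocation /p:DeployUser=domain\deployUser /p:DeployPassword=passw0rd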

Honorable mentions:

  • MSDeploy: I attempted a solution using MSDeploy, but some of the parameters didn’t make sense, so I didn’t spend much time on it (notably the IISDeployPath — why would I want such a parameter for a Windows Service?)
  • Using mklink to symlink the drop location to the static path. This ran into UNC Path issues that were a pain in the ass (and ultimately unresolvable).
  • Using InvokeProcess to try to CopyDirectory in TFS Build.

In the “Things that should be Apparent but aren’t” category:

  • Why does TFS make it so hard to get the “Drop Location” directory? $(OutDir) refers to the Build server’s build folder, *not* the drop location. Since we are on TFS 2012 (and not 2013), we can’t use any of the ‘nifty’ variables that are in 2013.
  • Why isn’t dropping to a fixed location considered a standard deployment task?
  • Why does the MSBuild Copy task have such poor debugging? Even at Detailed verbosity, that task produced no useful information.
  • Windows Workflow and TFS Build Templates: Truly “Enterprise-y”
  • Why is it so hard to debug authentication issues on TFS? Legitimately, once the user had permissions to those folders, it should have worked. I can only wonder if the user TFS said it was running as was not in fact the user it was running as (although that seems weird, since the build agent had permissions to the drop folder).

Miscellaneous notes on MSBuild and TFS Build:

All in all, I spent 5 days, 70 commits, and 70+ builds to get this to work. I still don’t know why the ‘recommended’ solutions failed, or how to get MSDeploy to play nice with MSBuild and TFS for a Windows Service. What I do know is this: it shouldn’t take this much effort to deploy a Windows Service. It should be an ‘out of the box’ feature for TFS, alongside more useful debugging for when it doesn’t work.

Your Constraints Define your Success

Recruiting good developers is hard.  One of the hardest things for me to learn while hiring developers was that who I worked for, and our toolset, defined which cross-section of developers would apply. I ended up rejecting all the candidates that came across my desk because I didn’t understand that fundamental rule:

Your constraints define who you can recruit.

In my particular case, I worked for a government contractor that worked on a Winforms project for a large government agency.  Our build system was TFS.  The best thing we had going for us was that we were working in C#.  A secret clearance (or the ability to get a secret clearance) was necessary just to step in the door; and that further limited the programmers we were able to hire.

Looking back, it’s really hard to believe how naive I was about this.  Think about today, if someone wanted to hire you for the following:

  • Large Government Contractor
  • A Winforms Project that uses .NET remoting
  • Version Control and build system is TFVC and TFS
  • Bug tracking system is some enterprise software you’ve never heard of
  • You need a secret clearance
  • The project itself is uninteresting
  • 3 month release cycle

Would you take the job?

All other constraints (benefits, pay) being equal, you probably wouldn’t.  In fact, let’s say you had to choose between this job at $120K and the following job at $87K:

  • Small, private company with worldwide brand recognition and a purpose-driven mission.
  • Working on many different projects across languages and frameworks
  • Whatever tools you need to get your job done
  • A polyglot platform shop
  • Git for version control, Octopus deploy / Fabric for releases
  • You pick your developer set-up
  • Agile; continuous delivery

For me, at least, that jump is worth it.  Being well paid but working in a dead-end technology and not having the ability to learn new things just doesn’t compete.

More so than ever before, our developer tools are evolving at a rapid clip. Just a few years ago, git was still used only by the bleeding edge of developers; now it’s being used by multiple teams at Microsoft.  Microsoft even added support for Git to TFS in early 2013.

The quality of developer you’re able to recruit is inversely proportional to your constraints. If your environment exudes an atmosphere of “We can’t change”, then you won’t attract good developers; at best you’ll attract developers who are comfortable with your toolset (unnecessarily limiting the talent pool).  The lower the constraints on what your developers are allowed to use, the broader your talent pool.

Today, Webforms has already fallen out of mainstream popularity, .NET 3.5 is giving way to .NET 4.5, and TFS even loses to GitHub for teams that work at the company that made TFS!  If you’re still targeting your team at what was popular in 2011, you’re going to miss what’s next.

If you’re the manager of developers, you may not stay up to date with the latest trends. That’s ok, because developers are pretty ornery creatures. We’ll tell you when we think something is broken.  If your developers are complaining about your tool chain, that’s a sign that the tools don’t meet their needs.  Instead of doubling down (because of sunk costs), ask them why, and be prepared for the answer. It may not be the cheap answer, but it is what they believe.  If you give your developers the latitude to fix problems in code, why not give them the latitude to fix problems in their tools?

Your Constraints define who you retain.

I’m not saying you should chase technologies (the JavaScript framework du jour, for instance), but you should keep an eye on trends; where the broader programmer community goes, you should as well.

Staying current with tech trends isn’t just about recruiting and retaining your developers, it’s also about making sure they’re staying up to date with their peers.  As the story goes, a CFO asks the CEO, “What happens if we invest time and money to train our people and they decide to leave?” To which the CEO responds:  “What if we don’t train our people, and they decide to stay?”

There are some things you just can’t change: If you work for a large corporation or government contractor, you can’t change that.  What you can change is your internal developer culture. Keep it fresh.  If you’re still using Perforce or TFS and your developers are complaining, consider moving to Git. Both TFS and Perforce support git now (I wonder why that is?).  If you hear complaints about your work item tracking system, consider moving to something less painful, like Trello.

The more core a technology is to your system, the harder it is to change.  The great thing about your toolchain is that, unless you have a pretty messed-up workflow, it isn’t core to your business. It’s a set of tools. Tools can change (and they should be able to change pretty easily).

Your constraints define how well you can compete.

If your business isn’t willing to change to embrace new technologies, your competitors may be willing to. While you’re dealing with branching and merging issues with Perforce or TFS, your competitors are easily able to deploy new features and handle complexity.  Every moment lost because your toolchain doesn’t meet your needs is another moment you lose to your competition.

If you want to recruit the best developers, keep your best developers, and compete in the marketplace, remember, your constraints define your success.