Some of you may have read how I feel about smartphones. For the past few months I’ve gone back to using my old Sony K850i, which on one level has been great. I went through my stash of phones and that 2007 model got the best reception – you know, like they remembered it was a phone. I then remembered why a feature phone – or at least an older one – can sometimes be annoying. You start carrying around additional chargers for proprietary ports (imagine yay in 1pt font) and the battery life on 3G was/is terrible. Standby is great, but if you talk, you’re lucky to get 4ish hours. For the past few months I’ve been honestly looking for a good, normal phone with great reception. Sadly, nothing – and I literally mean that – has fit the bill.
Regardless of form factor, I just want a good phone. The old Xperia X1 was a good phone even though it ran Windows Mobile. In the past, I had two bad Nokia devices and swore off them, but I heard so many good things about the Lumia 620 and it’s the size I kinda like for a phone, so I figured I’d give it a whirl. Its selling point to me: loud speakerphone. It’s true. The 620 arrived this afternoon and a few hours in, I want to smash it against the wall. Why?
Annoying Factor One: No Battery Indicator Unless You Are On the Lock Screen
So you’re probably saying, “But Allan, if you had done your homework on WP8, you would have known that.” Fair point. But it’s asinine. Every mobile OS at a minimum shows the battery icon at the top all the time by default. I know iOS does it, and all of my Android devices have. This piece of junk? Nope. There’s an app for that, which you can download and pin to your Start screen. I’ll get to the app process in a bit.
How can Microsoft ship a mobile OS with such a fundamental piece “missing”? Well, technically it’s there … just not when you want it. Jorge Segarra (blog | Twitter) pointed out to me that you can swipe down to make the battery indicator appear, but it disappears again. How hard would it be to make an option to display it 24×7? Not very, I would think. Fail.
I think my old Motorola StarTac just called and laughed at the Lumia 620.
Annoying Factor Two: Changing Your Regional Info
The unlocked Lumia 620 I got was apparently from Hong Kong. This is not a big deal. On Android, if you want to change language settings, you just do it and it takes. The punchline on WP8? You need to reboot for it to take effect if you want to go from Hong Kong to US English for how you display date, time, etc. Unbelievable. What year is this?
Annoying Factor Three: Apps, Accounts, Contacts, and Synchronization Choice
To solve said battery indicator issue, I figured I would download an app, which needed a Live/Microsoft account. I’ve got one of those I’ve had for years and years. Since my data is off, I used WiFi to connect to the store. So far, so good. Here’s where we go off the rails. Now, I’m not an app guy. I largely couldn’t care less. These are phones for making calls. The only app I used for like 5 minutes when I was on WiFi was Draw Something. I had the American Airlines one, but meh – I’ll stick to regular boarding passes. I’ll kill a tree or 10 in my lifetime.
Anyway, so I was trying to find something to show my battery life and needed to use my Live account. I enter my info, find an app, and download it. Little did I know that behind the scenes it took everyone in that IM list and added them to my contacts. WTF? Apparently, all new mobile OSes do this silly sync thing without asking you if you want it. On the Android phones I never had that issue because I don’t use Gmail.
The story gets better. I had already imported contacts from my SIM card. That’s all I wanted. There is a filter button, but if you click that, all of the contacts go away. In WP8 there is NO WAY TO JUST HAVE SIM/PHONE CONTACTS. Another WTF here. Even Android can do that. But we’re not even at the best part.
The only way to fix this? Remove the Live/MS account. Only way to do it? Reset the phone and lose everything. Yup. Epic fail here. On the Android devices I’ve owned and used, I could remove that account used for apps and go all native and totally disconnected without basically wiping the phone. Terrible, terrible experience. One of the worst I’ve had with any piece of tech in recent memory.
I don’t want to sync jack with the cloud or accounts. I want to use my SIM contacts and have this be a phone. I should be able to have that experience but apparently not.
Annoying Factor Four: MMS Messages
I hate texting but people sometimes feel the need to send me a text with a picture (another 1pt font yay). I don’t have a text plan so I’m already annoyed as is. Android and WP8 devices function the same – to get these infernal things, I have to turn data on. With a feature phone, they just show up with nothing special. I was wondering if WP8 would be the same as Android, and sure enough, it is.
Annoying Factor Five: The Start Screen
I’ve been running Windows 8 (the OS, not the mobile WP8) forever. I even have a touchscreen device (the Sony Vaio Duo). I like it. So the Metro/Modern thing isn’t a nemesis to me. But the Start screen on WP8 leaves a bit to be desired. I can’t put my finger on exactly why it works on the desktop OS but not on the mobile one. And no, I’m not looking for a desktop or file explorer on my phone.
So Where Does This Leave Me?
I don’t know. I’m only a few hours into the WP8 experience and I want to bash my head against a wall. Sadly, as dysfunctional and fragmented as Android is, it just kinda works and makes more sense to me. I’m far from a power user of phones, but WP8 is supremely dumbed down. You’d think WP8 – which is, let’s face it, targeted at people more like me who hate smartphones but can’t really live with a feature phone – would work for me, but I’ve had such a negative reaction to it. The whole sync/contacts/reset thing really bothers me.
With WP8, many settings seem to be obscured. Android tries to obscure some stuff (for example, if you had to manually set up AT&T info), but you can ultimately get there. I’m not sure you can do that in WP8, nor am I sure I want to find out. I’m not getting an iPhone, the Android devices are better in terms of OS for my tastes but the smaller devices are not great phones and I don’t want a device above 4″ in screen (sadly, the Galaxy Note had great signal strength and could be a good phone; I don’t want a phablet attached to my head). Nokia’s non-WP Asha devices? I’ll pass.
Maybe I’ll just move to a cave and not have to worry about any of this. Or not. Time will tell. In the meantime, Ben will happily use his WP8 phone (he doesn’t like Android, BTW). Short of buying a Vertu Ascent, I’m probably screwed. I shouldn’t have to spend stupid money to get a basic phone that is newer than 2008, works, and has a good signal. Oh, and isn’t a flip phone. Hate them, too.
Congratulations, folks! It’s official – as I predicted in my Availability Groups FAQ just over a year ago, the term AlwaysOn has now become the new active/passive and active/active, and it annoys me just as much if not more. Every time I see it misused I want to proverbially stab myself in the eyes. I really tried to avoid writing this post. I truly did. But people have forced my hand because it’s become like using ur for your, witch for which, etc. – in other words, it’s just wrong and people need to start using the right words.
I swear people should be given a test: if you can’t use the right terminology, you can’t use the feature or speak on it. It’s ignorant otherwise and you’re perpetuating problems. This even goes for people I truly admire and respect – like Bob Ward (one of the best and most knowledgeable people about SQL Server around PERIOD), who recently wrote what is an otherwise great blog post on availability groups, but it’s marred by literally using the wrong term (see the next paragraph for the spelling of it; there is no space) the ENTIRE post. MS doing it (be it CSS or a PM) is especially egregious since it helps perpetuate the bad. ARG!
So what is AlwaysOn (one word, no space between Always and On – another common mistake)? It is a marketing term that covers two features in SQL Server: availability groups (AGs) and failover cluster instances (FCIs). So their official names are technically AlwaysOn availability groups and AlwaysOn failover cluster instances (or failover clustering instances). Don’t believe me? Click here. A few of us had a spirited debate on Twitter awhile back, which I think prompted Jonathan Kehayias’ (blog | Twitter) blog post, which predates this one. Part of the reason we had a friendly debate is that depending on who you ask over in SQL Server, AlwaysOn is either just AGs and FCIs, or to some, it covers all features that remotely have to do with availability. Either way, it still does not equate to AlwaysOn = the availability groups feature.
As with the A/P and A/A, let me give you a bit of history. The availability groups feature in SQL Server 2012 went through a few name changes over the development of the product. It had a total of four (not necessarily in this order):
- HADR
- HADRON
- AlwaysOn (yes, your eyes do not fail you)
- Availability groups
Let’s start with the first two. If memory serves me correctly, HADR was the first name for the feature, which then morphed into HADRON. HADR was around for a long time. HADR was around so long that there are still remnants of it in SQL Server (such as the AG DMVs) that are not being renamed. Truth be told, I liked HADRON. It is easy to remember (a key hallmark for anything marketing) and it rolls off the tongue easily. However, switch two letters and you have a disaster waiting to happen. For those with delicate sensibilities, I apologize, but let’s face facts. I can see why they probably didn’t go down that road. To add to the comment about Bob Ward, take a look at the titles of some of the CSS blog posts. (As an aside, you should bookmark that CSS blog site. They always post interesting stuff.) Many still use HADRON. They’re not helping any here.
That brings me to the dreaded term: AlwaysOn. Yes, it was the feature name for AGs for what seemed like five minutes, but it’s the one that, for some reason, most of the SQL dev PMs latched onto and started using in things like presentations, so I think that’s why it stuck. I don’t remember when marketing co-opted the term and made it cover the features of availability groups and FCIs, but it happened. I think that was much later in the dev cycle, which I think is also why the damage was done and somewhat irreversible.
The term AlwaysOn (with or without a space) has a long history with SQL Server going back nearly 10 years, which adds to the confusion. The term Always On (with a space) was introduced in SQL Server 2005 to brand a program to certify storage vendors with SQL Server (example: Dell’s whitepaper; EMC and Hitachi also have similar ones). In SQL Server 2008, Always On (with a space) was used to cover availability in general – much like how it’s SUPPOSED to be used in SQL Server 2012. Don’t believe me? Here’s proof.
I leave you with this: STOP USING ALWAYSON IF YOU ARE REFERRING TO AVAILABILITY GROUPS. Please use the right terminology. Just as active/passive and active/active are incorrect for FCIs, AlwaysOn is wrong for AGs.
In the immortal words of Bartles and James, thank you for your support.
Last fall, I got my first new laptop in about 18 months which is an eternity for me. For a bit of my history see my blog post Laptops of Doom. The Panasonic CF-J10 served me well, but still wasn’t perfect. At 10.1″, 16GB of RAM, a 1TB SSD, 1 x USB 3.0 (albeit slow), a Core i7-2620M, and decent battery life with weight under 3lbs, you’d think I’d be happy as a clam. Even with the 1TB of SSD I had in there, it wasn’t enough room and the USB 3.0 was slow compared to other machines, which led me to believe it has to do with motherboard design (boo). It was starting to act up on me (trackpad not functioning right), so I knew I needed an interim solution. Enter the Sony Vaio Duo 11 (which I will do a review of at some point). I love the machine but its main weak link is only 8GB of memory, and it can’t be expanded since it’s basically soldered onto the motherboard. Welcome to 2013.
For the deliveries of my mission critical/high availability class last year in Australia, I took both machines with me – I was still well under 6lbs. I got an external mouse for the Panasonic and it was basically my demo box with Hyper-V and Windows Server 2012, and the Duo was my main presentation machine. One nice thing about the Duo is the pen input for things like virtual blackboarding. Even with the Core i7 ULV, the Duo really seems snappier than my Panasonic. To date, I still haven’t sent the Panasonic back to Japan because I need a backup computer. The Duo only has regular Windows 8 (not Pro), so I’ve been using VMware Workstation a lot like I have in the past. I’ve been able to do my demos with an external USB 3.0 SSD drive, but they’ve been limited. Bottom line is I use both Hyper-V and VMware Workstation, and need a presenter machine. Sometimes one machine isn’t enough, so that got me thinking (and also because I want to be able to send my J10 in for repair) – what can I do?
My idea – crazy as it is/was – was to find a small, yet portable PC, connect it to my Duo via a crossover cable, and use Remote Desktop to get in.
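For the curious, the networking half of that idea is simple on Windows 8 and Windows Server 2012: give each end of the cable a static IP on the same subnet, then RDP across. A rough sketch (the interface alias and addresses are assumptions – adjust for your machines):

```powershell
# On the SFF box (run elevated) - "Ethernet" and the addresses are assumptions
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.100.2 -PrefixLength 24

# On the Duo - same subnet, different host address
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.100.1 -PrefixLength 24

# Then connect from the Duo to the SFF box
mstsc /v:192.168.100.2
```

With gigabit NICs you usually don’t even need a true crossover cable, since auto-MDIX sorts out the wiring for you.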
Enter the small form factor (SFF) PC.
My weight requirements for schlepping still have not changed, and I am hanging out to see what happens once the Haswell-based laptops are introduced before even considering something new, but carrying around my Duo plus a small box running Hyper-V seemed somewhat reasonable if I can keep the solution portable. One reason I need this for some speaking engagements is that VMware Workstation allows me to show, say, Hyper-V and some scenarios like Live Migration easily. The problem is that you can’t easily run both Hyper-V and VMware Workstation at the same time under Windows 8 – you basically have to enable one and disable the other which you can’t really do mid-talk. The weight requirement has historically been difficult, since most SFF PCs with any power have been small by tower standards, but still kinda beefy and robust. That has changed recently.
Two boxes fit the bill: Foxconn’s AT-7000 series which you can get in Core i3 (AT-7300), i5 (AT-7500), or i7 (AT-7700) variants, and the Intel Next Unit of Computing (NUC), which currently only has a Core i3 (rumor has it they are going to introduce an i5 version soon). There are two main variants of the NUC – one that has ethernet built in (the DC3217IYE) and two HDMI outs, and one with one HDMI out and one Thunderbolt but no ethernet (the DC3217BY). With both, it’s basically a plug in your own RAM and storage and away you go.
There are some other technical differences between the AT-7000 series and the NUC, namely:
- The NUC has 3 x USB 2.0 ports (boo) and you would need an HDMI-to-VGA adapter if you wanted to hook it up directly to something where HDMI was not an option (and it often isn’t when presenting; VGA is still the de facto standard). The AT-7000 has 4 x USB 3.0, 2 x USB 2.0, and DVI and HDMI out (and they include a DVI –> VGA adapter in the box).
- Ethernet only comes with the DC3217IYE; you would have to get a USB one if you wanted ethernet for the DC3217BY. The AT-7000 series has it built-in, just like the DC3217IYE.
- The AT-7000 series also has microphone, headphone, and line out if you want such things (unnecessary for portable use), as well as a card reader (SD/MS/MMC).
- The NUC only has mSATA internally, whereas the AT-7000 can take a full 2.5″ drive.
You may be thinking that the AT-7000 is a slam dunk considering you can get your choice of processor and it has better port options. On paper, everything always looks better.
I ordered both the DC3217IYE and the AT-7700 with the intention of returning one of them (we’ll see if that happens …) with 16GB of Corsair Vengeance RAM (DDR3 1600 MHz [PC3 12800]), a Mushkin 480GB mSATA SSD, and for the AT-7700, a StarTech mSATA –> 2.5″ adapter enclosure. I got the DC3217IYE and RAM from Amazon (they had better prices at the time, and with the NUC, a better return policy), and the 480GB SSD and the adapter enclosure from Newegg. Let’s look at my costs:
- DC3217IYE – $292.49
- RAM – $99.99
- Crossover cable – $2.52
- SSD – $439.99
- AT-7700 – $489.99
- mSATA adapter – $30.99
The NUC solution costs $834.99 and the AT-7700 solution $1063.48. Now, you may not need a 480GB SSD, and standard 2.5″ SSDs are cheaper than mSATA ones; I didn’t want to buy two sets of storage, so here it really is the same thing for me. If you went down to the i3 ($339.99 at Newegg) or i5 ($399.99 at Newegg) and used a different SSD or even a standard platter-based drive, the NUC and AT-7000 series are basically at parity cost-wise. Let’s say you pick the i3 AT-7300:
- AT-7300 – $339.99
- RAM – $99.99
- Crossover cable – $2.52
- 500GB 2.5″ SSD (Samsung 840; not the Pro which is about $130 more) – $335.09 on Amazon now
The total is $777.59 – cheaper than the equivalent NUC solution. If you splurged and spent the extra $60 on the AT-7500 to get an i5, it would be only a few dollars more than the NUC solution. Crazy!
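If you want to play with the numbers yourself, the math is trivial to script (these are the prices at the time I bought, so substitute your own):

```powershell
# Component prices (USD) at time of purchase
$nuc    = 292.49 + 99.99 + 2.52 + 439.99            # DC3217IYE + RAM + cable + mSATA SSD
$at7700 = 489.99 + 99.99 + 2.52 + 439.99 + 30.99    # AT-7700 + RAM + cable + SSD + mSATA adapter
$at7300 = 339.99 + 99.99 + 2.52 + 335.09            # AT-7300 + RAM + cable + 2.5" SSD

"NUC: {0:N2}  AT-7700: {1:N2}  AT-7300: {2:N2}" -f $nuc, $at7700, $at7300
# NUC: 834.99  AT-7700: 1,063.48  AT-7300: 777.59
```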
The thing about these solutions is that I’m looking at them as portable hypervisors and that’s how I’ll largely talk about them, but they’re so cheap and cost effective that they can work as a lab solution for your own personal use or where you work – and they’re tiny. That shouldn’t be underestimated.
In Part 2, I will cover some differences that may matter for some (including myself), such as size and weight, and talk about the process of configuring these little wonderboxes. The third part of this series will concentrate more on usability and how they work in the context of my workflow.
Happy Friday, everyone. We’re currently in the process of updating the calendar/schedule functionality here on the site, so I figured I’d let you know where Ben and I will be over the next month or two. A few events are pending and not listed here, but hopefully we’ll see you in person or virtually soon!
Wednesday, April 3
I will be presenting a webinar for Penton along with Melissa Data’s Joseph Vertido entitled Why Data Quality Matters – A DBA and IT Perspective. I’m looking forward to this as it’s a topic near and dear to what I do with customers, but one you don’t see me talk about a lot.
This is a free online event, so for more information and to sign up, click here.
Saturday, April 6
I will be presenting at SQL Saturday 203 – Boston. I’ll be delivering two different sessions back-to-back. First up at 2:45 PM is “Business Continuity: The Real Reason for HA and DR” and then winding up the day at 4 PM is “Demystifying Clustering for the DBA”. It’ll be nice to be speaking to a hometown crowd since I’m often on the road somewhere.
This is a free event outside of the $10 lunch charge. The event is full, but get yourself on the waiting list – you never know! SQL Saturday will be held at the Microsoft office in Cambridge, MA (the regular one with the MTC, not the research facility), which is located at 1 Cambridge Center.
To register and find more information including the entire schedule and lineup of speakers, click on the link above.
Saturday, April 13
I will be presenting at SQL Saturday 211 – Chicago, which is held just outside of the city at DeVry University (1221 N. Swift Rd., Addison, IL, 60101). Unlike Boston, I’m kicking off the day bright and early at 9 AM with “Demystifying Clustering for the DBA”. I think this is the third or fourth SQL Saturday in a row I’ve attended in Chicago, and it’s always a great event. The lineup of speakers looks great.
Like the Boston SQL Saturday, this is a free event outside of the $10 lunch charge. As of today, you can still register for the event. What are you waiting for? Go click on the link above.
Monday, April 22
I will be presenting to the Northern Virginia (NOVA) SQL Server User Group. I always enjoy speaking at user groups, and I’m sure this will be no exception. So if you’re in the greater Washington DC/Northern Virginia/Maryland area, come on out.
Tuesday, April 23
Ben will be doing a free one hour webinar session on performance tuning – more details to come.
Thursday, May 9
Ben will be doing a full day webinar focused on SQL Server performance entitled Truth, Art, and the Zen of SQL Server Performance Tuning. Performance tuning is obviously a bigger topic than the hour Ben has on April 23rd, and this day he’ll get to go into much more depth. The day will be broken up into three sessions:
- Where Is My Performance Issue?
- Remove Your SQL Server Performance Barriers
- Index Tuning Foundations
This is not a free event; the cost is $199 which is a bargain considering you’re getting a full day’s worth of good content and can interact with Ben live. Click here for the complete information (including full abstracts) and to register.
Last year, I wrote a blog post entitled “How to Properly Configure DTC for Clustered Instances of SQL Server with Windows Server 2008 R2”. Today, I was helping a customer and found a few things. I’ve since updated that original post to reflect these findings and to fix a few things. I also took Windows Server 2008 R2 out of the title, and the post’s name is now “How to Properly Configure DTC for Clustered Instances of SQL Server (Revised)” since it applies to Windows Server 2012 as well. The URL is the same as the old one, and it does still reference W2K8 R2.
As originally written, that article assumed a new, greenfield installation. Not everyone is looking to create DTC with a brand new installation, and I realized that there were some gotchas I needed to outline when you are already up in production. I also fixed a few things in the script.
1. In the script as it was written, one step added the resource as a dependency to SQL Server. That’s all well and good (more on that in a bit), but this had two issues with a non-greenfield implementation:
- When you make sure that DTC has the right permissions (“Step Two – Enable Network Access for the Newly Created DTC”), it needs to take it offline. When you add DTC as a dependency to SQL Server, it will take it offline as well. This is fine for a new installation, but not for one that is already up and running. I revised the script to NOT do this and make it a separate thing later.
- It doesn’t happen all the time, but we saw it on their installation and I also saw it once on my test WSFC (but not again – so it seems to be a phantom). SQL Server Agent went offline. See Figure 1. That’s definitely not good. This is apparently a timing issue – DTC needs to be up before SQL Server. I explain all of this in a bit more detail in the revised post linked above.
Figure 1. SQL Server Agent down
- If DTC goes down, it will cause a SQL Server failover by default. You may or may not want this behavior. I’ve revised the original blog post with more information on this.
2. I forgot to put one important line in the script – namely moving the disk resource into the group with SQL Server! The original script did have a gotcha line in there, but I made sure it was done in this version.
My customer was trying to fix a critical path issue that needed DTC up and running for their application, so they wound up putting DTC in its own resource group, which is still a valid option in Windows Server 2008, 2008 R2, and 2012. They only have one instance installed in the Windows Server failover cluster, so they can ensure DTC lives where the SQL Server instance is all the time to reduce network traffic. This saved them from another possible outage to ensure DTC was up before SQL.
Hope this helps some of you out there …
Configuring the Microsoft Distributed Transaction Coordinator (DTC) for clustered SQL Server instances (FCIs) with Windows Server 2008 and later has been a confusing topic for many. The “definitive” word on the subject for awhile has been both of Cindy Gross’ blog posts (“Do I need DTC for my SQL Server?” and “How to configure DTC for SQL Server in a Windows 2008 cluster”). In my opinion, with W2K8+, you really should create one DTC for each FCI and put it in the resource group with the SQL Server instance. This way it always lives on the node where the instance is currently running.
Having said that, what Cindy wrote doesn’t always work. In fact, the only truly reliable way some of us have found was talked about by my friend and fellow Cluster MVP (also a big SQL guy), Mike Steineke (blog | Twitter) in a post last week entitled “Clustered DTC and Multiple SQL Instances” where he shows how he had to configure DTC. Part of this is ensuring that even though DTC is in the group with SQL, it still gets its own name and IP address; using the SQL Server name can be problematic. To get this done, you really need to script it. I’ve done the work for you. Easy!
This updated post stems from the blog post “Creating a Clustered DTC for SQL Server Redux”.
A sign that you need to use DTC for a given FCI can be found right in the SQL Server error log. If SQL Server cannot connect to a clustered DTC, you will see an error message like the following:
2013-03-12 10:23:51.670 spid1001 QueryInterface failed for “DTC_GET_TRANSACTION_MANAGER_EX::ITransactionDispenser”:0x8007138f(The cluster resource could not be found.).
2013-03-12 10:23:51.680 spid1001 QueryInterface failed for “ITransactionDispenser”: 0x8007138f(The cluster resource could not be found.).
This blog post now applies to Windows Server 2008, 2008 R2, and 2012 as well as SQL Server 2005, 2008, 2008 R2, and 2012 since depending on the OS you’re running, you may have a little of each …
Step One – Create the Clustered DTC in the Resource Group with the FCI (revised 3/12/2013)
I actually created this script today; it is based on the original blog post I wrote back in 2009 for Windows Server 2008 and takes into account what’s in Mike’s post. I used this script on a customer installation, so this isn’t something I wrote just for the hell of it. All you have to do is modify the variables up front and run the script. If you don’t know the names of some of the resources, you can use the PowerShell cmdlet Get-ClusterResource to show a list of everything.
For this update, instead of embedding the code, you can download the script from here. Much easier!
A sample execution is shown in Figure 1.
Figure 1. Running the DTC creation script
I also tested this script in Windows Server 2012 and it works just fine.
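If you just want the flavor of what the script does, the heart of it looks something like the following. This is a sketch only, not the downloadable script – the group name, DTC network name, IP address, and cluster network name are all assumptions you’d change for your cluster, and the real work also includes wiring in the disk dependency:

```powershell
Import-Module FailoverClusters

# Assumed names - change all of these for your environment
$Group   = "SQL Server (MSSQLSERVER)"   # the FCI's resource group
$DtcName = "MSDTC-FCI1"                 # dedicated network name for the new DTC
$DtcIP   = "192.168.1.50"               # dedicated IP address for the DTC

# Create the IP Address and Network Name resources in the SQL Server group;
# per Mike's findings, DTC gets its OWN name/IP, not the SQL Server name
$ip = Add-ClusterResource -Name "$DtcName IP" -ResourceType "IP Address" -Group $Group
$ip | Set-ClusterParameter -Multiple @{ Address = $DtcIP; SubnetMask = "255.255.255.0"; Network = "Cluster Network 1" }

$nn = Add-ClusterResource -Name $DtcName -ResourceType "Network Name" -Group $Group
$nn | Set-ClusterParameter -Multiple @{ Name = $DtcName; DnsName = $DtcName }
Add-ClusterResourceDependency -Resource $nn.Name -Provider $ip.Name

# Create the DTC resource itself; it needs to depend on the network name
# (and on a disk in the group, which is omitted from this sketch)
$dtc = Add-ClusterResource -Name "New DTC" -ResourceType "Distributed Transaction Coordinator" -Group $Group
Add-ClusterResourceDependency -Resource $dtc.Name -Provider $nn.Name

# Bring everything online
Start-ClusterResource -Name $dtc.Name
```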
Step Two – Enable Network Access for the Newly Created DTC (revised 3/12/2013)
Once DTC is created, you must enable network access. This is where, if you did things another way (i.e. not the way shown in this post), there’s a very high probability that when you go to restart DTC (Step 11 below), the process may decide to step out for lunch and then stay away on a permanent vacation.
The steps below are all GUI. Nic Cain (Twitter | Blog) wrote up this blog post on how to configure this using PowerShell if you would prefer to script things.
- From the Start Menu, select Administrative Tools, and then Component Services (or just run DcomCnfg).
- Under the Console Root folder, expand Component Services.
- Expand Computers.
- Expand Distributed Transaction Coordinator.
- Expand Clustered DTCs. You should now see something that looks like Figure 2.
Figure 2. Newly created DTC in Component Services
- Right click the clustered DTC you just created and select Properties, as shown in Figure 3.
Figure 3. Opening the Properties of the DTC
- Select the Security tab.
- Under Security Settings, check the “Network DTC Access” box. Under Transaction Manager Communication, check both “Allow Inbound” and “Allow Outbound”. You should now see what is shown in Figure 4. If you need any other options (you know your app better than me), add them. Click OK when done.
Figure 4. DTC enabled for network access
- You will now be prompted to restart DTC as shown in Figure 5. If you are doing this against a production instance of SQL Server and you have added the DTC resource as a dependency of SQL Server, it will take SQL Server offline as well. If you cannot do this, select No, but you cannot really use DTC properly until you restart it. To get around this, remove DTC as a dependency of SQL Server, do the restart, and then add it back. Click Yes to restart.
Figure 5. Prompting for the restart of DTC
- Click OK at the dialog shown in Figure 6 which denotes a successful restart.
Figure 6. Confirmation of DTC’s restart
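If you’d rather script this step than click through Component Services, Windows Server 2012 ships an MsDtc PowerShell module that can do the same thing. A sketch (the clustered DTC name is an assumption – check the output of Get-Dtc; this module does not exist on Windows Server 2008/2008 R2):

```powershell
Import-Module MsDtc

# List the DTC instances on this node, including any clustered ones
Get-Dtc | Format-Table DtcName

# Enable network DTC access with inbound and outbound transactions
# ("MSDTC-FCI1" is an assumed name - use the DtcName shown above)
Set-DtcNetworkSetting -DtcName "MSDTC-FCI1" -AuthenticationLevel Mutual `
    -InboundTransactionsEnabled $true -OutboundTransactionsEnabled $true -Confirm:$false
```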
Step Three – Testing DTC (added 3/12/2013)
Microsoft has a tool that will create a dummy transaction and can be found in KB 293799. While the best test is always your application, this will at least see if DTC is working. It requires an ODBC DSN to work.
- Start Data Sources (ODBC) from Administrative Tools. Note that under Windows Server 2012, you will specifically have a 32-bit and 64-bit version. Run the 32-bit.
- Select the User DSN tab.
- Click Add.
- On the Create New Data Source dialog, select SQL Server Native Client X, where X is the version of SQL Server. In Figure 7, it is SQL Server 2012, so it has a version of 11.0. Click Finish.
Figure 7. Selecting the version of SQL Server
- On the Create a New Data Source to SQL Server dialog which will look similar to the one in Figure 8, enter a name for your DSN and for Server, enter the name of the clustered instance of SQL Server. Click Finish.
Figure 8. DSN creation
- On the ODBC Microsoft SQL Server Setup dialog, click Test Data Source. An example is shown in Figure 9.
Figure 9. DSN created
- On a dialog similar to the one shown in Figure 10, if the test was successful, you should see the right output. If you don’t, you will see an error. Click OK.
Figure 10. DSN works properly
- Run dtctester using the syntax dtctester DSN username password, where DSN is the name of the DSN you just created, username is a valid user in SQL Server, and password is the password for that user. A sample execution is shown in Figure 11.
Figure 11. Running dtctester
WARNING Do not create a System DSN. If you do, you may see an error similar to the one shown in Figure 12. This error stems from the fact that dtctester is a 32-bit program and the System DSN created is 64-bit.
Figure 12. dtctester error
Step Four – Optional Configuration Steps (added 3/12/2013)
The steps in this section are optional. Read to see if they apply to you.
Setting the DTC Resources to Not Cause a SQL Server Failover
There are four resources associated with DTC: name, IP, disk, and of course, DTC. If you right click on any one of those and select the Policies tab, you will see what is in Figure 13.
Figure 13. Policies for the DTC Network Name
Notice the option “If restart is unsuccessful, fail over all resources in this service or application.” What this means is that if any one of those resources fails and cannot be restarted, it will cause SQL Server to fail and move to another node. If you are not sure DTC is necessary but are creating it anyway, OR you are OK with DTC failing while SQL Server stays up (and living with whatever DTC-dependent functionality breaks), deselect this option.
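If you’d rather flip this setting in PowerShell than through the Policies tab, the resource’s RestartAction property controls it. A sketch with assumed resource names (list yours with Get-ClusterResource):

```powershell
Import-Module FailoverClusters

# The four DTC-related resources - these names are assumptions
$dtcResources = "New DTC", "MSDTC-FCI1", "MSDTC-FCI1 IP", "Cluster Disk 2"

foreach ($name in $dtcResources) {
    $res = Get-ClusterResource -Name $name
    # 1 = ClusterResourceRestartNoNotify: restart the resource on failure,
    # but do NOT fail over the whole group if the restart is unsuccessful
    $res.RestartAction = 1
}
```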
Adding DTC as a Dependency of SQL Server
In the testing my friend Mike did, the only way he could get things to work was to ensure that DTC was up before SQL Server. The only surefire way to do this is to add DTC as a dependency of SQL Server (see the previous section for possible implications of doing this). One of the problems that may occur (which I noted in the blog post announcing this revision) is that there is a chance SQL Server Agent may go offline. We think this is possibly related to another bug that was fixed, but what is happening is that SQL Server Agent is trying to figure out the name of the instance it is using. If it gets confused, it will go offline. It may start, but it also may not work correctly. The only way to work around this, it seems, is to make sure that DTC is up before SQL Server starts, hence adding it as a dependency.
Once you get DTC up and running in the group with SQL Server, test your application against it and check the SQL Server error log as noted above. If you see no DTC errors and everything looks fine in the app, you’re golden. You shouldn’t have to restart SQL Server to take advantage of the newly mapped DTC, but without an application to test right now, I can’t say that is 100% true.
Adding a dependency to a resource in Windows Server 2008 and later does not require any downtime.
To add DTC as a dependency of the SQL Server resource in Failover Cluster Manager:
- Right click on the SQL Server resource and select Properties.
- Select the Dependencies tab.
- Click Insert.
- On the new line, from the dropdown select the DTC resource you created. An example is shown in Figure 14. If the line is at the bottom, make sure that the AND dependency is selected.
Figure 14. Adding DTC as a dependency
- Click OK when done.
To do this in PowerShell, execute the following:
Add-ClusterResourceDependency SQLResourceName DTCResourceName -Cluster WSFCName
A sample execution is shown in Figure 15 along with verifying that it was done correctly.
Figure 15. Creating and verifying the dependency in PowerShell
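The verification step from Figure 15 can be reproduced with Get-ClusterResourceDependency; the resource and cluster names below are placeholders:

```powershell
# Placeholder resource and cluster names; substitute your own.
Get-ClusterResourceDependency "SQL Server (NMDINS)" -Cluster MYWSFC

# The DependencyExpression returned should now include the DTC resource
# alongside the existing dependencies.
```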
Mapping DTC to a Specific FCI
Right now I do not have an application to test whether the combination of DTC and the FCI in the same group works without adding the DTC resource as a dependency. Based on what we know, it may not. One thing that may solve that issue – and/or if you want to truly be anal about things – is that DTC provides a way to map an instance of DTC so it is only used with a specific application or service, such as a given SQL Server instance. This is done via the command line. If you really want to ensure that DTC is mapped to the FCI, use the tmMappingSet option of the msdtc command line.
NOTE This particular aspect/option most likely will NOT be necessary, but I’m including it for the sake of completeness.
- Enter the command msdtc -tmMappingView *. If nothing is mapped, you will see the output shown in Figure 16.
Figure 16. No DTC mappings exist
- To map the newly created DTC to a specific FCI, use the command msdtc -tmMappingSet -name DTCname -service SQLServerService -clusterResourceName SQLResourceInWSFC, where -service would be MSSQLSERVER for a default instance or MSSQL$NMDINS for a named instance (NMDINS being the part after the slash), and -clusterResourceName is the name of the SQL Server resource in the Windows Server failover cluster (WSFC). For a default instance that would just be SQL Server, and for a named instance SQL Server (NMDINS). A sample execution for a named instance is shown in Figure 17, and you should see Figure 18 if it is done correctly.
Figure 17. Mapping DTC to an FCI
Figure 18. DTC mapped successfully
- To verify things are correct, again enter msdtc -tmMappingView *. You should now see something similar to Figure 19.
Figure 19. Successful mapping output
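Pulling the mapping steps together, here is a sketch of the end-to-end sequence for a named instance; the mapping name DTC2SQL, the instance name NMDINS, and the resource name are all placeholders:

```cmd
:: 1. Check existing mappings (empty output if none exist).
msdtc -tmMappingView *

:: 2. Map the clustered DTC to the named instance NMDINS. The mapping
::    name DTC2SQL is arbitrary; the service name and cluster resource
::    name must match your actual instance and WSFC resource.
msdtc -tmMappingSet -name DTC2SQL -service MSSQL$NMDINS -clusterResourceName "SQL Server (NMDINS)"

:: 3. View the mappings again to confirm the mapping was recorded.
msdtc -tmMappingView *
```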
With the proliferation of smartphones and touch coming of age in the mobile and desktop worlds, we’re really starting to border on Minority Report territory. Applications are an end user’s interface into the online world, and they create the experience. Some are good, some are horrendous. We’ve all seen and used bad applications over the years. As a backend kind of guy who associates himself more with the IT space overall, I know it is all about the end user and/or the business. It’s something I have to remind some folks of when I’m onsite doing work for them. We support them. Having said that, a few things lately have raised my dander – namely Code.org and this blog post (which I saw via Grant Fritchey’s blog post). Why do these bother me?
Let me say this up front: communication is a fundamental issue in many organizations, and there’s enough blame pie to go around. Politics and other roadblocks can slow any process down, causing frustration everywhere. Poor communication and broken processes hurt more than help. However, process and communication should NOT be difficult. Implemented properly, they should be easy and transparent. Things like change management and source control are fundamental concepts in both IT and development, and should enhance things like high availability. To me, they are cornerstones and core tenets of an overall availability strategy. However, how many of you DBAs use source control … go ahead … I’ll wait. In my experience, that number is very low. In a similar fashion, change control and things like testing are often viewed as a nuisance by developers, and pushing directly to production can often lead to many headaches for DBAs. We annoy you, you annoy us; a perfect recipe for a standoff.
I don’t disagree with the fundamental premise of Code.org. I remember getting started on a Commodore PET and then a 64 with things in BASIC like:
10 PRINT “HI”
20 GOTO 10
Even us IT guys need some programming skillz(tm). (See what I did there? I’m pandering to the hipster crowd.) Things like PowerShell or our very own Transact-SQL (T-SQL to those “in the know”) in SQL Server need you to understand at least fundamental programming concepts. Heck, there’s even WMI and SMO (among other things) if you really want to get adventurous in the Windows/SQL Server world. The longer video Code.org made is cute when they ask the kids what they want to be and whether they know what a computer programmer is. Most little kids want to be firemen, athletes, princesses, etc. That makes sense. Being a programmer is way too practical. I mean, you don’t see anyone at the age of 5 asking to be a sanitation worker or insurance broker, do you? I’m sure there are some kids, but they’d be a reaaaaaaaaaaaaaaaaaaaaally small percentage. Kids are allowed to be dreamers. Adults need to pay bills and live in reality most of the time.
But that got me thinking – why is it OK to glamorize just programming in the computer realm? Everything else is as important, if not more so, in terms of day-to-day running. Do you think Amazon.com runs on a single web server and only one database server? Heck no! Both the application they have and the backend are designed to scale and be available. If you’re going to teach programming, kids need fundamental IT basics, too. Things like understanding backups would serve them well even in their daily lives. How many times do we have to hear things like, “I lost all of my photos on my external hard drive/phone and I didn’t have them anywhere else”? People’s lives are wrapped up in digital.
As I said, apps are the gateway for people. Be it a browser or an app on your phone, it makes sense to say that programming and computers should be a fundamental part of one’s education curriculum earlier in life. That part I don’t disagree with. But what they are not telling you about is how to make it all work – the full view of the application lifecycle. It’s one thing to write an app. It’s another for it to work well and perform. Many of us who are consultants would have fewer opportunities if applications were written properly, scaled, and supported what they needed to. I see way too often that applications are the barrier for upgrades in many environments, leading to many different – and hard – problems to solve as time goes on. How many applications in your environment – third party or custom – support SQL Server 2008 or later? I bet not all do. Or if they do, your company won’t upgrade, forcing you to stay on an older version of SQL Server for other reasons like cost … and this puts not only your environment at higher risk as time goes on, but your skills, too. SQL Server 2005 is now nearly 10 years old and four (4) major versions old. That’s like dog years in the technology world.
Developers also make false assumptions – like availability being only IT’s problem. It’s not. There’s stuff they need to do, too, yet they will blame us for their application woes when the application barfs after a failover. Too bad, so sad. Now we all live with the pain of your stupid decisions and blinders; you need to be part of the solution from day one, not part of the problem. This is true whether you are using SQL Server, Oracle, or anything else. DBAs are often the last to know about an implementation – and thus take a lot of heat and blame if/when things go wrong. DBAs need to be in on the planning from day one.
The thing that got me the most was the graphic in the NoDBA post – namely the bit where he has “Heroic Developers” and automatically associates things like bureaucracy and delays with data management. Huh? Sure, DBAs can be a pain in the tuchus. So can devs. Or network admins. Or storage admins (DBAs never have problems with storage folks, right?). You get the point. As I mentioned, process is a necessary evil for things like availability. In his post, Martin Fowler does acknowledge that one of the negatives of the dev-to-prod approach can be that “bypassing DBA groups may also mean bypassing operations groups that know how to keep valuable data backed up and secure”. Amen. But I still won’t allow a dev in production if I can help it.
The bottom line: in the real world, we’re all part of the solution, and may even be part of the problem to someone else. There’s a reason in most cases devs should not just push code – let alone untested code – out to the world. But IT also needs to be more agile than it has been. That’s one of the things we do here at SQLHA – we help get organizations up to speed and, during implementation, avoid crippling process that does not work for anyone. This is one of the reasons why virtualization and concepts like the private cloud are taking hold – deployment is more agile than procuring hardware for every new project. It doesn’t make it right or wrong, but it changes the dynamics. Smartphones and apps have changed the dynamics of even rapid application development. IT unfortunately hasn’t always caught up to meet that demand.
It’s time for everyone to grow up. Kids should learn about computers right along with math and other key subjects; devs and the applications they develop are important, but so is IT. You really can’t have one without the other, so let’s find a better way to work together and start earlier. That way we can have less finger pointing later on. Deal?
Some of you noticed that our little corner of the web was down for a bit over the past week. We needed to do some stuff behind the scenes that made it necessary for us to take the public-facing part down. This isn’t unlike some server maintenance where sometimes it’s just easier to take the short pain up front to have long-term gain. I did notice a few blog comments somehow got zapped in this process if they came in after 1/24, and we’re trying to track them down. So if you don’t see your comment, please resubmit!
We apologize for any inconvenience you may have had trying to reach some of the resources on the site.
As forward-thinking and boundary-pushing as I can be with my little laptops of doom and in my job, I am quite the opposite when it comes to my cellular phone usage. I like my phone to be … wait for it … a phone. I do not do e-mail on it. I hate texting. I don’t surf the ‘net with a phone. I have a computer for things like that and e-mail. I need my cellular phone to be a good phone. Period. That means good signal strength and a great speakerphone (I hate bluetooth headsets with a passion, too). A cel phone is essential for me since I’m on the road so much.
Go ahead, call me a luddite. I take a lot of lighthearted ribbing from many – and have for years. I don’t mind. A friend and colleague said this to me recently: “You need to be flash frozen like Han Solo and given to the Smithsonian to memorialize the last geek without a smartphone.” Before anyone chimes in with comments like, “You’ve never used a smartphone! You don’t know what you’re missing”, I have used, own, or owned quite a few. I’ve played with various versions of the iPhone since they’re so ubiquitous and many friends and colleagues have them. I have used data on some of these devices, so I did see if I would like it. I didn’t. Here’s my list of devices over the past few years:
People who know me or work with me know how to get in touch with me, so that really isn’t an issue. I’m not hard to find and electronically, e-mail has always been the easiest and preferred method. If I’m around, I respond. I can keep odd hours as most have realized. It’s really that simple. If it’s an emergency, you probably have my cel number or can get it. I have never felt the need to be connected 24×7. I think it leads to better life balance. There has to be a separation from work life and your personal life. I know too many people who constantly are fidgeting with their phones or checking e-mail long after work when they’re supposed to be out and relaxing. I am a big fan of boundaries when you can have them. Most of my hobbies are low tech. When I play bass, it’s just a bass and cord into an amp. I don’t use any kind of effects. I prefer vintage equipment for the most part. Sometimes keeping it simple is the best way … like dedicated devices (such as a CD/SACD player for listening to discs).
I remember going to London for the first time in 1999 (and quite a lot since). I remember seeing people texting all of the time and thinking that would never take off in the US. I was wrong. When I went to Japan in 2004 for the first time, people were always looking at their phones (1-seg TV and whatever else they do on phones is popular there). Maybe I’m odd, but I prefer talking to people for real in many situations. I have noticed over the years a lack of manners and civility since smartphones have taken over the world. People walking around staring at screens, not paying attention yet it’s your fault if they run into you. I’ve seen parents ignore kids at Disneyland (and that’s an expensive day for a family of 5). I see people at dinner - couples or families – ignoring their companion and staring at screens. I went to a movie with my Dad and four friends (I assume they were – they were sitting next to each other in the row in front of us) were all madly swiping and tapping away but not talking to each other. Why bother going out anywhere and with each other if your form of social is staring at a screen? It makes no sense to me.
For the record, I do like my Android-based Sony Walkmans, and the Tablet S is one of the best universal remotes I’ve ever used. I also like using the Tablet S as a portable sheet music device for rehearsals and gigs. The X1 was a well built phone and nice to look at, that’s for sure. That’s mainly why I bought it. It was my first real encounter with a touch screen. I went back to a regular phone soon thereafter.
Another thing I hate is the disturbing trend towards bigger phones that really started with the Galaxy Note. I bought that device mainly to use as a possible presentation whiteboard device. It didn’t work out for me that way, so here it sits in my unused tech pile. I never thought the phablet as they are now referred to would take off. How wrong I was! Heck, HTC is now making a phone-like remote to control your phablet. How ridiculous. Apparently Sony is working on a 6.44″ screen phablet. Oy vey. Where is the skinny jeans crowd going to put that? This is one of the reasons I bought the Xperia Ray – it’s one of the smallest smartphones made and I doubt we’ll see the likes of it again. It’s as close to a candybar featurephone as you’ll get now. I have basically been using it for the past year as my phone but it frustrates me as a phone. Like the X1, it has shown me that I yearn to have a numeric keypad – not a facsimile on a screen. I got the Tipo Dual before going to Australia last year since I wanted a dual SIM phone. Australia has the same 3G bands as AT&T in the USA, so it made sense. It’s a small device (which I like), but the touchscreen is worse than that on the Ray.
I’ve been doing research on new feature phones, and much to my dismay, there are very few featurephones made today. Most are clamshell (blech). The last real candybar phone made and that I’d probably consider is the Nokia Asha 300, but my two previous experiences with Nokia did not have happy endings (the Nokia 6500 slide and the 8800 were horribly failed experiments) so I probably won’t go there. For the most part, I’ve always had good luck with Sony Ericsson (now just Sony) phones. The T637, W600, K850i (the first 3G phone I ever owned), C510a, Xperia Pureness, and Cedar all served me well over the years. The T637 is one of my favorite phones of all time along with the W600. I only got the K850i just to get 3G. The C510a got passed to a friend (and it died recently). The Xperia Pureness was form over function unfortunately, and Cedar died a premature death. It won’t even charge. One thing I need for any phone I ever use is all (or most of) the relevant world bands to have connectivity. That’s both the beauty and curse of GSM-based phones, especially if you go to places like Japan which are different from Europe, Australia, and North America.
This weekend, I made the decision to revive and bring back into service my old K850i and stop using the Ray. I had enough of fiddling with it despite liking its overall size and form factor. The K850i is all set up now/again, so it’ll be interesting to see how it fares again with daily use. It definitely shows its battle scars, but it still works nearly 6 years later. That says a lot. The only downside is that it comes from a time when charger ends were proprietary so no universal USB charging for all of my devices. I do have a WM-Port to USB adapter for my Android Walkmans, ironically enough.
The thing I am probably looking forward to most is that, outside of talking a lot (which sadly doesn’t happen since most people don’t use cel phones to talk), I will be able to go lengthy amounts of time without having to charge the K850i. The screens and power consumption of most smartphones force us to have power at the ready, like that stupid Duracell Powermat commercial with Jay-Z.
Don’t worry, Ben happily uses a smartphone (currently the HTC 8X I believe) to its fullest extent, so my ways don’t permeate all of SQLHA.
Can you believe it’s been almost four years since the publication of Pro SQL Server 2008 Failover Clustering (Apress; Print | Kindle | PDF) and six since Pro SQL Server 2005 High Availability (Apress; Print | Kindle | PDF)? I can’t. I want to thank everyone who has purchased them and contacted me (coming up to me at a conference, e-mailing, etc.) or written a positive review of them, especially the 2008 book. It’s very humbling as well as gratifying to know that all that time spent was worth it. Technical books are not vanity projects, nor do you get rich writing them – these are not New York Times bestsellers, which makes them hard propositions for publishers to begin with.
To be honest, I wasn’t sure I’d write a follow-up. With all of the changes in SQL Server 2012, a book based just on FCIs wouldn’t do, so it would need to be bigger. If it was going to be bigger, I wanted to do it more like the 2005 book and tell the whole story, but that would be even bigger than just covering features. I first visited the possibility of writing a book in 2011, when I took a stab at an outline and submitted it to Apress. We went through six revisions, and at the end of the day, it was just not what they wanted (which was something smaller, like my 2008 book). I didn’t want to compromise the vision and integrity of the book. Don’t send them hate mail; they are in the business of making money and my books, as noted, are not NYT bestsellers. Over time they sell well. It was then I started looking into self-publishing. I would also love to be able to be agile and correct things (like some stuff I either dislike or that is now wrong – like the DTC thing I blog about here).
I have been on the fence but did ultimately commit to doing something. My inner writer couldn’t not (I know, double negative) do a book. But it would be a lot of work, especially on my own, and with the scope I was looking at, a printed book was no guarantee, mainly for cost and size reasons. Timing is everything, and with a topic like mission critical, you need lessons learned, so having something out by RTM of SQL Server 2012 wasn’t necessary. The perfect time is usually 12 – 18 months after RTM (which is where we are now). Fast forward to fall. SQL Server 2012 SP1 was released and Windows Server 2012 hit RTM, which means that I can write one book and cover everything, including all supported OSes (and their variants like Server Core) as well as the patching story. An RTM-based book wouldn’t have that. Because I was busy and on the road most of last fall, I shelved things for a bit.
Over the past few weeks I have finally felt inspired again to pick this project back up. I put together a revised outline. I had some people look it over, and boy, was I overambitious. Apress was right in a way; that book would not only NOT be digestible, but it’d never get done. I want to tell the whole story, but I’m also not stupid. Here’s where I am:
- Based on the outline comments, I have enough content for three (or four) books. That is the approach I am going to take: split things up into chunks. Right now I’m only committing to the first book, which covers the HA technologies and related topics (let’s call that Volume I). Volumes II and III may get written, but let me get through Volume I first!
- Breaking this up allows more agility to get content out quicker and be more focused. As a reader, you will arguably be less overwhelmed, too!
- It will definitely be available as ePub and mobi (for Kindle).
- I may be able to offer a print version (even if on demand) if the size is manageable and cost isn’t bad. Or, if any tech book publishers are interested in picking the book up, contact me.
- I have no formal ETA on Volume I, II, or III. I would like to get Volume I out by TechEd in North America which is about 6 months from now, but no promises.
- I will not announce a release date or give anything like pre-order information until I am at a point where I know I can give a real date.
That is the update, and it’s all good news. I hope you’re as excited as I am, and that the final product lives up to my previous efforts, which have been well received.
I appreciate the people who have e-mailed me that my 2008 book is out there on sites which shall not be named. Sigh. As you may guess, I do not get rich off of writing. If I calculated the time spent getting it out the door, I’d lose money. You can’t keep people from doing what they do, but all I ask is that you try to support authors by buying their books. I know times are tough out there, and whether it is $29.99 or $69.99, that can be a lot of money to some. That doesn’t make downloading it for free right. It just inhibits people like me from writing future books.