Windows Surface Pro and Direct Access Hit a Home Run


Our company has a very mobile Sales team who travel across most of the United States and part of Mexico. Their success, like much of our company's, is tied to an agile response to sales opportunities and customer relations. While laptops and Cisco AnyConnect did meet their needs, both were clunky, and AnyConnect accounted for over 600 service desk incidents in 2012. Our traders and IT staff also detested AnyConnect and the required token key, although using their smartphones to obtain a key did alleviate having to carry around that annoying RSA dongle.

Simply, we needed to mobilize their “at the office” desktop experience and make traveling transparent to their technology. Direct Access delivers this!

We took two routes: 1) an iPad using Citrix Receiver, and 2) a Windows 8 Surface Pro or Windows 7 laptop using Direct Access.

After internal IT testing, we selected Windows Server 2012 Direct Access and Windows 8 Surface Pros for some of our Sales staff and senior Portfolio Managers. We started with our "early adopters" and then let technology envy do the rest (we have a very competitive group of users, and when someone sees they're falling behind because a peer has better technology, they want it). Our users immediately told us how much more they liked Direct Access but, as expected, struggled with Windows 8 (they really missed the Start button). We overcame this challenge with some one-on-one training and a one-page reference sheet (search and you'll find plenty of examples). They quickly became comfortable with the OS and overlooked its nuances after experiencing the mobility of the new VPN and tablet. For those users who wanted to stay with their Windows 7 laptops, we added DA to their systems. That alone improved the mobile user experience. Eventually, all our mobile users were very pleased with this experience, and the positive response continues.

It’s important to note that we are a Windows environment (SharePoint workflows, server, Win7 desktop, and MS SQL databases) using third-party applications and proprietary .NET solutions. Everything that worked in Windows 7 worked in Windows 8. We don’t experience many of the non-Microsoft pain points.

While the Citrix iPad solution worked well and delivered a reliable product, the user experience was so different that our users eventually stopped using iPads for business processes (other than email). They also disliked the additional "clicks" needed to reach the final business information in the Citrix-published applications. We continue to support this option, but it's not nearly as enterprise-effective as DA and the Windows-based devices.

Security is always an issue for us, and removing the RSA key requirement for VPN didn't increase our exposure. You can't log in to our network with Direct Access without an active AD account. Your device also has to be in a special AD OU, and you have to log in at least once at the office to receive a certificate. Only devices we issued and control can access our network, and only after the company's approval. By contrast, any device with the Citrix Receiver app, a valid RSA key, and a valid AD account can remote in, which increases network exposure. Simply put, with Citrix, non-AD objects can access your network.
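As a rough sketch of that device check (the computer name and OU path below are hypothetical examples, not our production values), you can confirm a device sits in the DirectAccess OU with the ActiveDirectory PowerShell module:

```powershell
# Hypothetical names: "SALES-SP-07" and the OU path are placeholders.
# Requires the ActiveDirectory RSAT module on a domain-joined admin workstation.
Import-Module ActiveDirectory

$computer = Get-ADComputer -Identity "SALES-SP-07"
if ($computer.DistinguishedName -like "*OU=DirectAccess Clients,*") {
    "Device is in the DirectAccess OU"
} else {
    "Device is NOT in the DirectAccess OU - DA group policy will not apply"
}
```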

Here are other reasons why the Surface Pro and Direct Access work better than the iPad Citrix solution:

  • Identical user experience anywhere when connected. Wi-Fi, network cable, whatever.
  • Drive mapping works immediately when connected. We were pleasantly surprised by how much this was desired by our user base.
  • Single sign-on process to access their work environment when traveling.
  • Excellent n-tier application performance.
  • Full Microsoft Office experience.
  • Internet Explorer 10 is faster.
  • DA allows for two-way connectivity. We can now ensure our remote users receive SCCM 2012 patching and software deployments remotely.
  • Remote desktop support with Dameware or TeamViewer is much easier.
  • You can still use Cisco AnyConnect as a backup VPN solution.
  • DA client is part of the Windows 8 CAL and cheaper than the Citrix solution.
  • While a Surface Pro and its ancillary equipment cost more than an iPad (roughly $300 more), they're still cheaper than our standard laptop with docking station (about $600 less).

What we don’t like about the Surface Pro but liked about the iPad:

  • Battery life (4 hours for the Pro but over 10 for the iPad).
  • Lack of built-in cellular connectivity (Verizon, for example).
  • We had to reimage each of our Surface Pros with Windows 8 Enterprise. We do this for all our systems anyway, using SCCM 2012, but still wanted to raise this as an issue for other teams.

Things you need to remember:

  • Direct Access runs over IPv6, and while we haven't experienced any communication issues with it, IPv6 is different. Research it and understand the differences.
  • Server 2012 Direct Access is ready for prime time while Server 2008 isn’t.
  • Windows 8 DA is much easier to install than Windows 7.
  • High-availability or Business Continuity for DA is painful, but achievable.
  • Learn to use IE 10 compatibility mode. We overcame all of our issues using this or F12.
  • Direct Access works only on Windows 7 Ultimate or Enterprise and Windows 8 Enterprise (you'll have to reimage your Surface Pros).
  • There are plenty of Direct Access and IPv6 troubleshooting sites, but here is a good one. Also, here's one specifically for Windows 7. Our issues almost always traced back to a time problem, with the Surface Pro or laptop clock being more than 5 minutes off.
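On that last point, a quick way to check for clock skew (a sketch; dc01.contoso.com is a placeholder for one of your own domain controllers) is the built-in w32tm utility:

```bat
rem Compare the local clock against a domain controller.
rem Offsets beyond +/- 5 minutes will break certificate/Kerberos authentication.
w32tm /stripchart /computer:dc01.contoso.com /samples:3 /dataonly

rem If the clock is off, resync against the domain hierarchy:
w32tm /resync
```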

Our mobile staff are more nimble and capable, and spend much less time on the phone with my help desk staff, which means they're devoting more time to the job and not to the technology.


Home run!



Posted in Uncategorized | 3 Comments

Citrix – Changing the Default Program – File Association Script


When we installed Office on our Citrix farm in support of our Great Plains deployment, it changed the XML file association, rendering our Billing invoice XML presentation process useless. Our Back Office staff notified us that all of their XML-based invoices were opening in Notepad.

Here’s the script we used to correct the default association program:


Assoc .xml=xmlfile

Ftype xmlfile="C:\Program Files\Internet Explorer\IEXPLORE.EXE" -nohome
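To verify the fix took, you can echo both mappings back (a quick check; run it from an elevated command prompt, since assoc and ftype write machine-wide settings):

```bat
rem Should return: .xml=xmlfile
assoc .xml

rem Should return the full IEXPLORE.EXE command line set above
ftype xmlfile
```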


Good luck,


Posted in Uncategorized | Leave a comment

SCSM 2012 SP1 Cumulative Update (CU) 2 Installation Fails on Action _Installhealthserviceperfcountersforpatching

Received this little surprise installing SCSM 2012 CU2:

An error occurred while executing custom action: _Installhealthserviceperfcountersforpatching

The setup log contains:

InstallCounters: LoadPerfCounterTextStrings() failed. Error Code: 0x80070002. momv3 "C:\Program Files\Microsoft System Center\Service Manager 2010\MOMConnectorCounters.ini"
InstallPerfCountersHelper: pcCounterInstaller->InstallCounters() for the default counters failed. Error Code: 0x80070002. MOMConnector
InstallPerfCountersLib: InstallHealthServicePerfCounters() failed. Error Code: 0x80070002.
InstallPerfCountersLib: Retry Count : .
InstallHSPerfCounters: Failed to install agent perf counters. Error Code: 0x80070002.

Note – We upgraded our SCSM 2010 environment in-place. Thus, the folder path.

Turns out, we were missing a registry key in PROD.


Found it and exported it from DEV and all is good.
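For anyone hitting the same error, the general pattern we followed was a plain registry export/import (the key path below is purely a placeholder; compare the perf-counter keys referenced by your setup log between a healthy environment and the broken one to find the actual missing key):

```bat
rem On the healthy DEV server. The key path is a placeholder - use the key
rem your setup log's 0x80070002 "file not found" error points to.
reg export "HKLM\SYSTEM\CurrentControlSet\Services\ExampleService\Performance" counters.reg

rem Copy counters.reg to the PROD server, then:
reg import counters.reg
```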


Posted in SCSM 2012 | Tagged , | Leave a comment

SCSM 2012 Doesn’t Like Employees Changing Managers

Discovered a nasty bug in Service Manager 2012 after one of our Developers moved to a new manager. While AD handled the change and updated his manager attribute accordingly, SCSM stopped associating his Review Activities (RA) with either the new or the old manager. It displayed a blank entry, and the manager would never see the approval email.

We deployed a service offering where code deployers request access to production servers (an Add User to Local Admin runbook) and, after the manager's approval, the access is granted for a specific period. At the end of that period, the user is automatically removed. The user token taken from the portal is added to the Service Request (SR), and the RA automatically associates the user's manager as the approving authority.

When the user moved to another group, SCSM 2012 lost the association to the new and old manager.

My team (Will S. took ownership and hit a home run for us) opened a ticket with MS Premier Support and below is the solution Ruth provided (Good job Ruth!).


When one user submitted a service request with a review activity including "Line Manager Should Review," the "Reviewers" field was populated but the manager name for this user was blank. This would stall the RA until you manually updated the manager field, delaying what should be an automated process.


Ultimately, it was found that this user's manager had changed at some point since the initial import into Service Manager, and the previous manager relationship was not removed. The user appeared to have two managers, so when the Line Manager reviewer was added, Service Manager tried to add both and ultimately added neither.


To resolve this issue, you obtain the relationship type and then the relationship instance from the Service Manager database.

/** Change username to the impacted user **/

select R.RelationshipId, R.IsDeleted, BMEuser.DisplayName as username, BMEmgr.DisplayName as manager
from BaseManagedEntity BMEuser
join Relationship R
  on R.TargetEntityId = BMEuser.BaseManagedEntityId
join BaseManagedEntity BMEmgr
  on R.SourceEntityId = BMEmgr.BaseManagedEntityId
where R.RelationshipTypeId = '4A807C65-6A1F-15B2-BDF3-E967E58C254A' /** The Employee Has Manager relationship GUID used by SCSM 2012 **/
  and BMEuser.Name like '%username%'

If there is more than one manager entry, use the RelationshipId for the invalid manager in the PowerShell removal command below.

Get-SCRelationshipInstance -Id "E20E4F3D-6CBC-93CF-CE51-C57059226CD3" | Remove-SCRelationshipInstance -Confirm

Future RAs should reflect the correct manager's name.


Posted in Orchestrator, SCSM 2012 | Tagged , , | Leave a comment

Toolbar – Really Oracle?

Oh, yes, I'll be glad to install the Ask.com toolbar. It'll be especially helpful while the internet invents another advanced search technology.

It’ll look great next to my Yahoo, AOL, and Bing toolbars.

Seriously, didn’t we leave toolbars with the dial-up era?

It's bad enough I have to install Java. Sigh.

Happy “Burn Out” Thursday!


Image | Posted on by | Tagged , , | Leave a comment

SCSM 2012 Cube Jobs Start but Never Finish

Occasionally my SCSM 2012 cubes, whether started manually or kicked off by the scheduled overnight jobs, would never finish. It's as if the handshake between the console and the SSAS server fails. Keep in mind that running cubes from the SCSM 2012 console (either manually or automatically) takes 2-3 times longer than running them from the SSAS, but it still shouldn't take hours to complete.

To establish a run-time baseline, I manually ran all of my cubes with PROCESS FULL and recorded the times. None of my manual processing took over 5 minutes, so any job running over an hour from my console was probably a dead job. This was my approach to fixing the problem. I can't say this enough: evaluate it in your test environment first.

  1. Stop the job at the console.
  2. Check the StatusID (as described by Danny Chen) of the cube and modules. They should all be stopped.
  3. Reset the StatusID of any stalled job and module to 3 (Not Started)
  4. Manually Process Full the failed cube at the SSAS to ensure you have no failed DIMS. Any error message shows the failed DIM.
  5. Process failed DIMS by a) Unprocess b) Process Full.
  6. Try processing the Cube again. If successful, un-process the Cube. So, a) Process Full b) Good Run c) Unprocess
  7. Reset the watermarks for only the failed cube to 0.
  8. Manually run the cube job from the Data Warehouse via PowerShell.
  9. If it still keeps running, start at step 1 and try again, but skip step 6. Sometimes leaving the cubes Processed before a manual start fixed the jobs.
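For step 2, the module check can be sketched like this from the Data Warehouse management server (the job name and module path are examples from a default install; adjust for your environment):

```powershell
# Load the SCSM Data Warehouse cmdlets (default install path shown; verify yours).
Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1'

# A job can show "Stopped" while one of its modules is still "Running" -
# restarting the job in that state is what stalls it.
Get-SCDWJobModule -JobName "Process.SystemCenterConfigItemCube"
```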

Step 4 is where I would find the root cause of my occasional cube failures. SCSM 2012 isn't very adept at communicating bad DIMS back to the console, and this appears to simply stall the job in a running state. The DIM fix suggested by Thomas Ellermann's blog post is an "all or nothing" approach that will usually correct cube processing problems, but it wouldn't fix my stuck jobs. I had only two bad cubes, so reprocessing all of the DIMS seemed like overkill; instead, I processed the cubes manually at the SSAS, although Microsoft doesn't recommend it since this places the Data Warehouse watermarks out of synchronization. Should you decide to manually process, it's important that you update your watermarks as described above.

After un-processing the bad DIMS, I manually processed the DIMS and then manually processed the cubes via SSAS. I verified both of the cube jobs by running them via PowerShell with no issues. The next morning I found that all cube jobs ran successfully and my cubes have been stable ever since.


Posted in SCDW, SCSM 2012 | Tagged , , | Leave a comment

Troubleshooting SCSM 2012 Cubes

Although I could stabilize my cube processing through the SQL Server Analysis Server, I was not able to stabilize the SCSM 2012 environment and see my nightly runs complete. I could probably get to this state if I re-installed the SCSM 2012 Data Warehouse, but this situation will happen again, and reinstallation is not a long-term solution for each outage.

After I corrected my nightly run schedule issue (discussed here), the DWMaintenance, MPSyncJob, and ETL jobs ran as expected, as did all but one cube. The SystemCenterConfigItemCube job failed with "Object reference not set to an instance of an object."

Figure 1 Object reference not set to an instance of an object

Opening a ticket with Microsoft we reviewed these SCSM 2012 Data Warehouse server’s Operations Manager log errors:

Event ID 33526 Incorrect user information detected. The domain name, user name or password were not valid. Ignoring the given credentials and proceeding with the default workflow account:

User information detected:
Domain name:
User name:

Per the SCSM Engineers, this is not a problem and can be ignored since the default workflow account has the necessary rights.

Event ID 33566 – An Exception was encountered while trying to process a cube. Cube Name: SystemCenterConfigItemCube Exception Message: Object reference not set to an instance of an object. Stack Trace: at Microsoft.SystemCenter.Warehouse.Olap.OlapCube.GetPartitionsToProcess(Cube asCube) at Microsoft.SystemCenter.Warehouse.Olap.OlapCube.Process(ManagementPackCube mpCube).

The key to the error is "GetPartitionsToProcess." One of my ConfigItem cube partitions was missing an "object." To validate the SCSM partitions for a particular cube, I ran this query against DWDataMart:

select * from etl.cubepartition where CubeName = 'SystemCenterConfigItemCube'

This listed all of the associated partitions. Reviewing each one, I noticed that every partition had FACTs for each month since June (our installation date) except the ComputerHostNetworkAdapter monthly partition, which was missing September. I scripted out the missing FACT using one of the existing monthly FACTs as a template.

I then re-ran the cube job successfully via PowerShell.
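For reference, the manual run was roughly this (a sketch; the job name matches the failed cube and the module path is the default install location):

```powershell
# Load the SCSM Data Warehouse cmdlets, then kick off the cube job and watch it.
Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1'

Start-SCDWJob -JobName "Process.SystemCenterConfigItemCube"

# Poll until the job's Status returns to "Not Started" (a completed run).
Get-SCDWJob -JobName "Process.SystemCenterConfigItemCube"
```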

After further research, my theory is that the DWMaintenance job fails or stalls and is stopped. This would explain why I'm missing all the FACTs in a DIM. This happened to me again, and I solved it by recreating the FACTs.


Event ID 33545 – MP Sync did not associate the lower version of the Management Pack

This was due to an out-of-sync third party MP that I corrected with the vendor.

Event ID 4000 – A monitoring host is unresponsive or has crashed. The status code for the host failure was 2164195371.

This was occurring every 25 minutes. Under the guidance of a Microsoft engineer, we ran the SMTracing tool (C:\Program Files\Microsoft System Center 2012\Service Manager\Tools\SMTracing), and I provided the resulting capture to the engineer after recording the event occurrence. If this is fixed, I'll post the solution in a separate post. This is a very difficult issue to fix, and I'm still working with Microsoft to find a solution.

Root cause – the System Center Orchestration Configuration Library never finishes deploying. Check and verify its Deployment Status; it's probably stuck in a running state.

The fix is to open a ticket with Microsoft Premier Support and ask them to fix the Deploy Sequence in the database. I don't fully understand what was done, so I don't want to post anything until I do. It's a quick fix, but let the MS professionals walk you through it.

Event ID 11366 – The Microsoft Operations Manager Scheduler Data Source Module failed to initialize because some window has no day of the week set.

The fix for this is discussed here.

Here's an overview of the process I followed to solve problems with my cubes. In a nutshell, I didn't find a holistic solution to the failing process, and each experience is probably different. My goal is to provide you with quicker access to the varying solutions located throughout the web.

How I Measured Success – A Goal of Processed Cubes

  1. Able to run Travis Wright’s script successfully end-to-end (this can take over an hour depending on how long it’s been since a good run).
  2. See the following via the \Data Warehouse\Data Warehouse\Data Warehouse Jobs display for each job:
    1. Enabled – Yes
    2. Status – Not Started
  3. See the following via the \Data Warehouse\Data Warehouse\Cubes display for each cube:
    1. Status – PROCESSED
    2. Schema Pending Changes – NO
    3. Last Processed Date – Current, or accurate with the time you processed the cube.
  4. The SCSM Data Warehouse jobs that process the cubes each run successfully at the scheduled local time.

Note: Criteria Item 1 is probably the most important as it provides a more realistic end-to-end run on what you’ll see from your automated processing.

Things You Should Know Before Troubleshooting SCSM 2012 Cubes (IMHO)

  • Manually processing the cubes from the Analysis Server and then running the SCSM Data Warehouse jobs is not going to cure the problem for the long term and is not recommended as an approach by the experts I talked to.
  • The Data Warehouse server PowerShell command (get-SCDWJobSchedule) reflects job schedules in GMT versus local time.
  • Log on to the SCSM servers with the same service account you use as the data warehouse RUN AS account.
  • Just because a Data Warehouse job is stopped doesn't mean its modules are. Verify the module status with get-scdwjobmodule -jobname "NAME OF JOB" (i.e. get-scdwjobmodule -jobname "Process.SystemCenterWorkItemsCube"). Your Data Warehouse job will stall in a "running" state if a module is still running when you restart the process.
  • SCSM 2012 checks the edition of your SQL Server Analysis Server. If it's Enterprise, SCSM 2012 will process the cubes incrementally (PROCESS INCREMENTAL) once the cubes have been processed fully (PROCESS FULL), which helps keep your processed jobs from future failures. This functionality isn't available in Standard.
  • Read Danny Chen’s 8-part blog and become an expert on how this process works. 1, 2, 3, 4, 5, 6, 7, and 8.
  • If a Cube job starts via the console and runs longer than an hour or two, this is not normal. Run the cube manually at the SSAS and you’ll probably find a bad DIM. Unprocess bad DIMs, then process them FULL.
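To tie the schedule and status checks above together, here's a minimal sketch run from the Data Warehouse management server (the job name is an example):

```powershell
# List every Data Warehouse job with its current status and enabled flag.
Get-SCDWJob

# Schedules come back in GMT, not local time - convert before comparing
# against your overnight processing window.
Get-SCDWJobSchedule -JobName "Process.SystemCenterConfigItemCube"
```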

Troubleshooting Steps

Here are the paraphrased steps I followed to fix my cubes (details are in the section below) after they failed:

As with any code, you should run this in a test environment first.

  1. Ensured I was running SQL 2008 Enterprise.
  2. Looked at the Ops Manager event log for error ID 33573 – "The operation has been cancelled due to memory pressure." If you see this on your SSAS, add more RAM. I have 32 GB of RAM, so I haven't seen this issue, but there are a number of posts about it.
  3. Apply any pending schema changes (see below for details) via \Data Warehouse\Data Warehouse\Cubes — Apply Schema Changes
  4. Ran RUN-ETL.ps1. Successful? Then your cubes should run as scheduled. Failed? Follow the steps detailed in the section "Manually Verifying SCSM 2012 Cubes" below.
  5. Let your jobs run overnight and verify everything is working as expected.

Manually verifying SCSM 2012 Cubes

How to ensure your cubes will even process.


1. Via the OLAP server, PROCESS FULL all the cubes (process all or just the failed cube) 

Failures here were associated with dimensions missing key data. The error messages show the offending object, and you can correct this by processing the individual dimensions separately (see below). My failed DIMS show a state of Process Update, so I always Unprocess and then Process Full. Eventually, all your cubes will process.

2. Unprocess the cubes at the SSAS.

This places the cubes in a Process Full requirement for the next processing run, and if your watermark batchID is 0, SCSM will complete a full process and update the watermark. Check the properties of the cube for an Unprocessed state.

3. Confirm that the DWDataMart database watermarks are set up for a Process Full.

When you manually process the cubes and then un-process them before your nightly runs, the watermark batchIDs usually need to be reset. To confirm this:

Use DWDataMart


select * from etl.cubepartition

This returns a result set showing either a batchID from a completed process or a batchID of 0. SCSM uses this watermark to determine the type of run for the next scheduled event.

Anything other than a 0 will result in an incremental build, while a 0 runs a full process. To reset all of your cubes, simply update the batchID to 0 after un-processing all your cubes manually.

update etl.CubePartition set WatermarkBatchId = 0 where CubeName = 'SystemCenterCubeNameHereCube'

4.  Start the appropriate cube processing job for the failed cube and wait for the results.

I’ve spent a lot of time working through this issue and may have left off a process or tip, so comment below if you need more guidance.


Posted in SCDW, SCSM 2012 | Tagged , | 9 Comments