Citrix – Changing the Default Program – File Association Script

 

When installing Office on our Citrix farm in support of our Great Plains deployment, it changed the XML file association rendering our Billing invoice XML presentation process useless. Our Back Office staff notified us that all of their XML-based invoices were opening in Notepad.

Here’s the script we used to correct the default association program:

 

assoc .xml=xmlfile

ftype xmlfile="C:\Program Files\Internet Explorer\IEXPLORE.EXE" -nohome

 

Good luck,

DS

Posted in Uncategorized | Leave a comment

SCSM 2012 SP1 Cumulative Update (CU) 2 Installation Fails on Action _Installhealthserviceperfcountersforpatching

Received this little surprise installing SCSM 2012 CU2:

An error occurred while executing custom action: _Installhealthserviceperfcountersforpatching

The setup log contains:

InstallCounters: LoadPerfCounterTextStrings() failed. Error Code: 0x80070002. momv3 "C:\Program Files\Microsoft System Center\Service Manager 2010\MOMConnectorCounters.ini"
InstallPerfCountersHelper: pcCounterInstaller->InstallCounters() for the default counters failed. Error Code: 0x80070002. MOMConnector
InstallPerfCountersLib: InstallHealthServicePerfCounters() failed. Error Code: 0x80070002.
InstallPerfCountersLib: Retry Count : .
InstallHSPerfCounters: Failed to install agent perf counters. Error Code: 0x80070002.

Note – We upgraded our SCSM 2010 environment in place, hence the Service Manager 2010 folder path.

Turns out, we were missing a registry key in PROD.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MOMConnector

We found the key in DEV, exported it, imported it into PROD, and all is good.

DS

Posted in SCSM 2012 | Tagged , | Leave a comment

SCSM 2012 Doesn’t Like Employees Changing Managers

Discovered a nasty bug in Service Manager 2012 after one of our developers moved to a new manager. Although Active Directory recorded the change and updated his manager attribute accordingly, SCSM stopped associating his Review Activities (RA) with either the new or the old manager. It displayed a blank entry, so the manager never saw the approval email.

We deployed a service offering in which code deployers request access to production servers (an Add User to Local Admin runbook) and, after the manager's approval, access is granted for a specific period. At the end of that period, the user is automatically removed. The user token taken from the portal is added to the Service Request (SR), and the RA automatically assigns the user's manager as the approving authority.

When the user moved to another group, SCSM 2012 lost the association to both the new and the old manager.

My team (Will S. took ownership and hit a home run for us) opened a ticket with MS Premier Support and below is the solution Ruth provided (Good job Ruth!).

Issue:

When a user submitted a service request with a review activity that included "Line Manager Should Review," the Reviewers field was populated but the manager name for the user was blank. This stalled the RA until someone manually updated the manager field, delaying what should be an automated process.

Cause:

Ultimately, we found that this user's manager had changed at some point after the initial import into Service Manager and that the previous manager relationship was never removed. The user appeared to have two managers; when the Line Manager reviewer was added, Service Manager tried to add both, and in the end neither was added.

Resolution:

To resolve this issue, obtain the relationship type and then the relationship instance from the Service Manager database:

/* Change username to the impacted user */
select R.RelationshipId, R.IsDeleted, BMEuser.DisplayName as username, BMEmgr.DisplayName as manager
from BaseManagedEntity BMEuser
join Relationship R on R.TargetEntityId = BMEuser.BaseManagedEntityId
join BaseManagedEntity BMEmgr on R.SourceEntityId = BMEmgr.BaseManagedEntityId
where R.RelationshipTypeId = '4A807C65-6A1F-15B2-BDF3-E967E58C254A' /* the Employee Has Manager relationship GUID used by SCSM 2012 */
and BMEuser.Name like '%username%'

If there is more than one manager entry, use the RelationshipId of the invalid manager in the PowerShell removal command below.

Get-SCRelationshipInstance -Id "E20E4F3D-6CBC-93CF-CE51-C57059226CD3" | Remove-SCRelationshipInstance -Confirm
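For what it's worth, the selection logic behind that removal can be sketched as follows. This is pure illustration: the row shape mirrors the query's columns, the helper name and values are made up, and the "current manager" would come from AD.

```python
# Hypothetical sketch: when the query above returns two active manager
# relationships for one user, collect the RelationshipIds that do NOT
# match the current AD manager; those are the stale ones to remove.
def stale_relationship_ids(rows, current_manager):
    return [r["RelationshipId"] for r in rows
            if not r["IsDeleted"] and r["manager"] != current_manager]

# Illustrative rows shaped like the query's result set:
rows = [
    {"RelationshipId": "E20E4F3D-...", "IsDeleted": 0, "manager": "Old Manager"},
    {"RelationshipId": "7B1A-...", "IsDeleted": 0, "manager": "New Manager"},
]
print(stale_relationship_ids(rows, "New Manager"))  # ['E20E4F3D-...']
```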

Future RAs should reflect the correct manager's name.

DS

Posted in Orchestrator, SCSM 2012 | Tagged , , | Leave a comment

Ask.com Toolbar – Really Oracle?


Oh, yes, I'll be glad to install the Ask.com toolbar. It'll be especially helpful while the internet invents another advanced search technology.

It’ll look great next to my Yahoo, AOL, and Bing toolbars.

Seriously, didn't we leave toolbars behind in the dial-up era?

It’s bad enough I have to install Java.  Sigh.

Happy “Burn Out” Thursday!

DS

Image | Posted on by | Tagged , , | Leave a comment

SCSM 2012 Cube Jobs Start but Never Finish

Occasionally my SCSM 2012 cube jobs, whether started manually or kicked off by the scheduled overnight runs, would never stop; it's as if the handshake between the console and the SSAS server fails. Keep in mind that cube jobs run from the SCSM 2012 console (manually or on schedule) take 2-3 times longer than runs from SSAS itself, but they still shouldn't take hours to complete.

To establish a run-time baseline, I manually ran all of my cubes with PROCESS FULL and recorded the times. None of my manual runs took over 5 minutes, so any job running over an hour from my console was probably a dead job. This was my approach to fixing the problem. I can't say this enough: evaluate it in your test environment first:

  1. Stop the job at the console.
  2. Check the StatusID (as described by Danny Chen) of the cube job and its modules. They should all be stopped.
  3. Reset the StatusID of any stalled job and module to 3 (Not Started).
  4. Manually run Process Full on the failed cube at the SSAS to ensure you have no failed DIMs. Any error message shows the failed DIM.
  5. Process failed DIMs: a) Unprocess, b) Process Full.
  6. Try processing the cube again. If successful, un-process the cube. So: a) Process Full, b) good run, c) Unprocess.
  7. Reset the watermarks for only the failed cube to 0.
  8. Manually run the cube job from the Data Warehouse via PowerShell.
  9. If it still keeps running, start at step 1 and try again, but skip step 6. Sometimes leaving the cubes processed before a manual start fixed the jobs.
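As a rough sketch of the "probably a dead job" call in the steps above (the 5-minute baseline and one-hour cutoff come from my environment; measure your own PROCESS FULL times first):

```python
# Hypothetical helper: flags a console-started cube job as stalled when it
# runs far past the manual PROCESS FULL baseline. The 12x slack factor is
# an assumption that turns my 5-minute baseline into a one-hour cutoff.
def is_probably_dead(elapsed_minutes: float, baseline_minutes: float = 5,
                     slack_factor: int = 12) -> bool:
    return elapsed_minutes > baseline_minutes * slack_factor

print(is_probably_dead(90))  # job running 90 minutes: True
print(is_probably_dead(4))   # normal 4-minute run: False
```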

Step 4 is where I found the root cause of my occasional cube failures. SCSM 2012 isn't very adept at communicating bad DIMs back to the console, and this appears to simply stall the job in a running state. The DIM fix suggested in Thomas Ellermann's blog post is an all-or-nothing approach that will usually correct cube-processing problems, but it wouldn't fix my stuck jobs. In my case, only two cubes were bad, and reprocessing all of the DIMs seemed like overkill, so I processed the cubes manually at the SSAS, even though Microsoft doesn't recommend it, since doing so puts the Data Warehouse watermarks out of synchronization. Should you decide to process manually, it's important to update your watermarks as described above.

After un-processing the bad DIMs, I manually processed the DIMs and then the cubes via SSAS. I verified both cube jobs by running them via PowerShell with no issues. The next morning all the cube jobs had run successfully, and my cubes have been stable ever since.

DS

Posted in SCDW, SCSM 2012 | Tagged , , | Leave a comment

Troubleshooting SCSM 2012 Cubes

Although I could stabilize cube processing through SQL Server Analysis Services, I was not able to stabilize the SCSM 2012 environment itself and see my nightly runs complete. I could probably get there by reinstalling the SCSM 2012 Data Warehouse, but this situation will happen again, and reinstallation is not a long-term solution for each outage.

After I corrected my nightly-run schedule issue (discussed here), the DWMaintenance job, the MPSyncJob, the ETL jobs, and all but one cube ran as expected. The SystemCenterConfigItemCube job failed with "Object reference not set to an instance of an object."

Figure 1: Object reference not set to an instance of an object

After opening a ticket with Microsoft, we reviewed the following errors from the SCSM 2012 Data Warehouse server's Operations Manager log:

Event ID 33526 – Incorrect user information detected. The domain name, user name, or password was not valid. Ignoring the given credentials and proceeding with the default workflow account:

User information detected:
Domain name:
User name:

Per the SCSM Engineers, this is not a problem and can be ignored since the default workflow account has the necessary rights.

Event ID 33566 – An exception was encountered while trying to process a cube.
Cube Name: SystemCenterConfigItemCube
Exception Message: Object reference not set to an instance of an object.
Stack Trace:
at Microsoft.SystemCenter.Warehouse.Olap.OlapCube.GetPartitionsToProcess(Cube asCube)
at Microsoft.SystemCenter.Warehouse.Olap.OlapCube.Process(ManagementPackCube mpCube)

The key to the error is "GetPartitionsToProcess": one of my ConfigItem cube partitions was missing an object. To validate the partitions for a particular cube, I ran this query against DWDataMart:

select * from etl.CubePartition where CubeName = 'SystemCenterConfigItemCube'

This listed all of the associated partitions. Reviewing each one, I noticed that every measure group had FACTs for each month since June (the installation date), except the ComputerHostNetworkAdapter monthly partition, which was missing September. I scripted out the missing FACT using one of the other months' FACTs as a template.
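A sketch of the by-eye check I did above: given the months that actually have FACT partitions, list whatever is missing between installation and now. This is pure illustration; the year and the month layout are assumptions, not the DWDataMart schema.

```python
# Hypothetical helper: return months in [start, end] (as (year, month)
# tuples) that have no FACT partition.
def missing_months(present, start, end):
    out = []
    y, m = start
    while (y, m) <= end:
        if (y, m) not in present:
            out.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return out

# ComputerHostNetworkAdapter had June-August and October but no September:
have = {(2012, 6), (2012, 7), (2012, 8), (2012, 10)}
print(missing_months(have, (2012, 6), (2012, 10)))  # [(2012, 9)]
```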

I then re-ran the cube job successfully via PowerShell.

After further research, my theory is that the DWMaintenance job fails or stalls and is stopped. This would explain why I'm missing all the FACTs in a DIM. This happened to me again, and I solved it by recreating the FACTs.

 

Event ID 33545 – MP Sync did not associate the lower version of the Management Pack

This was due to an out-of-sync third party MP that I corrected with the vendor.

Event ID 4000 – A monitoring host is unresponsive or has crashed. The status code for the host failure was 2164195371.

This was occurring every 25 minutes. Under the guidance of a Microsoft engineer, we ran the SMTracing tool (C:\Program Files\Microsoft System Center 2012\Service Manager\Tools\SMTracing), and I provided the resulting capture to the engineer after recording the event occurrence. If this is fixed, I'll share the solution in a separate post. This is a very difficult issue to fix, and I'm still working with Microsoft to find a solution.

Root cause – the System Center Orchestration Configuration Library deployment never finishes. Check the Deployment Status; it's probably stuck in a running state.

The fix is to open a ticket with Microsoft Premier Support and ask them to fix the Deploy Sequence in the database. I don't fully understand what was done, so I won't post details until I do. It's a quick fix, but let the MS professionals walk you through it.

Event ID 11366 – The Microsoft Operations Manager Scheduler Data Source Module failed to initialize because some window has no day of the week set.

The fix for this is discussed here.

Here's an overview of the process I followed to solve problems with my cubes. In a nutshell, I didn't find a holistic solution to the failing process, and each experience is probably different. My goal is to give you quicker access to the various solutions scattered around the web.

How I Measured Success – A Goal of Processed Cubes

  1. Able to run Travis Wright’s script successfully end-to-end (this can take over an hour depending on how long it’s been since a good run).
  2. See the following via the \Data Warehouse\Data Warehouse\Data Warehouse Jobs display for each job:
    1. Enabled – Yes
    2. Status – Not Started
  3. See the following via the \Data Warehouse\Data Warehouse\Cubes display for each cube:
    1. Status – PROCESSED
    2. Schema Pending Changes – NO
    3. Last Processed Date – Current, or consistent with the time you processed the cube.
  4. The SCSM Data Warehouse jobs that process the cubes run successfully at their scheduled local times.

Note: Criterion 1 is probably the most important, as it provides a more realistic end-to-end view of what you'll see from your automated processing.

Things You Should Know Before Troubleshooting SCSM 2012 Cubes (IMHO)

  • Manually processing the cubes from the Analysis Server and then running the SCSM Data Warehouse jobs will not cure the problem in the long term, and it is not an approach recommended by the experts I talked to.
  • The Data Warehouse server PowerShell command (get-SCDWJobSchedule) reflects job schedules in GMT versus local time.
  • Log on to the SCSM servers with the same service account you use as the data warehouse RUN AS account.
  • Just because a Data Warehouse job is stopped doesn't mean its modules are. Verify module status with Get-SCDWJobModule -JobName "NAME OF JOB" (e.g., Get-SCDWJobModule -JobName "Process.SystemCenterWorkItemsCube"). Your Data Warehouse job will stall in a running state if a module is still running when you restart the process.
  • SCSM 2012 checks the edition of your SQL Server Analysis Services instance. If it's Enterprise, SCSM 2012 will process the cubes incrementally (PROCESS INCREMENTAL) once they have been processed fully (PROCESS FULL), which helps keep the scheduled jobs from failing. This functionality isn't available in Standard.
  • Read Danny Chen’s 8-part blog and become an expert on how this process works. 1, 2, 3, 4, 5, 6, 7, and 8.
  • If a cube job started via the console runs longer than an hour or two, that's not normal. Run the cube manually at the SSAS and you'll probably find a bad DIM. Unprocess bad DIMs, then run Process Full on them.

Troubleshooting Steps

Here are the paraphrased steps I followed to fix my cubes (details are in the section below) after they failed:

As with any code, you should run this in a test environment first.

  1. Ensured I was running SQL 2008 Enterprise.
  2. Looked at the Ops Manager event log for error ID 33573 – "The operation has been cancelled due to memory pressure." If you see this on your SSAS, add more RAM. I have 32 GB of RAM, so I haven't seen this issue, but there are a number of posts about it.
  3. Apply any pending schema changes (see below for details) via \Data Warehouse\Data Warehouse\Cubes — Apply Schema Changes
  4. Ran RUN-ETL.ps1. Successful? Then your cubes should run as scheduled. Failed? Follow the steps detailed in the section below, "Manually Verifying SCSM 2012 Cubes."
  5. Let your jobs run overnight and verify everything is working as expected.

Manually verifying SCSM 2012 Cubes

How to ensure your cubes will even process.

 

1. Via the OLAP server, run PROCESS FULL on all the cubes (process all, or just the failed cube).

Failures here were associated with dimensions missing key data. The error messages show the offending object, and you can correct this by processing the individual dimensions separately (see below). My failed DIMs showed a state of Process Update, so I always Unprocess and then Process Full. Eventually, all your cubes will process.

2. Unprocess the cubes at the SSAS.

This places the cubes in a Process Full requirement for the next processing run; if your watermark BatchId is 0, SCSM will complete a full process and update the watermark. Check the properties of each cube for an Unprocessed state.

3. Confirm that the DWDataMart database watermarks are set up for a Process Full.

When you manually process the cubes and then un-process them before your nightly runs, the watermark batchids are usually reset. To confirm this:

USE DWDataMart
GO

SELECT * FROM etl.CubePartition

This returns a result set showing either a BatchId from a completed process or a BatchId of 0. SCSM uses this watermark to determine the run type for the next scheduled event.

Anything other than 0 results in an incremental build, while 0 triggers a full process. To reset all of your cubes, simply update the BatchId to 0 after un-processing them manually:

update etl.CubePartition set WatermarkBatchId = 0 where CubeName = 'SystemCenterCubeNameHereCube'
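For what it's worth, the decision that watermark drives can be sketched like this (a minimal illustration; the non-zero BatchId below is a made-up example):

```python
# Sketch of the watermark rule described above: a WatermarkBatchId of 0
# forces a full process on the next scheduled run; anything else yields
# an incremental one (Enterprise SSAS only).
def next_run_type(watermark_batch_id: int) -> str:
    return "PROCESS FULL" if watermark_batch_id == 0 else "PROCESS INCREMENTAL"

print(next_run_type(0))      # PROCESS FULL
print(next_run_type(48231))  # example non-zero BatchId: PROCESS INCREMENTAL
```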

4.  Start the appropriate cube processing job for the failed cube and wait for the results.

I've spent a lot of time working through this issue and may have left out a step or tip, so comment below if you need more guidance.

DS

Posted in SCDW, SCSM 2012 | Tagged , | 9 Comments

Nightly Data Warehouse Cube Jobs – System Center Cube Jobs Do Not Run As Scheduled

Two out of my three Data Warehouse areas are running as expected: the Data Warehouse ETL and the cube processing are stable, but the scheduled nightly runs weren't kicking off as expected. DWMaintenance kicks off my process each night at 10 pm, and the other jobs are spaced throughout the night, allowing each job to complete before the next one starts.

Heads up – PowerShell Data Warehouse jobs display schedules in GMT, while your SCSM 2012 console shows your local time.
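A small sketch of that GMT-to-local conversion (the UTC-5 offset is just an example; substitute your own server's offset):

```python
# Illustrative helper: convert an "HH:MM" schedule time reported in GMT
# to local wall-clock time at a given UTC offset.
from datetime import datetime, timedelta, timezone

def gmt_to_local(hhmm: str, local_offset_hours: int) -> str:
    t = datetime.strptime(hhmm, "%H:%M").replace(tzinfo=timezone.utc)
    local = t.astimezone(timezone(timedelta(hours=local_offset_hours)))
    return local.strftime("%H:%M")

# A schedule shown as 03:00 GMT is 22:00 the previous evening at UTC-5:
print(gmt_to_local("03:00", -5))  # 22:00
```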

Once I manually set the schedules via PowerShell, the DWMaintenance job, the MPSyncJob, the ETL jobs, and all but one cube ran as expected. ConfigItemCube would not start as scheduled and returned the following error:

Event ID 11366 – The Microsoft Operations Manager Scheduler Data Source Module failed to initialize because some window has no day of the week set.

I had manually set all of my cubes to run daily via PowerShell after the GUI scheduler stopped working:

Set-SCDWJobSchedule -JobName Process.SystemCenterConfigItemCube -ScheduleType Weekly -WeeklyFrequency Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday -WeeklyStart 06:00

Set-SCDWJobSchedule -JobName Process.SystemCenterWorkItemsCube -ScheduleType Weekly -WeeklyFrequency Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday -WeeklyStart 07:00

There is a bug in the SCSM 2012 GUI and/or PowerShell scheduling process, and the way to fix it is to manually update the offending management pack, Microsoft.SystemCenter.Orchestration.Configuration.xml.

This management pack coordinates the Data Warehouse jobs. Since you can't export it individually from the console, use PowerShell to export the entire suite of MPs to a local directory:

Get-SCSMManagementPack -ComputerName <servername> | Export-SCSMManagementPack -Path <path>

Copy and save the Microsoft.SystemCenter.Orchestration.Configuration.xml to another location. You always want to update the copy and save the master copy.

After that, open Microsoft.SystemCenter.Orchestration.Configuration.xml and search for "SystemCenterConfigItem" in the XML text. Evaluate the <Scheduler> tag. Notice how mine has a sync time of 00:00 (midnight)? Now look at the Interval unit and its setting of 0, which is the repeat cycle. With these settings, the job runs each night at midnight and repeats every zero minutes, so it won't run at all.

<Scheduler>
<SimpleReccuringSchedule>
<Interval Unit="Minutes">0</Interval>
<SyncTime>00:00</SyncTime>
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>

The PowerShell and/or GUI scheduler hammered this entry. To correct the problem, copy and paste this working <Scheduler> block into each cube-processing scheduler section, changing the start times appropriately:

<Scheduler>
  <SimpleReccuringSchedule>
    <Interval Unit="Days">1</Interval>
    <SyncTime>07:00</SyncTime>
  </SimpleReccuringSchedule>
  <ExcludeDates />
</Scheduler>
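Before reimporting, you can sanity-check every <Scheduler> fragment for the zero-interval bug with a short script. This is a sketch that assumes the fragment shape shown above, including Microsoft's "SimpleReccuringSchedule" spelling:

```python
# Illustrative check: flag any <Scheduler> fragment whose interval is 0
# (the broken schedule that never fires).
import xml.etree.ElementTree as ET

def schedule_is_broken(fragment: str) -> bool:
    interval = ET.fromstring(fragment).find("SimpleReccuringSchedule/Interval")
    return interval is None or int(interval.text) == 0

bad = """<Scheduler><SimpleReccuringSchedule>
<Interval Unit="Minutes">0</Interval><SyncTime>00:00</SyncTime>
</SimpleReccuringSchedule><ExcludeDates /></Scheduler>"""
good = bad.replace('Unit="Minutes">0<', 'Unit="Days">1<')

print(schedule_is_broken(bad), schedule_is_broken(good))  # True False
```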

Reimport the MP with this command:

Import-SCSMManagementPack -ComputerName ussjcscsm002prd -Path <path>\Microsoft.SystemCenter.Orchestration.Configuration.xml

Wrap Up

After the import, check the Data Warehouse Operations Manager event log for this entry:

Event ID 1201 – New Management Pack with id: "Microsoft.SystemCenter.Orchestration.Configuration", version: "7.5.1561.0" received.

This is your indication that all went well.

You should un-process all your cubes at the SSAS (or not; your call) and let the regularly scheduled processes run overnight. This should fix your scheduling problems.

DS

Posted in SCDW, SCSM 2012 | Tagged , | 5 Comments