
ConfigMgr and back again

Automating life, one Bit at a time.

Active Package Distributions


Our ConfigMgr infrastructure is geographically dispersed, with many site offices and site servers in remote parts of the world, often connected over slow LAN and WAN links. This often leads to a massive backlog of active package distributions as packages are transferred between sites.

ConfigMgr ships with a report to enumerate active package transfers – All active package distributions (Report ID: 136) – but this is basically just a query on the v_DistributionPoint, v_Package and v_PackageStatusDistPointsSumm views in the database, returning all package records where the distribution state is not 0.

This is great; it’s very helpful to know which packages have been requested for transfer to the distribution points, but that’s about it. It doesn’t tell you anything genuinely useful, such as which packages are actively transferring, how much data has been transferred, or how much is remaining. If any packages get ‘stuck’ they remain in the report, and there’s no way of knowing whether they are still transferring or whether they have fallen into the transfer abyss.

The only place to locate this kind of information is the sender.log file on the site server sending the package. Looking at the sender.log for more than about 30 seconds during an active package distribution is enough to give me a headache though, never mind trying to interpret it. I therefore decided it would be a good idea to write a simple application that reads in and interprets the sender.log and displays all the details in a legible format.
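
As an illustration of the parsing approach, here is a minimal sketch in Python. The progress-line format used below is an assumption for demonstration only – real sender.log entries are worded differently and would need their own pattern.

```python
import re

# Hypothetical progress-line format, for illustration only; the real
# sender.log wording differs and would need its own regular expression.
PROGRESS = re.compile(
    r"Package (?P<pkg>\w+).*?sent (?P<sent>\d+) of (?P<total>\d+) bytes"
)

def parse_sender_log(lines):
    """Return the most recent per-package transfer progress seen in the log."""
    progress = {}
    for line in lines:
        m = PROGRESS.search(line)
        if m:
            sent, total = int(m.group("sent")), int(m.group("total"))
            progress[m.group("pkg")] = {
                "sent": sent,
                "total": total,
                "remaining": total - sent,
                "percent": round(100.0 * sent / total, 1),
            }
    return progress

sample = [
    "Package ABC00123, sent 52428800 of 104857600 bytes",
    "Package ABC00123, sent 104857600 of 104857600 bytes",
]
print(parse_sender_log(sample))
```

Keeping only the latest line per package is what turns the headache-inducing log into a live progress view: each package ID maps to its current sent/remaining figures.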

The below application is the result of that line of thought. I have made it available in the Downloads section [Link] in case anyone else suffers the same package distribution frustrations that I do, and I pray that Microsoft develop such a utility, or incorporate it into the interface for ConfigMgr 2012.

Sender Analyser Image

OSD with USMT password to Rebuild


In a New Build scenario, where a machine is PXE booted to initiate an OS deployment, there’s an option in the PXE service point properties on each PXE-enabled site system to specify a password required for computers to boot using PXE. This is very convenient, as it enables us to advertise the New Build task sequence to all machines and put in a check using the _SMSTSInWinPE = True task sequence variable to ensure the TS only runs in WinPE. This prevents (a) users being able to kick off a full rebuild from within Windows, and (b) users being able to PXE boot their own machine and kick one off without the password.

What happens, however, if you want to advertise a Refresh scenario build to all machines, so that a machine can be upgraded or rebuilt using a USMT task sequence without having to add individual machines to a collection on an ad-hoc basis? Luckily, the solution was very simple.

I wanted to replicate the password functionality of PXE boot builds, so I created a very simple six-line program to pop up a password box. If the correct password was entered, it would exit with code 0; if an incorrect password was entered, or the user closed the application using the X, it would exit with code 1. I packaged this up and sent it out to my DPs.
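
The exit-code contract is the whole trick, and can be sketched as follows. This is a hedged illustration, not the original six-line utility: the password value and function name here are hypothetical.

```python
EXPECTED = "rebuild"  # hypothetical password; the real one is not in the post

def password_gate(entered, expected=EXPECTED):
    """Exit-code contract of the pre-TS prompt: return 0 to let the parent
    program (the task sequence) continue, 1 to fail it. Closing the dialog
    with the X is modelled here as entered=None and also fails."""
    if entered is None:
        return 1
    return 0 if entered == expected else 1

print(password_gate("rebuild"))  # 0 -> task sequence continues
print(password_gate("wrong"))    # 1 -> task sequence fails
```

Any non-zero exit code from the pre-run program fails the chained parent program, which is exactly how the task sequence gets gated below.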

I then opened my task sequence’s Properties and navigated to the ‘Advanced’ tab, where there is an option to ‘Run another program first’, just like in package/program chains. I selected my password prompt application as below:

Task Sequence Advanced Tab Image

I could then advertise my task sequence to any machine I wanted, and when I ran the advert, I would be presented with the following dialogue box:

Rebuild Password Image

If I enter an incorrect password or close using the X, it fails with exit code 1, which in turn fails the parent program (the task sequence). If I enter the correct password, it returns code 0 and the rebuild process continues to execute.

The good thing about doing it this way is that I can also advertise the same task sequence to my New Builds collection. When machines PXE boot, they receive just the task sequence and not the pre-TS password application, so they can boot as normal but still get the password prompt from the PXE service point. I now have a single task sequence that installs 64-bit Windows 7 on machines with an x64-capable processor and 32-bit Windows 7 on those without; it runs a New Build scenario when PXE booted, and a Refresh with USMT and Package Mapping scenario when run within Windows after entering the correct password. All thanks to just six lines of code.

PackageMapping – Populating Relationships


When creating relationships between Add/Remove Programs display names and package ID/program names, it occurred to me that manually modifying the database is tedious and time-consuming. Even creating procedures to import directly from the ARP table in the CCM database takes time, since you then have to remove the junk items and generally tidy up the clutter.

I therefore decided to spend 15 minutes putting together a really simple front-end to manage the links. It allows the creation of mappings based on a package/program (i.e. mapping the same program to multiple ARP names), which is much easier to manage than creating individual ARP display name values and then having to input the same package ID and program for each one. It also makes it much easier to see what you already have, with the ability to search existing packages or mappings to modify or delete them, or to add new ones.

MDT DB App Image

MDT DB App Image 2

I have made this available via the Downloads section of this site [Link]. There are, however, a few things to bear in mind:

  1. I am not a developer, therefore what I’ve put together is probably poorly coded and full of bugs, but it seems to do the job.
  2. It requires a precise design for the PackageMapping table (shown below), including the addition of a ‘Comments’ field which isn’t there as standard.
  3. It requires .Net Framework 3.5 client on the machine that’s running it, but a database server/instance and database can be specified for remote execution.
  4. The user running the program will require Connect, Select, Update, Insert and Delete permissions on the MDT Database to perform all functions.
  5. Always make a full backup of the database before using any third-party tools on it, especially mine.
  6. I provide the tool free of charge in the hope that others can benefit from it, but I cannot be held responsible for any loss of data that results from its use.

PackageMapping table design required:

  • ARPName [Primary Key] : nvarchar(255) : Allow Nulls – False
  • Packages : nvarchar(255) : Allow Nulls – True
  • Comments : nvarchar(MAX) : Allow Nulls – True
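
The required design above can be expressed as a T-SQL sketch; the schema and database context are assumptions based on the field list in this post.

```sql
-- Sketch of the required PackageMapping table, run against the MDT database.
-- The 'Comments' field is the non-standard addition mentioned above.
CREATE TABLE dbo.PackageMapping (
    ARPName  nvarchar(255) NOT NULL PRIMARY KEY,  -- Add/Remove Programs display name
    Packages nvarchar(255) NULL,                  -- PackageID:ProgramName mapping
    Comments nvarchar(MAX) NULL                   -- free-text notes
);
```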

Package Mapping Design Image

PackageMapping Configuration


There are numerous guides showing how to configure MDT Package Mapping (otherwise known as Application Mapping or Application Migration), including an MSDN article (Link), so that is not the aim of this post. I’ve just configured Package Mapping for use with OSD and came across some obstacles along the way; the aim of this post is to outline those obstacles and the steps taken to overcome them.

First of all I created the RetrievePackages stored procedure as per the MSDN article above. Like many before me, I decided to modify it slightly to use the Add/Remove Programs display name rather than the Product Code, for simplicity and manageability. My stored procedure therefore looked like this:

/****** Object: StoredProcedure [dbo].[RetrievePackages] ******/
CREATE PROCEDURE [dbo].[RetrievePackages]
@MacAddress CHAR(17)
AS
SET NOCOUNT ON

/* Select and return all the appropriate records
   based on current inventory, matching on the
   Add/Remove Programs display name */
SELECT DISTINCT p.*
FROM PackageMapping p, v_GS_ADD_REMOVE_PROGRAMS a, v_GS_NETWORK_ADAPTER n
WHERE p.ARPName = a.DisplayName0 AND
      a.ResourceID = n.ResourceID AND
      n.MACAddress0 = @MacAddress

I then modified the CustomSettings.ini within my Settings package to include the elements described in the MSDN article and sent it out to my distribution points. I populated the PackageMapping table with some test application display names and linked those to their equivalent package ID and program name. I ran the deployment process and, of course, it failed.
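
For reference, the CustomSettings.ini elements in question follow the usual MDT database-rules shape, sketched below. The server, database and share names are placeholders, not values from this post.

```ini
; Sketch only - server, database and share names are placeholders.
[Settings]
Priority=DynamicPackages, Default

[DynamicPackages]
SQLServer=MDTSQL01
Database=MDTDB
Netlib=DBNMPNTW
SQLShare=DeployLogs$
StoredProcedure=RetrievePackages
Parameters=MACAddress
```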

There was no task sequence failure and nothing of interest was logged in the SMSTS.log; the failure occurred while performing tasks against the MDT database. The following was logged in the BDD.log when trying to execute the RetrievePackages procedure:

Error -2147217887 : Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.

This error didn’t really indicate anything in particular to me, so I decided that, like 90% of other database issues, the problem was down to permissions. The connection was being attempted using the computer’s machine account, and since this account would change for every machine being built, I granted Connect, Select and Execute permissions on the MDT database to ‘Domain Computers’. I kicked off the process again, and again it failed with the same problem.
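
The grants described above might look like the following T-SQL sketch; the domain, database and object names are placeholders.

```sql
-- Sketch only: domain and database names are placeholders.
USE MDTDB;
-- Assumes a server-level Windows login for the group already exists, e.g.:
-- CREATE LOGIN [CONTOSO\Domain Computers] FROM WINDOWS;
CREATE USER [CONTOSO\Domain Computers] FOR LOGIN [CONTOSO\Domain Computers];
GRANT CONNECT TO [CONTOSO\Domain Computers];
GRANT SELECT ON OBJECT::dbo.PackageMapping TO [CONTOSO\Domain Computers];
GRANT EXECUTE ON OBJECT::dbo.RetrievePackages TO [CONTOSO\Domain Computers];
```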

The step where the information in the CustomSettings.ini is retrieved, and the data it references (in this case, the MDT database) is queried, is part of the MDT task sequence item set ‘Gather’. Looking in the task sequence for all instances of Gather, I noticed that the only one before the point where the machine gets joined to the domain is a local-only gather; there was another Gather that uses the rules within CustomSettings.ini, but this came after the Join Domain step. The point where the items retrieved from the CustomSettings.ini get processed is the State Restore. I therefore needed another Gather between joining the domain and performing the state restore. It was at this point that I decided to restructure things slightly to accomplish another requirement…

Package Mapping can’t be turned on or off on a per-machine basis: it’s either on for the task sequence or off for it, and changing that involves modifying the CustomSettings.ini and sending the modified file out to all DPs. I decided I wanted to do something about this. I wanted one Settings package that would stay on the DPs, the ability to test Package Mapping before making it live, and one task sequence that uses it alongside one that does not, so that there is a choice of whether or not to migrate applications.

CustomSettings.ini Image

The permission changes I’d already made worked well: a connection to the database was successfully made, the procedure was successfully executed, the data was successfully retrieved, and the packages were installed as additions to the MDT ‘PACKAGES’ base variable during the ‘Install Multiple Applications’ step later in the task sequence.

I then ended up with two task sequences: one including the additional Gather to retrieve the package mappings, which I could use to migrate applications (and to initially test the process), and one without this additional Gather, which could be used to migrate user files and settings but not applications.

Hardware Inventory Issues


We were doing a large-scale roll-out of an important piece of software to over 12,000 clients. When checking the reports for advertisement status, the number of successful installations was as hoped for and incredibly positive. We knew that a number of machines (predominantly laptops) would need to run it over the days that followed, when they next contacted the network, so we left it a week before checking again. A week went by, the vast majority of installs had occurred and the report now indicated many more successful installations. However, the collection used to make up the pool of targeted machines didn’t reflect the numbers we were seeing in the advertisement status report.

The collection query was designed to show all machines that did not have ApplicationX installed, and was set to update its membership daily. This collection, however, still contained over 1,000 machines that were reporting back as having successfully installed the program, indicating that although the program had run successfully, the software was not showing as installed.
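
A collection query of that shape is typically written as WQL against the Add/Remove Programs inventory class, along these lines (the display name is a placeholder, not the actual product involved):

```sql
select SMS_R_System.ResourceID, SMS_R_System.Name
from SMS_R_System
where SMS_R_System.ResourceID not in
    (select ResourceID from SMS_G_System_ADD_REMOVE_PROGRAMS
     where DisplayName = "ApplicationX")
```

Note that this logic depends entirely on hardware inventory data, which is precisely why stale inventory kept machines in the collection.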

Ordinarily this indicates a problem with the query, or a program exiting with code 0 when in fact it failed. After much head-scratching I generated a list of a few machines that had reported back as having successfully run the program but persisted in the collection. Execmgr.log on the clients indicated that the program had indeed run (for the right amount of time) and had then exited with code 0. I then selected one of the clients and opened Resource Explorer. There was no entry for the program in question in the Add/Remove Programs node. This prompted me to look at the Workstation Status node, where I found that the last hardware inventory had taken place over a week ago (despite the client agent being set to perform inventory daily).

I then checked a couple of the other clients in question and noticed they all had the same last inventory date.

I then ran the built-in report “Computers not inventoried recently (in a specified number of days)”  (Report ID: 72) for the last 9 days and it brought back over 2000 resources, including the 1000+ that were missing from the collection. It was clear to see from the list that these resources all shared the same Management Point, a Primary site server.

At this point I assumed it was a problem with clients either performing hardware inventory or sending the inventory data to the management point. I confirmed the client agent was still set to perform Hardware inventory daily, which it was. I checked a few clients, performed a manual hardware inventory cycle and observed the inventoryagent.log on the client and saw the successful completion of the inventory and the sending of the report to the management point:

Inventory Agent Image

I then hopped onto the server and monitored the dataldr.log and MP_Hinv.log. Nothing was logged for the clients in question in either of the two logs, nor was anything created in the dataldr inbox.

This therefore indicated that the inventory data wasn’t even making it to the management point, rather than the issue being inserting the data into the database.

I was recommended the MPTroubleshooter tool (which I’d never heard of before, probably due to a lack of need for it) from the ConfigMgr 2007 Toolkit (Link). Running this against the problematic management point immediately highlighted the cause of the issue (“If the management point is Windows Server 2008 or higher, verify that the BITS Web Extensions for ConfigMgr is enabled – Failed”).

MPTroubleshooter Image

I opened up IIS Manager 7 on the management point, expanded the CCM site, selected the CCM_Incoming folder and then at the bottom of the features view, opened BITS Uploads. This is what I saw:

BITS Uploads Image

The check box ‘Allow clients to upload files’ wasn’t checked, so client inventory files never made it to the management point. I enabled this option again and monitored the MP_Hinv.log on the management point, and within seconds hundreds of inventory files began flooding through.

Looking at the timestamp of the last hardware inventory on the 1,000+ clients that had failed to send inventory data, it was the same day for all of them: the day that KB977384 and ConfigMgr 2007 R3 were installed on all primary site servers. This was, however, the only instance of this happening.

PXE Service Point Failures


When performing a general infrastructure health check and reviewing the site status messages one day, I noticed that for a few sites, in the Component Status view, the SMS_PXE_SERVICE_POINT component was showing as Critical Status due to an Availability of ‘Failed’:

Component Status Image

I immediately contacted the office in question and was informed that PXE was functioning fine. I hopped onto the secondary site server in question and checked the Windows Deployment Services service and this was running. I opened up the SMSPXE.log from the client logs directory on the server and observed machines successfully contacting the PXE server and retrieving advertised task sequences.

I then opened up the pxecontrol.log from the server logs directory and the problem became evident. The log was reporting “PXE test request failed, status code is -2147467259, ‘Error receiving replies from PXE server’”:

pxecontrol.log Image

The first IP address used to perform the PXE test was that of an iSCSI adapter. This obviously failed, and the subsequent adapters failed too. All our PXE Service Points are set to respond to requests on all adapters. The server list used by the PXE service point to perform availability checks is populated with the addresses of all network adapters on the system, in the order defined in the Adapters and Bindings connection list. I confirmed the iSCSI adapter was at the top of the connection list, so I changed the priority so that the local NIC was at the top of the list and first in the priority order. This had the result shown below:

Adapter Order Image

pxecontrol.log After Image

The change was detected in the registry and applied. Exactly 5 minutes later, the local addresses were added to the array in the revised order.

Upon performing the first test with the correct NIC, the request succeeded and no further test was necessary. Shortly after, this then updated the status in the ConfigMgr console; the component status went green and Availability showed as ‘Online’:

Component Status - After Image

Duplicate Execution Requests


A while ago we started noticing a problem with Duplicate Execution Requests being shown in the execmgr.log on clients (“A Duplicate Execution Request is found for program <X>”). In general, this problem occurs when a client receives a mandatory advert for a program that it already has an execution request for but has yet to run. The client sees that it now has two advert execution requests in the system for the same program and, not wanting (or being able) to execute one in priority over the other, it logs a Duplicate Execution Request and executes neither.

In our environment we use a combination of collection maintenance windows, Wake-on-LAN for out-of-hours deployments, dynamically populated collections based on installed software, and recurring adverts. The combination of these, in particular the latter, led to a number of these events being logged. Here is an example of the sort of thing that was happening:

Example environment:

A mandate exists for all software installations to occur outside of working hours.

  • CollectionX contains a dynamic collection of machines with ProductX NOT installed.
  • CollectionX has an assigned maintenance window of 01:00 – 04:00 for out-of-hours deployments.
  • Hardware inventory is run daily and machines that get ProductX installed, drop from CollectionX.
  • AdvertX targets CollectionX with a recurring mandatory daily advert to install ProductX.
  • AdvertX is scheduled to run at 03:00 every day.

Example process:

  • MachineX is powered on at 08:30 on Monday by UserX.
  • MachineX receives policy for AdvertX at 08:33.
  • AdvertX changes state to waitingServiceWindow.
  • MachineX is shut down by UserX at 17:30 on Monday, having not run the program.
  • MachineX is powered on by Wake-On-LAN by the recurring advert at 02:00 on Tuesday.
  • MachineX receives a new policy for the recurring advert.
  • MachineX has not yet run the policy from Monday that is still pending a service window.
  • MachineX therefore has 2 mandatory requests for the same program in the system and cannot run either.
  • This process repeats every day thereafter, resulting in the software never getting installed.
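
The client-side behaviour in the sequence above can be sketched as a simplified model (this is an illustration of the logic described, not the real execmgr agent code):

```python
def accept_request(pending, program):
    """Simplified model of the behaviour described above: a new mandatory
    request for a program that already has one pending is rejected, so the
    client ends up running neither request."""
    if program in pending:
        return "A Duplicate Execution Request is found for program " + program
    pending.add(program)
    return "Execution request accepted for " + program

pending = set()
print(accept_request(pending, "ProductX"))  # Monday's policy is accepted
print(accept_request(pending, "ProductX"))  # Tuesday's recurrence is a duplicate
```

Because the first request is never removed (it is still waiting for a service window), every later recurrence hits the duplicate branch, which is why the cycle repeats daily.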

There are a number of potential solutions (not all applicable to our scenario):

  1. Set the advertisement start date/time to the same as the mandatory assignment date/time so that a machine cannot receive the policy before it is set to run it.
  2. Set the advert to ignore maintenance windows so that if the machine misses the mandatory assignment date/time, it will run the program at the next available moment rather than the next recurrence.
  3. Turn recurrence off and set the program to always re-run (to include re-installations) so that the advert only goes out once to the desired collection, and then deal with any failures separately.
  4. Create a maintenance window during the day that would allow installations before UserX logs off (for example, if the reason for the out-of-hours maintenance window is to reduce the load around logon).

The following image shows an example of the behaviour as logged by execmgr. The advert recurs every week. In this example, the machine received the policy some time before 04/01/2011 and was then woken at 03:00 to install it, whereupon it received the second execution request and from then on permanently had multiple requests in the system. Note that in the example below there is also a daytime maintenance window starting at 11:00.

Duplicate Execution Request Image
