I’ve been using USMT via OSD to migrate user profiles and settings using a zero-touch approach with no user interaction at all. This was working fine but I thought it would be nice if as well as user data, it also migrated applications. The UDI wizard discovery and selection page is a nice feature but I wanted it to be zero-touch, so I decided I’d incorporate the same discovery and selection but without displaying the wizard. This way, I could still use the same rules and mappings and the UDI wizard designer UI to control the migration behaviour, but without displaying the wizard to the user.
Accomplishing this was pretty easy: it just required three new task sequence steps in place of the UDI wizard call, plus a custom VBScript.
The first step calls the AppDiscovery executable just like the UDI wizard does, and passes it the required parameters.
Command Line: AppDiscovery.exe /readcfg:"%scriptroot%\UDIWizard_Config.xml.app" /writecfg:"%temp%\AppDiscoveryresult.xml.app" /log:"%temp%\AppDiscovery.log"
Where /readcfg points to the location of the Configuration XML from UDI, /writecfg points to the location to write the discovery XML and /log points to the location to create the log file.
Start In: %deployroot%\tools\osdresults
The second step runs the custom VBScript, which reads the XML file generated by AppDiscovery.exe, finds the applications that were detected and selected, and sets them to their respective task sequence variables.
Command Line: cscript.exe "%ScriptRoot%\Custom\SetAppVariables.vbs" "%temp%\AppDiscoveryresult.xml.app" 64
Where ‘Custom’ is the name of a custom folder within the MDT Files Package ‘Scripts’ folder, AppDiscoveryresult.xml.app is the same as the /writecfg switch in the ‘Run AppDiscovery’ step above, and 64 is the architecture of the Operating System being deployed.
The third step runs one of the built-in UDI scripts to serialize the XML file which sets the task sequence variable ‘ApplicationList’ from the XML file contents.
Command Line: cscript.exe "%deployroot%\tools\osdresults\OSD_SerializeXmlApp.vbs" > "%temp%\SerializeXmlApp.log"
A link to the vbscript can be found Here (in .txt format to allow upload - save or rename to .vbs)
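For anyone who wants to see the shape of the logic without downloading the script, the core of SetAppVariables.vbs is just XML parsing plus variable assignment. Here is a rough Python equivalent of that idea; note that the element and attribute names below are illustrative placeholders, not the real UDI result schema, and the task sequence variable write (normally done via the Microsoft.SMS.TSEnvironment COM object) is stubbed out as a plain dictionary:

```python
import xml.etree.ElementTree as ET

def get_selected_apps(xml_text, arch):
    """Return the names of applications flagged as both detected/selected
    and matching the target OS architecture.
    NOTE: element/attribute names here are illustrative, not the UDI schema."""
    root = ET.fromstring(xml_text)
    selected = []
    for app in root.iter("Application"):
        if app.get("Selected") == "true" and app.get("Arch") in (arch, "any"):
            selected.append(app.get("Name"))
    return selected

def set_ts_variables(apps):
    """The real script writes task sequence variables via the
    Microsoft.SMS.TSEnvironment COM object; here we just build the mapping."""
    return {"PACKAGES%03d" % (i + 1): name for i, name in enumerate(apps)}

sample = """<Applications>
  <Application Name="7-Zip" Arch="any" Selected="true"/>
  <Application Name="OldTool" Arch="32" Selected="true"/>
  <Application Name="Viewer" Arch="64" Selected="false"/>
</Applications>"""

print(set_ts_variables(get_selected_apps(sample, "64")))
```

The architecture argument mirrors the `64` passed on the command line in the step above, so 32-bit-only applications can be skipped when deploying a 64-bit OS.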
In ConfigMgr 2012 there’s a handy little node on the ‘Monitoring’ panel called ‘Content Status’ under ‘Distribution Status’. This will return all content, whether that is Applications, Packages, OS Images, Drivers etc, and the current distribution status of them.
If you click one of them, you’ll get a nice pie chart showing a summary of Failed, In Progress and Successfully distributed/verified on your distribution points.
You can then click ‘View Status’ and it will give a breakdown of the individual distribution points for that package and a description of their individual status (i.e. awaiting content, failed to validate hash etc). Here’s an example of the ‘In Progress’ and ‘Failed’ nodes for a package recently distributed:
This is great: it shows which distribution points have failed distribution, which are in progress, and which have succeeded, but only for that package. What if you wanted to find all packages in a specific distribution state, or even those with a specific status message (e.g. all those with a content hash mismatch, or all those awaiting prestaged content)?
I couldn’t find a built-in report to show me this, or at least not one that replicated the detail that I was seeing in the console, so I decided to write my own.
The SQL below will generate a report similar to the one in the screenshot below. It shows the Package ID for the content; the name and type; the target distribution point with site code; the status (Success, In Progress or Failed, with an additional option of ALL); the actual message as displayed in the console; and the number of groups the distribution point is a member of. This can then be exported (e.g. to Excel) for further manipulation, sorting and filtering.
Main SQL Query:
SELECT SMS_DistributionDPStatus.PackageID AS [Package ID],
dbo.v_Package.Manufacturer + ' ' + dbo.v_Package.Name + ' ' + dbo.v_Package.[Version] AS [Content Name],
REPLACE(dbo.RBAC_SecuredObjectTypes.ObjectTypeName, 'SMS_', '') AS [Content Type],
REPLACE(SMS_DistributionDPStatus.Name, '\', '') AS [Distribution Point],
CASE SMS_DistributionDPStatus.MessageState
WHEN 1 THEN 'Success'
WHEN 2 THEN 'In Progress'
WHEN 4 THEN 'Failed'
END AS [Status],
CASE SMS_DistributionDPStatus.MessageID
WHEN 2303 THEN 'Content was successfully refreshed'
WHEN 2324 THEN 'Failed to access or create the content share'
WHEN 2330 THEN 'Content was distributed to distribution point'
WHEN 2384 THEN 'Content hash has been successfully verified'
WHEN 2323 THEN 'Failed to initialize NAL'
WHEN 2354 THEN 'Failed to validate content status file'
WHEN 2357 THEN 'Content transfer manager was instructed to send content to the distribution point'
WHEN 2360 THEN 'Status message 2360 unknown'
WHEN 2370 THEN 'Failed to install distribution point'
WHEN 2371 THEN 'Waiting for prestaged content'
WHEN 2372 THEN 'Waiting for content'
WHEN 2380 THEN 'Content evaluation has started'
WHEN 2381 THEN 'An evaluation task is running. Content was added to the queue'
WHEN 2382 THEN 'Content hash is invalid'
WHEN 2383 THEN 'Failed to validate content hash'
WHEN 2391 THEN 'Failed to connect to remote distribution point'
WHEN 2398 THEN 'Content Status not found'
WHEN 8203 THEN 'Failed to update package'
WHEN 8204 THEN 'Content is being distributed to the distribution point'
WHEN 8211 THEN 'Failed to update package'
ELSE 'Status message ' + CAST(SMS_DistributionDPStatus.MessageID AS VARCHAR) + ' unknown'
END AS [Message],
SMS_DistributionDPStatus.GroupCount AS [Group Count]
FROM dbo.vSMS_DistributionDPStatus AS SMS_DistributionDPStatus
INNER JOIN dbo.RBAC_SecuredObjectTypes
ON SMS_DistributionDPStatus.ObjectTypeID = dbo.RBAC_SecuredObjectTypes.ObjectTypeID
LEFT OUTER JOIN dbo.v_Package
ON SMS_DistributionDPStatus.PackageID = dbo.v_Package.PackageID
WHERE (SMS_DistributionDPStatus.MessageState = @MessageState) OR (@MessageState = 999)
ORDER BY PackageID ASC
Prompt SQL Query:
SELECT CASE SMS_DistributionDPStatus.MessageState
WHEN 1 THEN 'Success'
WHEN 2 THEN 'In Progress'
WHEN 4 THEN 'Failed'
END AS [Status],
SMS_DistributionDPStatus.MessageState AS [State Value]
FROM vSMS_DistributionDPStatus AS SMS_DistributionDPStatus
UNION
SELECT 'ALL' AS [Status],
999 AS [State Value]
Note: due to the way WordPress converts apostrophes and quotation marks, it’s probable that a direct copy and paste of the SQL will not run properly, please be sure to replace apostrophes with SQL-accepted quotes.
There are some unknown message IDs that I have missed off, I will add them in as and when I come across them. Please feel free to comment with any new ones and I will amend the post accordingly.
Or if there is a table in the ConfigMgr database that contains them all, please let me know.
We ran into an issue a couple of months back where one of our secondary sites lost connectivity with its primary. The following was observed in the mpcontrol.log on the secondary:
Call to HttpSendRequestSync failed for port 80 with status code 500, text: Internal Server Error
Successfully performed Management Point availability check against local computer.
Initialization still in progress.
When running the Management Point Troubleshooting tool for the site, the above was confirmed when the MPLIST HTTP or HTTPS request functionality test failed as below.
Test MPLIST HTTP or HTTPS request functionality.
Detail result information:
Exception Message:Fail to retrieve the content in [HTTP://SECSCCM01:80/SMS_MP/.SMS_AUT?MPLIST].
Exception Message:The remote server returned an error: (500) Internal Server Error.
When scanning the mpcontrol.log further, the following SQL errors were also present:
*** *** Unknown SQL Error!
*** Failed to connect to the SQL Server.
Failed to get connection to the configured SQL database.
Failed to connect to the configured SQL database.
Reverting back from using the SQL connection account; user name is now ‘SYSTEM’.
Failed to get the current CLR Enabled configuration setting for the configured SQL Server hosting the database.
A similar SQL related error was also observed in the mp_getauth.log on the secondary:
MPDB ERROR – CONNECTION PARAMETERS
SQL Server Name : PRISCCM01\CCM
SQL Database Name : SMS_PRI
Integrated Auth : True
MPDB ERROR – EXTENDED INFORMATION
MPDB Method : Init()
MPDB Method HRESULT : 0x80004005
Error Description : null
OLEDB IID : null
ProgID : null
MPDB ERROR – INFORMATION FROM DRIVER
CMPDBConnection::Init(): IDBInitialize::Initialize() failed with 0x80004005
This prompted me to test whether this was a problem with ConfigMgr connecting to the primary site database, or a general server issue connecting with the database. To test this, I created a test.udl file on the desktop of the secondary, opened it and inserted the connection information for the database:
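For anyone unfamiliar with the technique: a .udl file is just a text file that Windows opens in the OLE DB connection dialog, which makes it a quick, install-free way to test database connectivity. The contents looked roughly like the below (the server and database names are taken from the mp_getauth.log excerpt above):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=SMS_PRI;Data Source=PRISCCM01\CCM
```

Double-clicking the file opens the Data Link Properties dialog pre-populated with these values, ready to test the connection.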
When attempting to pull down the drop-down menu to ‘select the database on the server’, the following error would appear:
[DBNETLIB][ConnectionOpen (Connect()).]Specified SQL server not found.
Login failed. Catalog information cannot be retrieved.
Testing the same settings on any other secondary site returned the full list of databases so I knew that the issue was specific to this system.
The next step was to try and diagnose what would be causing the connection issue. My first thought was authentication but after performing several tests, including connection attempts using my domain credentials that work from other sites, authentication was ruled out.
I then decided to try running a netstat -a to view open connections and port usage. The problem soon became evident at this point. The below is the result of the netstat:
SECSCCM01 TCP 10.89.15.2:60674 PRISCCM01:microsoft-ds ESTABLISHED
The connection between the secondary site and the primary site database was being attempted on the Microsoft Directory Services port (445, shown by netstat as microsoft-ds) rather than on the SQL instance's current dynamic TCP port (49395), as seen on a healthy secondary below:
SECSCCM02 TCP 10.89.15.3:60674 PRISCCM01:49395 ESTABLISHED
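Incidentally, this check is easy to script if you want to sweep several site servers at once: flag any established connection to the SQL host on the SMB port instead of a high dynamic port. A minimal Python sketch, fed the two example lines above (real netstat output has header lines and slightly different columns, so treat this as an illustration):

```python
def find_smb_connections(netstat_lines, server):
    """Return lines where the remote endpoint is the given server on
    port 445 (displayed by netstat as 'microsoft-ds') - a sign the
    client is not using the SQL instance's dynamic TCP port."""
    hits = []
    for line in netstat_lines:
        parts = line.split()
        # remote endpoint is the second-to-last column in these rows
        if len(parts) >= 4 and parts[-2].startswith(server + ":"):
            port = parts[-2].split(":")[1]
            if port in ("445", "microsoft-ds"):
                hits.append(line)
    return hits

sample = [
    "SECSCCM01 TCP 10.89.15.2:60674 PRISCCM01:microsoft-ds ESTABLISHED",
    "SECSCCM02 TCP 10.89.15.3:60674 PRISCCM01:49395 ESTABLISHED",
]
print(find_smb_connections(sample, "PRISCCM01"))
```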
This protocol/port issue took me to the internet, where I found a registry key that controls client-specific SQL Server settings for that server. I navigated to the following key on the secondary site and observed the screenshot below:
For some reason, connection-specific settings had been set within this key by a program or application, and these were causing connections to the primary site to fail. Comparing this key on a handful of other sites, it was evident that some of the subkeys were unique to this secondary site. I exported the ‘Client’ key as a backup, so that I could revert my changes, and then deleted the ‘ConnectTo’, ‘DB-Lib’ and ‘SNI10.0’ subkeys.
Within about 5 minutes, on the next connection retry interval, the logs started showing connection success, HTTP Status 200, the udl file connected successfully, netstat showed the correct port, Management Point Troubleshooting tool completed successfully and ConfigMgr reported back that everything was working again with no errors.
I have no idea what caused that key to get written, but since removing it, the server has not experienced any further problems.
Like many others out there, we have been using ConfigMgr to perform automated software uninstalls for applications that haven’t been used in a certain period (90 days in our case due to licensing restrictions). These work well but have many drawbacks. Any piece of software has to be set to be metered using Software Metering, in our case, 90 days prior to being able to use any of the data brought back from it. You also need to set up collections of machines that match metering criteria involving long and complicated resource-intensive queries. On top of this you need to create uninstall programs for each individual version of the programs to uninstall and finally set up adverts to remove the software on a schedule.
We have recently started using a product called AppClarity by 1E to handle our software purchase data, installation base and compliance. It performs all of these functions fantastically and is a great software asset management tool, but in addition, and this is the feature this post is primarily about, it offers a software reclaim component. This allows, at administrative discretion, unused and potentially unused software to be removed from systems on a per-application basis, through an agentless reclaimer advertised via ConfigMgr and some intelligent algorithms on the back end.
Unlike metered software uninstalls setup in ConfigMgr, software does not have to be pre-configured to be metered, nor do any uninstall collections and queries need to be setup. On top of this, individual programs for uninstalls do not need to be setup, as AppClarity will use the native uninstall string specific to that program to perform the removal.
Sometimes, however, the native uninstall string is not enough. We ran into an issue recently trying to reclaim our unused Microsoft Project 2007/2010 and Visio 2007/2010 licenses. Due to some installation issue, whether because of a previous upgrade or repackaging that we’d done, uninstalling the aforementioned products required the original installation media. In our case, as these were installed via ConfigMgr, the uninstall had to be pointed back to the source media on the distribution point, so the native uninstall string used by AppClarity was failing to remove these products. However, the newest release of AppClarity at the time of posting (2.1) comes with a feature to overcome this obstacle: Custom Command Lines. This allows the addition of shell-based command strings that will be attempted in addition to the native uninstall string during uninstallation. We were able to uninstall software that required source media on the distribution point by performing the following steps, the principle being to use environment variables to point to the referenced location.
The Custom Command Lines accept environment variables (such as %TEMP%, %PATH%, etc.) in commands, so we created the following VBScript to set environment variables pointing to the location of the packages. Because the local distribution point from which to retrieve content is queried on demand by the execution engine at runtime, and is not stored in WMI or the registry, we needed to get this from the execution of the script. Because there may be multiple package directories on the distribution point, and the script may not be run from the same one as the required source media, multiple environment variables are set to cover all existing locations.
The script is run just by calling the below .vbs, or can be called with the /remove switch which will delete all uninstallation path environment variables previously set, as part of a cleanup program.
On Error Resume Next

Set oArgs = WScript.Arguments
Set oShell = CreateObject("WScript.Shell")
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oEnv = oShell.Environment("System")

bRemove = False
If oArgs.Count > 0 Then
    If Trim(LCase(oArgs(0))) = "/remove" Then bRemove = True
End If

If bRemove = False Then
    ' Set UNINSTPATH1 to the package share this script was run from
    sCurrentDir = oFSO.GetAbsolutePathName(".")
    sPackageDir = Left(sCurrentDir, InStrRev(sCurrentDir, "\") - 1)
    oEnv("UNINSTPATH1") = sPackageDir

    ' Check every other drive letter for a package share (e.g. SMSPKGE$)
    ' and set an additional variable for each one found
    sDrive = Left(Right(sPackageDir, 2), 1)
    intDriveCount = 2
    For i = Asc("A") To Asc("Z")
        If UCase(Chr(i)) <> UCase(sDrive) Then
            strPath = Left(sPackageDir, Len(sPackageDir) - 2) & Chr(i) & "$"
            If oFSO.FolderExists(strPath) Then
                oEnv("UNINSTPATH" & intDriveCount) = strPath
                intDriveCount = intDriveCount + 1
            End If
        End If
    Next
Else
    ' /remove switch: delete all previously set uninstall path variables
    For i = 1 To 26
        oEnv.Remove "UNINSTPATH" & i
    Next
End If
The next stage was to advertise this out to all non-server systems that would be targeted for software reclaim. This needed to be run from the distribution point so as to return the content location to be stored in the environment variable(s). A recurrence schedule was set to make this run every day during the day; it takes just a couple of seconds to execute.
Because this needed to be run with administrative permissions, it would run under the local system context. The one drawback of this is that environment variables set using the local system account do not take effect until after a reboot has been performed (http://support.microsoft.com/kb/821761). Therefore, by running this daily, any machine would be at most one day out of date (assuming a reboot is performed), should it for some reason move to a different site.
Once the package/program and advert were set up, the only thing that remained was to set up the Custom Command Lines in AppClarity and deploy the reclaimer on a schedule some time after the environment variable script was run. Because none of our 80 DPs have more than 3 package directories, only three custom commands were necessary per program, however more could be added if required. The below shows an example of the command lines set up to remove Project Professional 2010 (Package ID CEN00001)
"%UNINSTPATH1%\CEN00001\Setup.exe" /uninstall prjpro /config PrjPro.WW\PrjPro_Uninstall.xml
"%UNINSTPATH2%\CEN00001\Setup.exe" /uninstall prjpro /config PrjPro.WW\PrjPro_Uninstall.xml
"%UNINSTPATH3%\CEN00001\Setup.exe" /uninstall prjpro /config PrjPro.WW\PrjPro_Uninstall.xml
The process was tested, and within the first day the totals for Project and Visio went from over 110 uninstallation failures and 0 successes to over 80 successes and 2 failures, with many more still pending removal.
Example uninstallation information from AppClarity log file for a Visio 2010 Uninstall:
Will attempt to uninstall ‘Microsoft Office Visio 2010’.
Attempting uninstall with command line ‘"%UNINSTPATH1%\CEN00001\Setup.exe" /uninstall visio /config Visio.WW\Vis2010_Uninstall.xml’.
Beginning uninstall using executable ‘%UNINSTPATH1%\CEN00001\Setup.exe’ with parameters ‘/uninstall visio /config Visio.WW\Vis2010_Uninstall.xml’. This may take a few minutes.
Process finished with exit code 0.
Application uninstalled successfully.
Sending uninstall state to service.
All in all, AppClarity has saved us tens of hours of work and many headaches by simplifying the uninstall process and providing multiple methods of achieving the desired results.
On two occasions when trying to upgrade or rebuild systems using USMT 4, the task sequence has failed during the user state capture phase with the following error showing up in the advertisement status messages for the OSD advertisement on the site in question:
The task sequence execution engine failed executing the action (Capture User State) in the group (State Capture) with the error code 2147944212
The task sequence execution engine failed execution of a task sequence. The operating system reported error 2147944212: The specified image file did not contain a resource section.
The actual text given above isn’t very helpful and doesn’t relate to the actual problem that we encountered.
The problem was that a couple of our client systems had not been updated to SP2 or later of the ConfigMgr 2007 client. The installed version was 4.0.6221.1000 (SP1), and USMT 4.0 indirectly requires at least SP2 of the ConfigMgr client. After upgrading the client, the capture process completed successfully.
MDT Location-based Deployments for Software Installation based on Gateway address is a fantastic way to perform region-specific installs. However, if you need to specify more criteria (for example; architecture for x86/x64 specific applications or machine type such as applications that should only be installed on laptops or desktops) then this is where MDT falls down and you have to build application installs into the OSD task sequences themselves.
This is fine; it’s very easy to add application installs into the task sequence, and there are very powerful logical conditions that can be built to perform all the necessary filtering. The problem is that all packages referenced anywhere in the task sequence need to be available on all Distribution Points for any location that wishes to use OSD.
Even if you have a WMI filter such as SELECT ClientSiteName FROM Win32_NTDomain WHERE Description = 'DOMAIN' AND ClientSiteName LIKE '%AD_SITE_NAME%', the package is still required on the DP before the TS is even allowed to launch, because the query is not evaluated until runtime on the running system.
It is therefore a necessity to ensure all referenced packages are available on all DPs, and this can be a challenge to keep up to date with, especially if there are frequent updates to packages, or modifications to the task sequence, or new sites being added. The easiest way I’ve found to keep on top of package references on all DPs is to use a query (which could be made into a report) which will show the number of distribution points that contain all referenced packages for a specified task sequence. This is shown below.
SELECT derPackageTbl.ReferencePackageID AS [Package ID], derPackageTbl.Name, COUNT(dbo.PkgStatus.ID) AS [DP Count]
FROM (SELECT DISTINCT TOP (100) PERCENT tsr.ReferencePackageID,
'[' + tsr.[ReferenceName] + '] ' + tsr.[ReferenceProgramName] + ' [' + tsr.[ReferenceVersion] + ']' AS [Name]
FROM dbo.v_TaskSequenceReferencesInfo AS tsr INNER JOIN
dbo.v_TaskSequencePackage AS tsp ON tsr.PackageID = tsp.PackageID
WHERE (tsp.Name = 'Task_Sequence_Name')
ORDER BY tsr.ReferencePackageID) AS derPackageTbl LEFT OUTER JOIN
dbo.PkgStatus ON derPackageTbl.ReferencePackageID = dbo.PkgStatus.ID
WHERE (dbo.PkgStatus.PkgServer NOT LIKE '%display=%')
GROUP BY derPackageTbl.ReferencePackageID, derPackageTbl.Name
This generates output such as the following:
It is thereby evident that two packages are missing from one and two distribution points respectively. We would then go into those packages and add the missing DPs.
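If you export the query output, spotting the under-distributed packages can also be automated: anything with a DP count below the maximum is missing from somewhere. A small post-processing sketch in Python (the rows are hypothetical (PackageID, Name, DP Count) tuples shaped like the query results):

```python
def missing_dps(rows):
    """rows: (package_id, name, dp_count) tuples from the query.
    Returns (package_id, number_of_missing_dps) for any package not
    present on as many DPs as the best-covered package."""
    full = max(count for _, _, count in rows)
    return [(pkg, full - count) for pkg, _, count in rows if count < full]

rows = [
    ("CEN00001", "[App1] Install [1.0]", 80),
    ("CEN00002", "[App2] Install [2.0]", 79),
    ("CEN00003", "[App3] Install [1.1]", 78),
]
print(missing_dps(rows))  # → [('CEN00002', 1), ('CEN00003', 2)]
```

Note this assumes the best-covered package is on every DP; if even the top count is short, the gap will be understated.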
Configuring SQL Reporting Services for ConfigMgr is pretty simple, when it works. We ran into a problem the other week, however, where SRS would not configure or allow the importing of reports on a couple of our reporting servers. There are lots of websites that cover similar issues and the resolutions that I ended up implementing, but I was unable to find them all in one place, and they weren’t specific to ConfigMgr, so I’m posting here with some screenshots to collate all the information that I found.
Using Reporting Services Configuration Manager I was able to configure the Report Server to my desired configuration. I was able to add the ConfigMgr Reporting Services point role to my site system and the wizard completed successfully. However, in the log files I observed the following:
SMSSRSRP Setup Started….
Parameters: D:\MICROS~1\bin\i386\ROLESE~1.EXE /install /siteserver:CENSCCM01 SMSSRSRP
Installing Pre Reqs for SMSSRSRP
======== Installing Pre Reqs for Role SMSSRSRP ========
Found 0 Pre Reqs for Role SMSSRSRP
======== Completed Installion of Pre Reqs for Role SMSSRSRP ========
Installing the SMSSRSRP
Chart control detected. Invoking process “D:\Microsoft Configuration Manager\bin\i386\DUNSETUP.EXE”
Chart control setup D:\Microsoft Configuration Manager\bin\i386\DUNSETUP.EXE failed with return code 0x00000001. Installation cannot continue.
Microsoft.ConfigurationManagement.Srs.SrsServer. error = Unspecified error
STATMSG: ID=7402 SEV=E LEV=M SOURCE=”SMS Server” COMP=”SMS_SRS_REPORTING_POINT” SYS=CENSCCM01
Failures reported during periodic health check by the SRS Server CENSCCM01. Will retry check in 57 minutes…
Waiting for changes for 57 minutes…
Site Status > [Site] > Component Status > SMS_SRS_REPORTING_POINT:
SMS SRS Reporting Point failed to monitor SRS Server on “CENSCCM01”.
When attempting to ‘Copy Reports to Reporting Services’, however, the boxes would be greyed out and it would briefly read “Retrieving report object from the server … ” followed shortly by “Error connecting to report server ‘CENSCCM01’”.
After searching online for reasons for all of these behaviours I read several posts and articles that made reference to the requirement of a default SQL instance. Upon comparing those reporting servers that were working with those that were not working, it became evident that this was the issue.
The reporting servers that were working had a default instance (MSSQLSERVER) installed, whereas those that weren’t, had a named instance (CCM) instead. This was reflected in the instance selection menu in the Reporting Services Configuration Connection box when launching Reporting Services Configuration Manager.
For those servers that just had a named instance installed (CCM) and didn’t have the option of the default instance, I had to install it.
On the servers in question, I went to Programs and Features, selected the SQL Server installation (in my case ‘Microsoft SQL Server 2008 (64-bit)’) and selected Uninstall/Change.
I selected ‘Add’ and then when prompted, selected my install files from the software repository on our network. I then followed the wizard, selecting the below values on each page.
- Setup Install Files > Install
- Setup Support Rules > Next
- Installation Type > Perform a new installation of SQL Server 2008 > Next
- Product Key > Enter the Product Key [Product Key] > Next
- License Terms > I accept the license terms > Next
- Feature Selection > Database Engine Services; Reporting Services > Next
- Instance Configuration > Default Instance > Next
- Disk Space Requirements > Next
- Server Configuration > Use the same account for all SQL Server services, account name: NT AUTHORITY\SYSTEM > OK > Next
- Database Engine Configuration > Windows Authentication Mode > Add… > [User Accounts for Admin] > OK > Next
- Reporting Services Configuration > Install the native mode default configuration > Next
- Error and Usage Reporting > Next
- Installation Rules > Next
- Ready to Install > Install
After installing the default instance and configuring it in Reporting Services Configuration Manager, I was then able to import all reports. For those servers that I’d already attempted to configure on the CCM instance, I had to go in and change the Web Service URL TCP port to something different on the CCM instance in order to set this up on the default instance.
Also, as I was running SCCM 2007 R3, I had to apply hotfix KB2449910 [Link] in order to create new reports using SRS. Before applying the hotfix, the management console would crash with the error: System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
My Reporting Services Configuration is as follows:
- Service Account: Use Built-in account > Local System
- Web Service URL:
- Virtual Directory: ReportServer
- IP Address: All Assigned (Recommended)
- TCP Port: 80
- SSL Certificate: (Not Selected)
- SSL Port: Blank
- SQL Server Name: CENSCCM01
- Database Name: ReportServer
- Report Server Mode: Native
- Credential: ServiceAccount
- Login: LocalSystem
- Password: ***********
- Report Manager URL: Reports
- E-Mail Settings:
- Sender Address: <Administrative E-Mail>
- Current SMTP Delivery Method: Use SMTP Server
- SMTP Server: <SMTP Server>
- Execution Account:
- Specify an Execution account [Ticked]
- Account: <Service Account>
- Password: *******
- Encryption Keys: N/A
- Scale-Out Deployment:
- Server: CENSCCM01
- Instance: MSSQLSERVER
After the most recent (February) security updates released by Microsoft, we noticed an issue with one of the downloaded updates (namely: Cumulative Security Update for Internet Explorer 7 for Windows XP (KB2482017) – although this isn’t really relevant). At the same time, we also noticed the same issue with an update dating back to October 2010. This update was one for .Net Framework 4.0 and I presume we only started noticing this now as .Net 4 is becoming more of a necessity on machines, and therefore the install base is higher than it was back in October.
Certain clients would attempt to download the update but would get stuck installing it. The installer would get to 66% and then just sit there. The timer would increment indefinitely and no further messages would be logged in any of the log files. No error messages seemed to be logged anywhere on the system. The Software Updates window looked like the below.
I checked that the updates were available on all Distribution Points and that the DP in question had all the IIS BITS exceptions set to allow transfer, and everything looked fine. I then checked the built-in compliance report 9 (Report ID 171), ‘Computers in a specific compliance state for an update’, selected the Security Update for Internet Explorer 7 article, and checked for machines in the ‘Update is installed’ state.
This brought back a bunch of machines from Hong Kong and not much else; there were a few machines from other regions/offices, but 95% were from Hong Kong. This confused me slightly, as there was nothing infrastructure-wise that would affect only machines in HK. The problem soon became evident, however, when looking in the ConfigMgr console.
I expanded Update Lists and selected the most recent update list which contained the IE7 update in question. I went into the Properties for the update and went to the Content Information Tab. Scrolling the window showed the below.
The English language version of the update had not been downloaded, whereas the Chinese language version had. Therefore, machines in Hong Kong with the Chinese version installed received and installed the update fine, whereas machines in the rest of the world that use the English version had not. Retrospectively checking the other machines in the report that were not in Hong Kong revealed that the update had been installed on them manually.
I checked the same tab for the .Net Framework 4 update and noticed that although there are not multiple languages, one component of the update had not been downloaded.
I went to the Software Updates node, redownloaded both updates into their original update list, updated the existing update package and then redistributed to the DPs. As soon as the update reached the DPs, those Software Updates that were stuck at 66% completed their download immediately, began installing, then reported back as completed.
Our ConfigMgr infrastructure is quite geographically dispersed, with many site offices and site servers in remote parts of the world, often on slow LAN and WAN links. This often leads to a massive backlog of active package distributions as packages are transferred between sites.
ConfigMgr ships with a report to enumerate active package transfers – All active package distributions (Report ID: 136) but this is basically just a query on the v_DistributionPoint, v_Package and v_PackageStatusDistPointsSumm views in the database returning all package records where the distribution state is not 0.
This is great; it’s very helpful to know which packages have been requested to be transferred to the distribution points, but that’s about it. It doesn’t tell you any useful information such as which ones are actively transferring, how much data has been transferred, how much is remaining etc. If any packages get ‘stuck’ they will remain in the report and there’s no way of knowing whether they are still transferring or whether they have fallen into the transfer abyss.
The only place to locate this kind of information is in the sender.log file on the site server sending the package. Looking at the sender.log for more than about 30 seconds during an active package distribution is enough to give me a headache though, never mind trying to interpret it. I therefore decided it would be a good idea to write a simple application that would read in and interpret the sender.log and display all the details in a nice legible format.
The below application is the result of that line of thought. I have made it available in the Downloads section [Link] in case anyone else suffers the same package distribution frustrations that I do, and I pray that Microsoft develop such a utility, or incorporate it into the interface for ConfigMgr 2012.
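The parsing itself is the straightforward part once you settle on which log lines matter. As a simplified illustration of the approach, here is a Python sketch; note the progress-line format below is a hypothetical stand-in, not the real sender.log syntax, which is far more verbose and varies between versions:

```python
import re

# Hypothetical, simplified progress line - the real sender.log format differs.
PROGRESS = re.compile(r"package (\w+).*?sent (\d+) of (\d+) bytes", re.IGNORECASE)

def summarise(log_lines):
    """Return {package_id: (sent, total, percent)} using the latest
    progress line seen for each package."""
    status = {}
    for line in log_lines:
        m = PROGRESS.search(line)
        if m:
            pkg, sent, total = m.group(1), int(m.group(2)), int(m.group(3))
            status[pkg] = (sent, total, round(100.0 * sent / total, 1))
    return status

log = [
    "Package CEN00010, sent 524288 of 1048576 bytes",
    "Package CEN00010, sent 1048576 of 1048576 bytes",
]
print(summarise(log))
```

Keeping only the latest line per package is what turns the headache-inducing scrolling log into a single current-state view.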
In a New Build scenario, where a machine is PXE booted to initiate an OS deployment, there’s an option in the PXE service point properties on each PXE-enabled site system to specify a password required for computers to boot using PXE. This is very convenient as it enables us to advertise the New Build task sequence to all machines, put in a check to ensure that the TS is running in WinPE using the _SMSTSinWinPE = True task sequence variable and this prevents (a) users being able to kick off a full rebuild from within windows and (b) users being able to PXE boot their own machine and kick it off without the password.
What happens, however, if you want to advertise a Refresh scenario build to all machines, so that a machine can be upgraded or rebuilt using a USMT task sequence without having to add individual machines to a collection on an ad-hoc basis? Luckily, the solution was very simple.
I wanted to replicate the password functionality of PXE boot builds, so I created a very simple six-line program to pop up a password box. If the correct password was entered, it would exit with code 0; if an incorrect password was entered, or the user closed the application using the X, it would exit with code 1. I packaged this up and sent it out to my DPs.
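The original was a tiny compiled executable with a GUI prompt, but the logic is the same in any language. A rough Python equivalent of the idea (the password value is obviously a placeholder, and a console comparison stands in for the dialog box):

```python
EXPECTED = "P@ssw0rd"  # placeholder - not the real password

def check(entered):
    """Exit code for the task sequence chain: 0 lets the parent
    program (the task sequence) run, anything else fails it."""
    return 0 if entered == EXPECTED else 1

# In the real tool a GUI prompt collected the input and the process
# then exited with this result, i.e. sys.exit(check(user_input)).
print(check("P@ssw0rd"), check("wrong"))
```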
I then went to my task sequence, right-clicked and went to Properties, then navigated to the ‘Advanced’ tab. There is then an option to Run another program first, just like in package/program chains. I selected my Password prompt application as below:
I could then advertise my task sequence to any machine I wanted, and when I ran the advert, I would be presented with the following dialogue box:
If I then enter the incorrect password or close using the X, it fails with exit code 1, which in turn fails the parent program (the task sequence). If I enter the correct password, it returns code 0 and continues to execute the rebuild process.
The good thing about doing it this way is that I can also advertise the same task sequence to my New Builds collection. When machines PXE boot, they just receive the task sequence and not the pre-TS password application, so they can boot as normal but still get the password box prompted by the PXE service point. I now have a single task sequence that installs 64-bit Windows 7 on machines with an x64-capable processor and 32-bit Windows 7 on those without, runs a new-build scenario when PXE booted, and runs a refresh with USMT and Package Mapping when run within Windows after the correct password is entered. All thanks to just six lines of code.