Restart notification behaviour for software and patch deployments

Quite some time ago I needed to document the restart notification behaviour of normal software deployments, software deployments within a task sequence, and patch deployments. I wrote it all up in a draft blog post but never actually published it… well, here it is!

I originally researched this because a number of important users were getting different restart notification messages depending on what they were installing, which became quite confusing! If they installed a patch they might get an 8 hour grace period before restarting, but if they installed software (which, unknown to them, was done within a task sequence), they might get as little as 1 minute's warning prior to a reboot.

It became important to document the behaviour, not just for our users, but for our admins as well. Below are the results of the tests.

Normal Software Deployment and Patch Deployment

I’m sure most of you know that this is managed from the Computer Client Agent Properties, under the Restart tab. The first option sets the initial countdown, in minutes, once the patch or software has installed and a reboot is required; the second sets how many minutes before the forced reboot a final notification is shown to the user.

CCAProps

Software Deployment via Task Sequence

Things become a little more complex when you start using task sequences to deploy software and that’s because the above options no longer apply.

There are normally two reasons why a reboot would take place during a running task sequence: either an application being installed as part of an Install Software task sequence step returns a 3010 exit code, or you specifically use a Restart Computer task sequence step.

If the reboot occurs due to a 3010 exit code, there will only be a 1 minute warning before the machine is forcibly rebooted. I haven’t found any way to change this setting, other than suppressing the 3010 code in a script and doing a controlled reboot using the Restart Computer task sequence step, mentioned below.

If the reboot occurs due to the use of the Restart Computer task sequence step, you can specify in the step itself exactly how many seconds the notification message is shown to the user before the machine is rebooted. The default is only 60 seconds and the maximum is 9999 seconds (about 2 hours 45 minutes).

RCTSStep

It is also possible to increase the value above 9999 seconds, if required, by using a task sequence variable called SMSRebootTimeout. Simply add a Set Task Sequence Variable step to the start of your task sequence, use the SMSRebootTimeout variable and set a value in seconds. This overrides whatever number is used in the Restart Computer step.

Below shows me increasing the timeout to 8 hours:

TSVar

TSNotification
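If you would rather set the variable from a script step than from the Set Task Sequence Variable GUI step, the running task sequence exposes its environment through the Microsoft.SMS.TSEnvironment COM object. Below is a minimal PowerShell sketch; it only works inside an active task sequence, and the 8 hour value matches the example above:

```powershell
# Only works when run inside an active task sequence, where the
# Microsoft.SMS.TSEnvironment COM object is registered.
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# 8 hours = 8 * 60 * 60 = 28800 seconds
$tsenv.Value("SMSRebootTimeout") = "28800"

# Any later Restart Computer step will now use this timeout
# instead of the value configured in the step itself.
```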

If you want this restart value to apply to all task sequences that use the Restart Computer step, without editing each task sequence manually, you can add the variable on the Collection Variables tab, under the Modify Collection Settings option of any collection. Any machine in that collection that runs a task sequence will then use the value in the variable. If you need to override the collection variable for a particular task sequence, add the variable directly into that task sequence again and it should take precedence.

CollVar

Hope that helps
Nik

Deploying modern apps to a 64 bit Windows 8 machine using SCCM 2007

Firstly, this is simply a post aimed at helping people deploy modern apps to Windows 8 machines. It doesn’t touch on actually creating the apps and getting them signed – that’s another matter and one which I don’t get involved in!

Secondly, before you do anything in SCCM, you need to ensure that you have turned on sideloading on the Windows 8 machine. Enabling this setting allows the installation of trusted app packages that do not originate from the Windows Store. To do so, follow the instructions below:

SideLoadingPolicy
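If you would rather script this than use the policy editor, the “Allow all trusted apps to install” policy corresponds to a registry value. The sketch below sets it directly (run from an elevated prompt; setting the value outside of Group Policy is my own shortcut, so check it fits how the machine is managed):

```powershell
# Equivalent of enabling the "Allow all trusted apps to install" policy,
# which turns on sideloading of trusted app packages.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Appx" /v AllowAllTrustedApps /t REG_DWORD /d 1 /f
```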

So I was asked to help test deploying a custom modern app (via a PowerShell script) to a 64 bit Windows 8 desktop, using SCCM 2007. All in all it was a fairly simple task, but there were one or two issues we had to overcome, which might be of help to someone else out there.

I started the test by getting the source files (which included both a signed ps1 and an appx file) and creating a package and program in SCCM 2007, before deploying them to a test collection using an optional advert, much like any normal application. I did ensure the program setting “Allow users to interact with this program” was checked, as I wanted to see any possible errors that appeared on screen.
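For reference, the ps1 in a setup like this doesn’t need to do much more than call Add-AppxPackage against the appx sitting next to it. I can’t share the original script, so the sketch below is purely illustrative (only the TestModernApp.appx file name comes from this post; requires PowerShell 3.0, as shipped with Windows 8, for $PSScriptRoot):

```powershell
# Minimal sketch of a modern-app install script.
# Assumes the appx sits in the same folder as the script.
$appx = Join-Path -Path $PSScriptRoot -ChildPath "TestModernApp.appx"

# Installs the package for the current user only.
Add-AppxPackage -Path $appx
```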

When I first manually ran the advert I got the following error:

TestModernApp.ps1 cannot be loaded because running scripts is disabled on this system

This was a strange error, as I was using a signed ps1 file and had already set the PowerShell execution policy on my test Windows 8 machine to AllSigned – http://technet.microsoft.com/en-us/library/ee176961.aspx – to ensure I wouldn’t get any PowerShell errors. After a bit of digging it turns out that on 64 bit machines there are two instances of PowerShell, a 32 bit and a 64 bit version, each with its own separate execution policy. When I searched for PowerShell on the Start screen and opened it as admin, it opened the 64 bit version only, so I had to run the same command on the 32 bit version as well. To do this I went to %windir%\SysWOW64\WindowsPowerShell\v1.0, opened Powershell.exe as admin and ran the same Set-ExecutionPolicy AllSigned command. I then ran the advert manually again and it no longer errored at this point.
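To save you hunting down both consoles, the same fix can be applied from a single elevated 64 bit PowerShell session; a sketch:

```powershell
# Set the policy for the 64-bit instance (the one the Start screen opens).
Set-ExecutionPolicy AllSigned

# The 32-bit instance on a 64-bit OS lives under SysWOW64 and keeps its
# own, separate execution policy, so set that one too.
& "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" `
    -Command "Set-ExecutionPolicy AllSigned"
```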

If you are still getting problems at this point, I suggest you set the execution policy of both instances of PowerShell to Unrestricted, but be aware of the security implications of allowing any ps1 script to run.

Unfortunately though, after fixing this issue another error then occurred:

add-appxpackage : Deployment failed with HRESULT: 0x80073CF9, Install failed.

Deployment Add operation rejected on package TestModernApp.appx install request because the Local System account is not allowed to perform this operation.

It states that, because I had set the SCCM program to “Run with administrative rights”, it was using the SYSTEM account to install the application, which was obviously causing the problem. What needs to be remembered about modern apps, though, is that they install on a per-user basis, unlike normal apps that install to the device. This also means users don’t need to be local administrators to install modern apps. You therefore need to set the program to “Run with user’s rights”:

Capture

After changing the program to the above, the modern app installed with no issues.

appxInstall

Staging Problems: BitLocker and multiple boot images

Over the last few days I’ve been working on setting up staging via PXE booting at a client’s site. Everything seemed to be working fine until I tried to stage a machine with BitLocker installed and the hard drive encrypted.

Just after I selected the task sequence and started the process, while it was downloading the custom boot image, the task sequence failed with error 0x80070070. As it errored, I hit F8 to bring up the command prompt and took a look at SMSTS.log, which showed:

There is not enough disk space left on this machine for staging the content for content PKG000ID

and

Boot Image package not found. There is not enough space on the disk. (Error: 80070070; Source: Windows)

This seemed very strange, as I knew the machine had plenty of space available on the HDD. I ran diskpart from the command prompt and saw that the BitLocker partition, which is 100MB in size, was set as the C:\ drive and the main, large, encrypted partition was set as D:\.

I then noticed it was creating the _SMSTaskSequence folder on the C:\ and it all made sense. Further up in the log, I found the following:

Volume C:\ has 75358208 bytes of free space 
Volume D:\ has unsupported file system 
Volume X:\ is not a fixed hard drive 
TSM root drive = C:\ 

The X:\ was the current boot image, loaded into RAM. As D:\ was the encrypted partition, it was inaccessible, so SCCM was using the 100MB BitLocker partition as the root drive and attempting to copy the boot image to C:\. This partition wasn’t big enough to hold it, so the TS failed. Simple, right?

But hang on, why was it downloading a boot image at all? I was already in a pre-execution environment – why didn’t it just use that boot image? Well, when you first PXE boot a machine, it downloads a boot image from the PXE DP.

BootImage

The boot image it downloads initially depends on which one is associated with the task sequence most recently advertised to the machine being PXE booted. Once the environment is fully loaded, you get the chance to choose one of the advertised task sequences, and if that TS uses a different boot image to the one loaded in RAM, it will need to download the new one to the HDD. If the main partition is encrypted, it is then forced to use the BitLocker partition, and if that is too small, it fails with the above error!

What about fixes or workarounds? Well, let’s just recap the scenario before we look at what can be done.

Scenario:
A BitLocker encrypted machine, with a BitLocker partition smaller than the boot image, downloads one of the available boot images at PXE boot. When the choice of task sequences to run appears, a TS that uses a different boot image to the one already downloaded is selected. As a new boot image is required, it begins to download it, but to the small BitLocker partition (as the main drive is encrypted and not accessible), and fails due to lack of space.

Workaround:

  1. Limit the number of boot images on the PXE DP to one, so the initial boot image downloaded during PXE boot is the same as the one used in the TS about to run. This of course means only task sequences with the same boot image can be advertised to machines.
  2. Suspend the encryption on the machine prior to PXE booting; the number of boot images on the PXE DP then doesn’t matter.
  3. Increase the size of the BitLocker partition so it can hold the new boot image (I haven’t actually tested this workaround but am confident it would work).
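For workaround 2, the encryption can be suspended from an elevated prompt inside Windows before you reboot to PXE. A sketch using manage-bde, assuming the encrypted OS volume is C: (adjust the drive letter to suit):

```powershell
# Temporarily suspends BitLocker protection on C: so the volume is
# readable during staging. Protection is suspended, not removed; the
# data stays encrypted but the clear key is exposed, so re-enable it
# (manage-bde -protectors -enable C:) if you abandon the rebuild.
manage-bde -protectors -disable C:
```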

Other than this, there is no other way I’m aware of to get round the issue, except formatting the HDD prior to PXE booting!

Thanks
Nik

Task Sequence Media Wizard fails with 0x80091007

Today I was asked to look into a problem where the Task Sequence Media Wizard would fail to create stand-alone task sequence media on a local install of the SCCM console. The failure is below:

CTSM1

The CreateTSMedia.log showed:

Staging package XXX00306        
Before executing state – fsVolumeSpaceRemaining= 12527 Mb 
Staging package XXX00306
Hash could not be matched for the downloded content. Original ContentHash = 78DF6C04DD1B6CDBD4F163908301C0CC5C5E3E70, Downloaded ContentHash =                
Failed to stage package XXX00306 (0x80091007)
Failed to create media (0x80091007)
CreateTsMedia failed with error 0x80091007, details=”XXX00306″

Now I’m sure most people reading this blog know that hash errors are generally fixed by refreshing the package on the DP or at the very worst, removing the package from the DP then re-adding it.

In this instance though, that didn’t work and I also noticed that the downloaded content hash didn’t have a value at all – it was blank:

CTSM2

This was very strange: if it were the usual problem, there would at least be a hash value for the downloaded content. It had me thinking that in this case the software couldn’t be included in the media because the package couldn’t be downloaded to the machine at all. When another admin tried and was able to complete the wizard, using the same DP, with no problem at all, this led me straight to the conclusion that it was a permissions issue.

It turns out the problem was actually related to the Access Accounts settings for the package itself. For security reasons the default permissions had been changed to replace the Users account with an AD group (of which the Network Access Account we use is a member):

CTSM3

As the admins who were having the problem weren’t administrators on the DP or members of the group, the package couldn’t download and the hash check failed (as there was nothing to check!). When I added the default Users account back in and refreshed the package on the affected DPs, the wizard completed successfully.

What this shows, I guess, is that the Network Access Account isn’t used when creating task sequence media – something to watch out for.

Thanks
Nik

2012 in review

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

4,329 films were submitted to the 2012 Cannes Film Festival. This blog had 20,000 views in 2012. If each view were a film, this blog would power 5 Film Festivals

Click here to see the complete report.

ConfigMgr client problems downloading packages that include .aspx pages

Recently I looked into an issue where package downloads were freezing half way through and never completing. Fixing this is normally a fairly simple process: check the IIS logs for which file is stopping the download, then edit the applicationhost.config file to add the file extension under the requestFiltering section and set it to allowed (or simply change allowUnlisted to true and remove all the pre-defined extensions completely). Unfortunately, for this particular file, with a .aspx extension, this method didn’t work.
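For reference, the usual requestFiltering change looks something like the fragment below in applicationhost.config (a sketch only; the surrounding sections already exist in the file, and .config is just an example of an extension IIS denies by default):

```xml
<!-- applicationhost.config (sketch): allow an extension that request
     filtering was blocking from being downloaded -->
<system.webServer>
  <security>
    <requestFiltering>
      <fileExtensions>
        <!-- either allow the specific extension... -->
        <add fileExtension=".config" allowed="true" />
      </fileExtensions>
      <!-- ...or set allowUnlisted="true" on <fileExtensions> and
           remove the pre-defined denied extensions entirely -->
    </requestFiltering>
  </security>
</system.webServer>
```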

This became quite frustrating, but after a bit of reading it looks like it is actually a limitation of BITS, by design. The problem is that, by default, IIS passes requests for certain file types to be serviced by ASP.NET. Files with extensions such as .aspx, .asmx and .ashx are mapped to the ASP.NET ISAPI extension (Aspnet_isapi.dll), which means the files are never downloaded by BITS.

Initially I was very annoyed, as it seems pretty ridiculous that this can’t be changed, and it would have meant editing a number of our packages. Thankfully, there is a workaround.

Within IIS, under Application Pools, there should be an SMS Distribution Points Pool with the Managed Pipeline setting set to ‘Integrated’. I changed this setting to ‘Classic’, as the problem affects, among others, managed handlers in IIS 7.0 running in Integrated mode. Once we changed this setting and ran an IISRESET, the files downloaded fine.

Obviously, you don’t want to do this on all your sites manually, so you can also use the below script:

%WINDIR%\system32\inetsrv\appcmd set apppool “SMS Distribution Points Pool” /managedPipelineMode:”Classic”
IISRESET

Finally, this shouldn’t cause any other problems, as ConfigMgr 2007 was designed to work with IIS 6, where only ‘Classic’ mode was available.

Thanks
Nik

Controlling concurrent package distribution in SCCM

I normally only write blog posts about subjects that aren’t covered particularly well on the web, or that seem, on the whole, to be misunderstood, and I think this area is certainly one of them.

Recently we had some issues on one of our primary SCCM site servers, which has around 70 secondary sites, that caused a huge backlog of packages – over 3000 – waiting to install. Among the many packages waiting to be sent, one was over 10GB in size and two were nearly 4GB. As you can imagine, sending this much data to 70 different sites was taking a long time, and as new packages were being distributed all the time, the queue was only getting longer. Had we left the sending capacity at the default it would eventually have cleared the backlog, once the large packages had finally sent, but it was starting to affect operations, as software waiting to go live couldn’t, due to waiting for distribution. So we decided to crank up the sending capacity.

Now I’ve always assumed the way to do this was from Site Settings > Component Configuration > Software Distribution > Distribution Point tab, where the “Concurrent distribution settings” options allow you to change “Maximum number of packages” and “Maximum threads per package”.

What I have done in the past is set “Maximum number of packages” to 7 (the maximum) and “Maximum threads per package” to 10, as there is no real guidance on what this should be (the max is 50). Previously I just assumed the amount sent would increase and thought nothing more of it, but this time I wanted to monitor what was being sent and to which secondary sites. To do this I used a small application a friend of mine developed called Sender Analyser (http://www.danrichings.com/?p=90), which reads the sender.log and interprets it to give you a list of active sending jobs. It’s much easier than trying to read the sender.log, I promise you!

The results of the Sender Analyser showed the following:

and the sender.log showed this:

Both show that the primary server was only sending 5 (out of a maximum of 5) packages to the secondary sites at any one time, even though I had set the two “Concurrent distribution settings” values to 7 and 10. This started me wondering whether these settings had anything to do with site-to-site distribution of packages at all, or whether they were actually designed to control the distribution of packages to the local or remote DPs of its own site only.

To check I decided to have a look at the distmgr.log of my primary site and the first thing I noticed was this:

So distribution manager was allowed to process up to 7 packages at a time, exactly what I had set the “Maximum number of packages” value to. To test my theory I changed the same setting to 6, and the distmgr.log responded by showing “Used 0 out of 6 allowed…”, exactly as I thought it would.

So what does this mean? Well, the SMS_DISTRIBUTION_MANAGER component is mainly used to uncompress the compressed package files sent to it from its parent site into the proper package format that can be downloaded by clients. If the primary or secondary site has multiple servers set up as DPs in its own site, either local or remote, this is the component in charge of copying the package files from the site server to those DPs. The “Concurrent distribution settings” are how you control how many packages and how many threads (I’m not sure whether threads refers to the concurrent number of servers or to processor threads) it can handle at one time. What seems clear is that it doesn’t affect the way packages are sent from a parent site to a child site.

So, you may be asking, how do you increase the number of concurrent packages a parent site can send to its child sites from the apparent default of 5? Simple. All you need to do is change the properties of your sender.

Go to Site Settings > Senders > Standard Sender > Advanced tab. Here are the “Maximum concurrent sendings” options, which allow you to change the maximum number of objects that can be sent to “All sites” or “Per site”:

As you can see, the default for “All sites” is 5, and in our earlier sender.log we saw the max sending capacity was also 5. Coincidence? No, I didn’t think so either, so I started slowly increasing the “All sites” value and checking whether it matched what the sender.log told me, eventually increasing it to 220. The sender.log showed this:

This quite clearly showed that the values in the “Maximum concurrent sendings” setting and the sender.log matched and this was backed up by the Sender Analyser.

The sender was now able to send nearly 50 times as many packages as the default, and it’s fair to say my package backlog didn’t take long to clear! I eventually settled on 200 for “All sites” and 20 for “Per site”, though the all-sites figure rarely got above 100 – I guess something else was causing a bottleneck – still, that was plenty. Please note that I was only able to use these high values due to the spec of my server and my LAN and WAN capacities; DON’T ASSUME YOU CAN USE THE SAME. I strongly recommend speaking to your network team first and raising the values slowly until you are happy.

So there we have it: if you are looking to increase the capacity to send more packages to your child sites, make sure you change the right setting 🙂