Channel: Robin CM's IT Blog

PowerShell: Live migration of all VMs from one host to another


Here’s a simple script that uses live migration to move all VMs from one Hyper-V host to another. I’m not using shared storage, so I’ve added the -IncludeStorage option to the end of the Move-VM command.

You should obviously have previously enabled live migration properly for your hosts before running this script.

$SourceHost = "HyperV01"
$DestinationHost = "HyperV02"
$VMsOnHost = Get-VM -ComputerName $SourceHost

foreach ($VM in $VMsOnHost){
   Write-Host ("Moving "+$VM.Name)
   Move-VM -ComputerName $SourceHost -Name $VM.Name -DestinationHost $DestinationHost -IncludeStorage
}

Note that the above will assume that the path to the VM configuration and disk files is the same on both the source and destination hosts, e.g. D:\VMs
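If the paths do differ between the hosts, Move-VM also has a -DestinationStoragePath parameter you can use instead. As a sketch (the E:\VMs path here is just an example, not from my environment):

```powershell
# Move each VM's storage into a per-VM folder under E:\VMs on the destination host
foreach ($VM in $VMsOnHost){
   Write-Host ("Moving "+$VM.Name)
   Move-VM -ComputerName $SourceHost -Name $VM.Name -DestinationHost $DestinationHost -IncludeStorage -DestinationStoragePath ("E:\VMs\"+$VM.Name)
}
```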



Free Windows Server 2012 Hyper-V networking eBook

PowerShell: Remote Desktop Connections and NLA


There seem to be no available cmdlets to change the settings in the Remote Desktop section of the Remote tab in the System Properties dialogue box:

(Screenshot: the Remote Desktop section of the System Properties dialogue box)

Namely to switch between Don’t allow remote connections to this computer and Allow remote connections to this computer. Then, on selecting the latter, to control Allow connections only from computers running Remote Desktop with Network Level Authentication.

Luckily you can do this with two WMI objects from within the root\CIMV2\TerminalServices namespace:

The script below allows you to set all the options on either a remote or local computer. To change the local computer set the ComputerName parameter to localhost or just a full stop (.).

param([string]$ComputerName = "", [int]$RDPEnable = -1, [int]$RDPFirewallOpen = -1, [int]$NLAEnable = -1)

# $RDPEnable - Set to 1 to enable remote desktop connections, 0 to disable
# $RDPFirewallOpen - Set to 1 to open RDP firewall port(s), 0 to close
# $NLAEnable - Set to 1 to enable, 0 to disable

if (($ComputerName -eq "") -or ($RDPEnable -eq -1) -or ($RDPFirewallOpen -eq -1) -or ($NLAEnable -eq -1)){
   Write-Host "You need to specify all parameters, e.g.:" -ForegroundColor Yellow
   Write-Host " .\RemoteConnections.ps1 localhost 1 1 0" -ForegroundColor Yellow
   exit
}

# Remote Desktop Connections
$RDP = Get-WmiObject -Class Win32_TerminalServiceSetting -ComputerName $ComputerName -Namespace root\CIMV2\TerminalServices -Authentication PacketPrivacy
$Result = $RDP.SetAllowTSConnections($RDPEnable,$RDPFirewallOpen) # First value enables remote connections, second opens firewall port(s)
if ($Result.ReturnValue -eq 0){
   Write-Host "Remote Connection settings changed successfully"
} else {
   Write-Host ("Failed to change Remote Connections setting(s), return code "+$Result.ReturnValue) -ForegroundColor Red
   exit
}

# NLA (Network Level Authentication)
$NLA = Get-WmiObject -Class Win32_TSGeneralSetting -ComputerName $ComputerName -Namespace root\CIMV2\TerminalServices -Authentication PacketPrivacy
$NLA.SetUserAuthenticationRequired($NLAEnable) | Out-Null # Doesn't set ReturnValue on success, and we don't want the screen output, so pipe to Out-Null
# Recreate the WMI object so we can read out the (hopefully changed) setting
$NLA = Get-WmiObject -Class Win32_TSGeneralSetting -ComputerName $ComputerName -Namespace root\CIMV2\TerminalServices -Authentication PacketPrivacy
if ($NLA.UserAuthenticationRequired -eq $NLAEnable){
   Write-Host "NLA setting changed successfully"
} else {
   Write-Host "Failed to change NLA setting" -ForegroundColor Red
   exit
}
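
For example, to enable Remote Desktop, open the firewall port(s) and turn on NLA for a remote server, you’d call the script above like this (RemoteConnections.ps1 being whatever filename you saved it under):

```powershell
# Enable RDP and the firewall rule, require NLA, on a remote computer
.\RemoteConnections.ps1 -ComputerName "Server01" -RDPEnable 1 -RDPFirewallOpen 1 -NLAEnable 1
```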

I should probably make this into a cmdlet (or maybe Microsoft might have cared to do that for us all in the first place…!).


SCVMMService crashes on VM creation


I’m using a PowerShell script to create a new VM, based on the code you can get System Center Virtual Machine Manager 2012 SP1 to spit out. I made a change to the script to try to use some unattended settings to set the Windows Registered Owner and Registered Organisation (which end up in HKLM\Software\Microsoft\Windows NT\CurrentVersion\RegisteredOwner and HKLM\Software\Microsoft\Windows NT\CurrentVersion\RegisteredOrganization respectively).

The code I’m using is the following:

Write-Host "Build new temporary template..."
New-SCVMTemplate -Name $TempTemplateName -Template $Template -HardwareProfile $HardwareProfile -ComputerName $NewVMName -Domain "rcmtech.co.uk" -DomainJoinOrganizationalUnit "ou=Build,ou=Servers,dc=rcmtech,dc=co,dc=uk" -DomainJoinCredential $DomainJoinCredential | Out-Null

Write-Host "Get new temporary template object..."
$TempTemplate = Get-SCVMTemplate -VMMServer $SCVMMServer -Name $TempTemplateName

Write-Host "Add autologon credential..."
Set-SCVMTemplate -VMTemplate $TempTemplate -AutoLogonCredential $DomainJoinCredential -AutoLogonCount 100 | Out-Null

Write-Host "Add Local administrator credential..."
$LocalAdministratorCredential = Get-SCRunAsAccount -Name "General Local Admin"
Set-SCVMTemplate -VMTemplate $TempTemplate -LocalAdministratorCredential $LocalAdministratorCredential | Out-Null

# add extra unattended build settings
$Unattend = $TempTemplate.UnattendSettings
# Unattend sections: 6 = oobeSystem, 3 = specialize
# Required to make autologon credential work
$Unattend.Add("6/Microsoft-Windows-Shell-Setup/Autologon/Domain","rcmtech")
# Don't waste resources opening these at logon - especially true during build
$Unattend.Add("3/Microsoft-Windows-OutOfBoxExperience/DoNotOpenInitialConfigurationTasksAtLogon","true")
$Unattend.Add("3/Microsoft-Windows-ServerManager-SvrMgrNc/DoNotOpenServerManagerAtLogon","true")
# Disable IE ESC or PowerShell build script will be blocked
$Unattend.Add("3/Microsoft-Windows-IE-ESC/IEHardenAdmin","false")
# Set localisation to UK
$Unattend.Add("6/Microsoft-Windows-International-Core/InputLocale","0809:00000809")
$Unattend.Add("6/Microsoft-Windows-International-Core/SystemLocale","en-GB")
$Unattend.Add("6/Microsoft-Windows-International-Core/UILanguage","en-GB")
$Unattend.Add("6/Microsoft-Windows-International-Core/UserLocale","en-GB")
Set-SCVMTemplate -VMTemplate $TempTemplate -UnattendSettings $Unattend | Out-Null

Write-Host "Build new VM configuration..."
$virtualMachineConfiguration = New-SCVMConfiguration -VMTemplate $TempTemplate -Name $NewVMName

The lines I added were the following, just before the Set-SCVMTemplate line:

# Set Owner info
$unattend.Add("7/Microsoft-Windows-Shell-Setup/RegisteredOrganization", "RCMTech")
$unattend.Add("7/Microsoft-Windows-Shell-Setup/RegisteredOwner", "Robin CM")

But this then caused the System Center Virtual Machine Manager service to crash at the “Customizing virtual machine” step.

The following errors were logged in the VM Manager event log on the VMM server:

Log Name:      VM Manager
Source:        Virtual Machine Manager
Date:          07/08/2013 09:08:18
Event ID:      1
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      SCVMM01.rcmtech.co.uk
Description:
System.NullReferenceException: Object reference not set to an instance of an object.
   at Microsoft.VirtualManager.WorkloadCreation.UnattendXMLFile.createSetting(UnattendXMLEntryInfo entryInfo, CreateOptions createOption)
   at Microsoft.VirtualManager.WorkloadCreation.UnattendXMLFile.Set(SysPrepEntry entry)
   at Microsoft.VirtualManager.WorkloadCreation.UnattendAnswerFile.GenerateAnswerFile(List`1 overridedSysPrepEntries)
   at Microsoft.VirtualManager.Engine.VmOperations.CustomizeVMSubtask.GenerateAnswerFile()
   at Microsoft.VirtualManager.Engine.VmOperations.CustomizeVMSubtask.RunSubtask()
   at Microsoft.VirtualManager.Engine.TaskRepository.SubtaskBase.Run()
   at Microsoft.VirtualManager.Engine.VmOperations.NewVmFromTemplateSubtask.PostVmCreationCustomize()
   at Microsoft.VirtualManager.Engine.VmOperations.NewVmSubtaskBase.RunNewVmSubtasks()
   at Microsoft.VirtualManager.Engine.VmOperations.NewVmSubtaskBase.RunSubtask()
   at Microsoft.VirtualManager.Engine.VmOperations.NewVmFromTemplateSubtask.RunSubtask()
   at Microsoft.VirtualManager.Engine.TaskRepository.SubtaskBase.Run()
   at Microsoft.VirtualManager.Engine.TaskRepository.Task`1.SubtaskRun(Object state)-2147467261

and

Log Name:      VM Manager
Source:        Virtual Machine Manager
Date:          07/08/2013 09:08:18
Event ID:      19999
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      SCVMM01.rcmtech.co.uk
Description: Virtual Machine Manager (vmmservice:4760) has encountered an error and needed to exit the process. Windows generated an error report with the following parameters: 
Event:VMM20 
P1(appName):vmmservice 
P2(appVersion):3.1.6027.0 
P3(assemblyName):SysPrepInfUtil 
P4(assemblyVer):3.1.6011.0 
P5(methodName):M.V.W.UnattendXMLFile.createSetting 
P6(exceptionType):System.NullReferenceException 
P7(callstackHash):90c5 
.

Removing the two lines that I added fixed this – but I’m disappointed that the SCVMM Service is so flaky and intolerant of errors.

I did guess at the values, but they seemed correct based on TechNet.

The problem seems to be that I got the numeric section value of 7 from some screenshots I took when documenting settings I used in WSIM from a previous incarnation of my automated build system:
(Screenshot: WSIM component sections)
whereas the actual numbers used in the XML start at zero, not 1. So I changed the two lines to the following:

# Set Owner info
$Unattend.Add("6/Microsoft-Windows-Shell-Setup/RegisteredOrganization","RCMTech")
$Unattend.Add("6/Microsoft-Windows-Shell-Setup/RegisteredOwner","Robin CM")

And now it’s fine.

…Of course, as this is just putting in some static registry values I could also have done this in any number of other ways, but there you go!

Hyper-V 2012 VM RAM and NUMA nodes


I have some Dell R710 servers that have been running the free edition of XenServer 5.5 since 2009. They have 48GB RAM and two Xeon X5550 CPUs, and host five virtual machines, all running XenApp. This was done as part of a physical server consolidation plan, and I sized the VMs such that they used as much RAM as possible whilst allowing the minimum for the XenServer hypervisor. The VMs all had four CPUs and, by experimentation, one had 7000MB RAM and the other four had 9728MB.

This has been working fine ever since but I’ve recently been wanting to automate the (re)creation of these VMs and bring the hosts into line with my forthcoming Windows Server 2012 Remote Desktop Session Host replacement which will be running on Windows Server 2012 Hyper-V.

So, I know how to squeeze those VMs onto a XenServer host, but how to do it on Hyper-V? Things get interesting as I’d also like to take advantage of a small performance boost due to Hyper-V being NUMA aware, whereas XenServer 5.5 is not.

So, I did some research and found some VM RAM sizing information that seemed logical. However, due to NUMA, five VMs onto two NUMA nodes doesn’t go. Consider:

Host has 48GB RAM, 24GB per NUMA node (i.e. CPU socket). You can’t fit VMs using 7000MB, 9728MB, 9728MB, 9728MB, 9728MB onto the server, you always end up with one VM that won’t fit onto one or other of the 24GB NUMA nodes.

This wasn’t a problem with XenServer 5.5 as, not being NUMA aware, it just saw the 48GB RAM in the host as one big lump and distributed the VM CPUs wherever it liked (and I lived with the small performance hit and wasn’t too fussed). But now I can do something about it, because I have a NUMA aware hypervisor, and it’d be wrong to ignore NUMA (even though I could just turn on NUMA spanning for the host and VMs, that feels a little defeatist!).

So, I decided to chop the small 7000MB VM into two smaller VMs, as then I could run my NUMA nodes with a 3500MB VM plus two 9728MB VMs. But based on the sizing info, and to allow for OS overhead whilst maintaining similar RAM for user applications, I decided to give the “smaller” VMs 4000MB RAM and to tweak down the RAM on the “larger” VMs. I also gave the “smaller” VMs two rather than four CPUs – the load will be spread across them by XenApp so there’ll still be four CPUs in total servicing their applications/users. So now each of my two NUMA nodes/CPU sockets should have the following sized VMs on it: 4000MB + 9500MB + 9500MB.

The RAM overhead should be 32MB for the first 1GB RAM, plus 8MB per additional 1GB, so about 55MB for a 4000MB VM, and about 98MB for a 9500MB VM.

With 48 * 1024 = 49152MB in the host that leaves 49152-((4000+55)+((9500+98)*2))*2 = 2650MB for the host OS. Which should be fine.

Except it’s not, for two reasons.

One: NUMA. I powered on five of the VMs, and the sixth refused to power on due to not enough RAM being available. I had to make it really quite small to get it to power on. Why was this? Hyper-V had placed the two smaller VMs on the same NUMA node, plus one of the larger ones, and there’s not room for three larger VMs on one NUMA node. How can you tell which NUMA node your VMs are running on? On the host use the Hyper-V VM Vid Partition\Preferred NUMA Node Index performance monitor counter, or run the following PowerShell command line:

(Get-Counter -ComputerName "HVHost01" -Counter "Hyper-V VM Vid Partition(*)\Preferred NUMA Node Index").CounterSamples | Select-Object -Property InstanceName,CookedValue | Where-Object -Property InstanceName -NE "_total"

Hyper-V does allow you to specify which NUMA node a VM will run in, but it’s a little fiddly.

Two: The RAM sizing information doesn’t seem to work. Thus I’ve ended up with my VMs sized as follows: 2 x 3850MB and 4 x 9100MB, and configured to run on specific NUMA nodes as above so that they are guaranteed to fit.


Set Hyper-V 2012 VM NUMA node


Run the following PowerShell command line and you can see what NUMA node your VMs are running on, assuming you a) have NUMA aware host hardware and b) haven’t enabled NUMA spanning:

(Get-Counter -ComputerName "HFR3-UWE12" -Counter "Hyper-V VM Vid Partition(*)\Preferred NUMA Node Index").CounterSamples | Select-Object -Property InstanceName,CookedValue | Where-Object -Property InstanceName -NE "_total"

This will give you output similar to the following:

InstanceName           CookedValue
------------           -----------
VM01                             0
VM02                             1
VM03                             1
VM04                             0

Where the number in the CookedValue column is the NUMA node.

This will be set when the VM is powered on, and will remain the same until the VM is powered off or Live-migrated off the host.

If you need to fit some VMs onto a host and are tight on RAM you might want to override this and specify which NUMA node a VM will be placed onto when you power it on. There are two ways to accomplish this: use WMI or edit the VM configuration XML file.

The following PowerShell will let you modify the setting for a VM. Note that the VM must be powered off for this to work. It was initially based on this, which didn’t work, so I then re-wrote it with help from here.

$VMName = "VM01"
$HostName = "HVHost01"

# Set to 1 to enable NUMA Node preference
$NumaNodesAreRequired = 1
# Set to the preferred NUMA node number (starting at 0)
$NumaNodeList = 1

$VM = Get-WmiObject MSVM_ComputerSystem -Namespace root\virtualization -ComputerName $HostName | where-object -property ElementName -eq $VMName
$VMManagementService = Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSystemManagementService -ComputerName $HostName
Write-Host "Before:"
foreach($VSSettingData in $VM.GetRelated("MSVM_VirtualSystemSettingData")){
    Write-Host ("VM Name  : "+$VSSettingData.ElementName)
    Write-Host ("NUMA Rqd.: "+$VSSettingData.NumaNodesAreRequired)
    Write-Host ("NUMA Node: "+$VSSettingData.NumaNodeList)
}
foreach($VSSettingData in $VM.GetRelated("MSVM_VirtualSystemSettingData")){
    $VSSettingData.NumaNodeList = @($NumaNodeList)
    $VSSettingData.NumaNodesAreRequired = $NumaNodesAreRequired
    $VMManagementService.ModifyVirtualSystem($VM, $VSSettingData.PSBase.GetText(1)) | Out-Null
}
Write-Host "After:"
$VM = Get-WmiObject MSVM_ComputerSystem -Namespace root\virtualization -ComputerName $HostName | where-object -property ElementName -eq $VMName
foreach($VSSettingData in $VM.GetRelated("MSVM_VirtualSystemSettingData")){
    Write-Host ("VM Name  : "+$VSSettingData.ElementName)
    Write-Host ("NUMA Rqd.: "+$VSSettingData.NumaNodesAreRequired)
    Write-Host ("NUMA Node: "+$VSSettingData.NumaNodeList)
}

So we’re getting a WMI object for the specified VM ($VM), pulling out the VirtualSystemSettingData into $VSSettingData (note that this is actually XML, even though PowerShell allows us to treat it like an object) and displaying the current NUMA node data. Then we change the NUMA node data and write the XML back via the WMI method ModifyVirtualSystem (hence the need for the .PSBase.GetText(1) method). You have to set the NumaNodesAreRequired value to 1 (True) if you want to force the NUMA node for the VM, otherwise the value you specify in NumaNodeList is just a preference and the VM might actually end up on a different node. We then get the VM WMI object again so that you can check that the values have changed (they won’t if the VM is running).

I’d probably go with the WMI method, but it’s interesting to see the values in the XML so here’s where to look:

<configuration>
  <settings>
    <numa_node_mask type="integer">1</numa_node_mask>
    <numa_nodes_required type="bool">True</numa_nodes_required>
  </settings>
</configuration>

Note that in the PerfMon counters and the PowerShell script above the NUMA nodes are referenced starting at 0, but in the XML they start at 1. The two settings seem to be inserted between the memory and processors sections when using the PowerShell code above.


AppLocker blocking getpaths.cmd


When you configure the default AppLocker Script rules in a Group Policy Object (GPO) one of the ones it adds is for:

%OSDRIVE%\*\temp\*\getpaths.cmd

Except when a user logs on, if you’ve enabled the AppLocker MSI and Script event log, you still get the following event logged:

Log Name: Microsoft-Windows-AppLocker/MSI and Script
 Source: Microsoft-Windows-AppLocker
 Date: 20/08/2013 11:37:21
 Event ID: 8007
 Task Category: None
 Level: Error
 Keywords:
 User: RCMTech\JohanSmythe
 Computer: RDS2012-01.rcmtech.co.uk
 Description:
 C:\USERS\JOHA~1\APPDATA\LOCAL\TEMP\2\GETPATHS.CMD was prevented from running.

Which is a nuisance. I’ve not noticed anything bad as a result of this being blocked, but I’m going to assume that it shouldn’t be blocked, because there’s a default rule that looks like it should allow it to run. Plus it’s messy to have errors being logged unnecessarily.

So after a bit of experimentation, it seems as though at least part of the problem is the %OSDRIVE% “special” AppLocker variable. I’ve now added a rule for the following path, and getpaths.cmd is running fine:

C:\*\AppData\Local\Temp\*\getpaths.cmd

Add-RDSessionHost printer group policy error


I was trying to add a Windows Server 2012 Remote Desktop Session Host to a collection using Add-RDSessionHost. The command was working in so far as it was adding the server to the session collection, but it was then throwing an exception:

The property RedirectClientPrinter is configured by using Group Policy 
settings. Use the Group Policy Management Console to configure this property.
At C:\windows\system32\windowspowershell\v1.0\Modules\RemoteDesktop\SessionDesk
topCollection.psm1:389 char:70
+             Invoke-Command -Session $workflowSession -ArgumentList 
@($Connection ...
+                                                                      
~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (:) [Add-RDSHServer], RDManageme 
   ntException
    + FullyQualifiedErrorId : GPSettingfailed,Microsoft.RemoteDesktopServices. 
   Management.Cmdlets.AddRDSessionHostServerCommand

which was causing my script to stop.

After some investigation, it turns out that this is because the following group policy item was set in a GPO linked to an OU above where the server object lived in Active Directory:

  • Computer Configuration
    • Policies
      • Administrative Templates
        • Windows Components
          • Remote Desktop Services
            • Remote Desktop Session Host
              • Printer Redirection
                • Do not allow client printer redirection = Enabled

I tried setting this to Disabled, but got the same error. I temporarily blocked group policy inheritance to the OU that the RDSH server lived in and that fixed it, so it seems that the only setting that’s allowed for this policy (at least in order for the Add-RDSessionHost cmdlet to work correctly) is Not configured.

Whilst diagnosing the above I also inadvertently ended up changing the error slightly to:

The property UseRDEasyPrintDriver is configured by using Group Policy 
settings. Use the Group Policy Management Console to configure this property.
At C:\windows\system32\windowspowershell\v1.0\Modules\RemoteDesktop\SessionDesk
topCollection.psm1:389 char:70
+             Invoke-Command -Session $workflowSession -ArgumentList 
@($Connection ...
+                                                                      
~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (:) [Add-RDSHServer], RDManageme 
   ntException
    + FullyQualifiedErrorId : GPSettingfailed,Microsoft.RemoteDesktopServices. 
   Management.Cmdlets.AddRDSessionHostServerCommand

which was caused by me setting the Use Remote Desktop Easy Print printer driver policy setting to something other than Not configured (i.e. Enabled or Disabled).

This error seems, as it states, to only care if these settings are set via Group Policy. The state of the tick box in the Session Collection configuration Client Settings, Printers section, namely Allow client printer redirection, seems irrelevant.

As to why the Add-RDSessionHost PowerShell cmdlet cares about these (on the face of it perfectly legitimate) group policy settings, I have no idea.



Remote Desktop Connection: Can’t connect


On trying to connect to a Windows Server 2012 Remote Desktop Session Host published desktop I got the following error from my Remote Desktop Client:

Remote Desktop Connection
This computer can't connect to the remote computer.
Try connecting again. If the problem continues, contact the owner of the remote computer or
your network administrator.
[OK] [Help]

Which was annoying: I’d just built the RDSH server, and in Server Manager – Remote Desktop Services everything looked OK for it: the TermService service was running, and the Remote Desktop Services and Remote Desktop Session Host roles were present. Drilling down into Collections, in the collection containing the published desktop and this particular RDSH server, Allow New Connections was set to True.

Going back to Server Manager, Remote Desktop Services, Servers, I selected the connection broker server. In the Events section there were three events listed at the time when my client connection failed:

Log Name:      Microsoft-Windows-TerminalServices-SessionBroker/Admin
Source:        Microsoft-Windows-TerminalServices-SessionBroker
Date:          22/08/2013 11:26:16
Event ID:      802
Task Category: RD Connection Broker processes connection request
Level:         Error
Keywords:      
User:          NETWORK SERVICE
Computer:      RDCB2012-01.rcmtech.co.uk
Description:
RD Connection Broker failed to process the connection request for user RCMTECH\user. 
Error: Element not found.

Log Name:      Microsoft-Windows-TerminalServices-SessionBroker-Client/Operational
Source:        Microsoft-Windows-TerminalServices-SessionBroker-Client
Date:          22/08/2013 11:26:16
Event ID:      1296
Task Category: RD Connection Broker Client processes request from a user
Level:         Error
Keywords:      
User:          NETWORK SERVICE
Computer:      RDCB2012-01.rcmtech.co.uk
Description:
Remote Desktop Connection Broker Client failed while getting redirection packet from Connection Broker.
User : RCMTECH\user
Error: Element not found. 

Log Name:      Microsoft-Windows-TerminalServices-SessionBroker-Client/Operational
Source:        Microsoft-Windows-TerminalServices-SessionBroker-Client
Date:          22/08/2013 11:26:16
Event ID:      1306
Task Category: RD Connection Broker Client processes request from a user
Level:         Error
Keywords:      
User:          NETWORK SERVICE
Computer:      RDCB2012-01.rcmtech.co.uk
Description:
Remote Desktop Connection Broker Client failed to redirect the user RCMTECH\user. 
Error: NULL

Which turned out to be caused by Remote Connections not being enabled for the server. This is the setting that you control via a right-click on the pop-up Start button – System – Remote settings. You can also change it via a script; I have one here. I clearly still need to add that setting into my automated server build process!


PowerShell and double quotes on the command line


I’ve had issues passing in parameters to a PowerShell script that need double quotes around them, due to containing a space. Consider the following very basic PowerShell script:

param([string]$Something = "")
Write-Host "I received $Something"

This just writes out what the script received as the parameter. I’ve saved the script with a filename of C:\Scripts\ParamCheck.ps1.

From a cmd prompt, execute the script as follows:

powershell.exe C:\Scripts\ParamCheck.ps1 -Something "one two"

gives:

I received one

Here’s some ways to make this work properly:

Use single quotes:

powershell.exe C:\Scripts\ParamCheck.ps1 -Something 'one two'

Use a backslash to escape each double quote (even though PowerShell’s own escape character is the backtick `):

powershell.exe C:\Scripts\ParamCheck.ps1 -Something \"one two\"

Use three double quotes:

powershell.exe C:\Scripts\ParamCheck.ps1 -Something """one two"""

All three of the above give the expected output of:

I received one two

Done.


Server 2012 Remote Desktop Session Host: Installation hangs at Windows Installer Coordinator


You install a product from a .msi, and whilst Windows Installer is running you get an additional progress dialogue:

Windows Installer Coordinator
Please wait while the application is preparing for the first use.
[Cancel]

Had this on a 2008 R2 RDSH server, and found KB2655192 which says that the affected products are all editions of Windows Server 2008 R2. But I tested and had the same problem on Windows Server 2012 when configured as a remote desktop session host.

The same workaround sorts it. I used the registry method: create a DWORD called Enable with a value of 0 (zero) at

HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows NT\Terminal Services\TSAppSrv\TSMSI

There’s no need to reboot after setting this value, just set it, run your problematic installer, then delete the value again afterwards.
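
A sketch of doing this with PowerShell rather than regedit (run elevated, and remember to remove the value again once the install has finished):

```powershell
$Key = "HKLM:\Software\Policies\Microsoft\Windows NT\Terminal Services\TSAppSrv\TSMSI"
# Create the key if it doesn't already exist, then add Enable = 0
New-Item -Path $Key -Force | Out-Null
New-ItemProperty -Path $Key -Name "Enable" -PropertyType DWord -Value 0 -Force | Out-Null
# ...run the problematic installer, then tidy up:
Remove-ItemProperty -Path $Key -Name "Enable"
```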

For info, I had this when installing IBM SPSS Statistics 20.


Set Internet Explorer Address Bar Search via Group Policy


OK, this is kind of a cheat: it’s using Group Policy Preferences (GPPrefs) to add a set of registry entries, not a custom .admx or something more fancy.

But it does work.

Firstly, I suggest you create a new registry collection. This helps group the settings together, and allows you to use targeting on the collection, rather than having to set it in each individual GPPref registry item.

  1. Open Group Policy Management and create a new GPO or edit an existing one.
  2. Go to User Configuration, Preferences, Windows Settings, Registry
  3. Right-click and choose New – Registry Collection, call it something like IE Search Provider

There are seven registry items to add, create them all within your new registry collection. As per KB918238 we should generate a GUID (Globally Unique IDentifier) to use to refer to each search provider. This is pretty easy, from a PowerShell prompt use:

[guid]::NewGuid()

and you’ll be returned a new GUID, e.g. I just got 8a7d6e1b-d2ea-41e5-aede-1ed0c21913c9. I’m using this GUID below, you can either use this one or generate your own. I’m setting the address bar search provider and suggestions to Google UK.

The registry items to add are as follows:

Key: HKEY_CURRENT_USER\Software\Policies\Microsoft\Internet Explorer\SearchScopes

Value Name                            Type       Value Data
DefaultScope                          REG_SZ     {8a7d6e1b-d2ea-41e5-aede-1ed0c21913c9}
ShowSearchSuggestionsInAddressGlobal  REG_DWORD  1

Key: HKEY_CURRENT_USER\Software\Policies\Microsoft\Internet Explorer\SearchScopes\{8a7d6e1b-d2ea-41e5-aede-1ed0c21913c9}

Value Name                            Type       Value Data
DisplayName                           REG_SZ     Google UK
FaviconURL                            REG_SZ     http://www.google.com/favicon.ico
SuggestionsURL                        REG_SZ     http://clients5.google.com/complete/search?q={searchTerms}&client=ie8&mw={ie:maxWidth}&sh={ie:sectionHeight}&rh={ie:rowHeight}&inputencoding={inputEncoding}&outputencoding={outputEncoding}
ShowSearchSuggestions                 REG_DWORD  1
URL                                   REG_SZ     http://www.google.co.uk/search?q={searchTerms}&sourceid=ie7&rls=com.microsoft:{language}:{referrer:source}&ie={inputEncoding?}&oe={outputEncoding?}
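
If you want to test the effect before building the GPPref items, the same values can be set directly from PowerShell. This is just a sketch using the example GUID from above (substitute your own, and note that SuggestionsURL and URL are set the same way with the long values from the table):

```powershell
$Guid = "{8a7d6e1b-d2ea-41e5-aede-1ed0c21913c9}"
$Base = "HKCU:\Software\Policies\Microsoft\Internet Explorer\SearchScopes"
# Create the per-provider key, then the SearchScopes values
New-Item -Path ($Base+"\"+$Guid) -Force | Out-Null
Set-ItemProperty -Path $Base -Name "DefaultScope" -Value $Guid
New-ItemProperty -Path $Base -Name "ShowSearchSuggestionsInAddressGlobal" -PropertyType DWord -Value 1 -Force | Out-Null
# Per-provider values
Set-ItemProperty -Path ($Base+"\"+$Guid) -Name "DisplayName" -Value "Google UK"
Set-ItemProperty -Path ($Base+"\"+$Guid) -Name "FaviconURL" -Value "http://www.google.com/favicon.ico"
New-ItemProperty -Path ($Base+"\"+$Guid) -Name "ShowSearchSuggestions" -PropertyType DWord -Value 1 -Force | Out-Null
# SuggestionsURL and URL: set with Set-ItemProperty exactly as per the table above
```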

And that’s it. This definitely works with IE 10, but should work with versions 7, 8 and 9 too.

Note that due to the search provider being set in the Software\Policies section of the registry, users will not be prompted as to whether or not they want their search provider changed to whatever you specify above.

Thanks to this and this for pointing me in the right direction.


VMM 2012 SP1: VMM cannot find HardwareProfile object


Using System Center Virtual Machine Manager 2012 SP1 to create a VM from a template. Template is called Server2008R2, trying to create a VM called Server2008R2IE10.

On the Create Virtual Machine Wizard I had to change the “Computer name” in the “Configure Operating System” section as the wizard told me that it had to be 15 characters or less. So I changed the Computer name to Server2008R2. VM name is still Server2008R2IE10.

The wizard hangs for a bit at the Select Host stage, then throws an error:

VMM cannot find HardwareProfile object ee891155-3afd-40b1-adc8-44dc2ef7bd42
Ensure the library object is valid, and then try the operation again.
ID: 801
[Close] [Copy Text]

I went back and changed the OS name as I suspected it might be because there’s a template called the same thing as I’m trying to set the OS (computer) name to, in this case Server2008R2. Having changed the name (I added -2 to the end) the wizard completes normally.


Internet Explorer 10 not saving cookies when Roaming Profile used


Have just upgraded my Windows Server 2008 R2 XenApp 5.0 servers from IE 9 to IE 10 and noticed that cookies were not working after a logoff. Everything was fine whilst the user was logged on, so cookies were working within that Windows session. But log off Windows and log back on again and any website settings stored in cookies were lost.

The cookies themselves default to the following folder:

%USERPROFILE%\AppData\Roaming\Microsoft\Windows\Cookies\Low

(the Low folder is assuming you run IE in Protected mode, which you should)

If you use the AppData(Roaming) Folder Redirection Group Policy setting to redirect the Roaming folder (which you should) then the folder will obviously end up in a folder down the path you specified instead.

The cookies themselves are just .txt files with eight character hexadecimal names, and do get stored in the Cookies folder, but you’ll notice that they’re not read when you log back on to Windows, instead new cookies are created.

This is because as of IE 10 there’s a different format and location for the database that keeps track of which cookie file to use with each website you visit. This file lives here:

%userprofile%\AppData\Local\Microsoft\Windows\WebCache\WebCacheV01.dat

As you may know, the whole of the AppData\Local folder gets deleted when you log off, if you have a roaming profile.

The workaround to make cookies work is to set a Group Policy Preference to set the following registry value:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders\Local AppData

to point to a user folder on a file server, possibly a subfolder of the one where you already redirect the AppData(Roaming) to. The folder you specify will be created when the user logs on.
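If you prefer to script the change rather than use a Group Policy Preference, the same registry value can be set per user. A minimal PowerShell sketch, assuming a hypothetical file server path (adjust it to your environment):

```powershell
# Sketch only: redirect the Local AppData shell folder for the current user.
# The UNC path below is a made-up example - point it at your real per-user folder.
$Path = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
New-ItemProperty -Path $Path -Name "Local AppData" `
    -Value "\\fileserver\profiles$\%USERNAME%\LocalAppData" `
    -PropertyType ExpandString -Force
```

Using ExpandString (REG_EXPAND_SZ) keeps %USERNAME% unexpanded in the registry, matching how the other User Shell Folders values are stored.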

Note that the initial size for the WebCacheV01.dat file is about 33MB, so if you have a lot of users you might want to monitor the space usage on your file server more closely for a while after making the above change.

Also, you shouldn’t end up with all your Temporary Internet Files being redirected as these will stay in the Local part of the profile thanks to the registry value:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders\Cache

which points specifically to a folder within the %USERPROFILE%\AppData\Local folder structure.

Also, I’ve not had this running live in my environment for long, so there may be undesired side effects that I’ve yet to notice. If you find any please let me know!


Disable Windows Error Recovery


Sometimes I find that one of my Windows VMs isn’t pinging the morning after I have released security updates. I have a look at the console of the VM and it looks like this:
System Recovery Options

What causes Windows to boot into the Recovery Environment? It happens if the previous boot failed. You actually get a boot menu before this, which looks like the following:
Windows Error Recovery countdown

As you can see, the Launch Startup Repair option is the default, and thus you end up with the server (or PC) not booting properly. If you follow through the repair process, Windows can do a rollback to a system restore point prior to when the machine was joined to your domain, which is fun. That’s less likely on servers than PCs, but it’s annoying that the server sits at the recovery screen. Invariably, when I click Cancel and reboot the server it boots up fine, so I am going to disable this functionality by issuing the following command on all my servers (and possibly go on to do all the PCs too):

reagentc /disable

Now they’ll just complain that they failed to boot but retry a normal boot:
Windows Error Recovery Start Windows Normally

Note that the reagentc.exe utility is only available on Windows Server 2008 R2 and higher.
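To roll this out without visiting each server, you could wrap the command in a remote loop. A sketch, assuming PowerShell remoting (WinRM) is enabled and that servers.txt (a hypothetical file) contains one hostname per line:

```powershell
# Disable the Windows Recovery Environment auto-repair on a list of servers.
foreach($Server in Get-Content .\servers.txt){
    Invoke-Command -ComputerName $Server -ScriptBlock {
        reagentc.exe /disable
    }
}
```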



Automatically Accept Windows Remote Assistance Requests


There doesn’t seem to be a built-in way to do this on Windows Server 2008 R2. What we’re trying to achieve is a way to use Windows Remote Assistance to remotely control the session of a user, without that user needing to be present. This is a fairly rare scenario, but I’ve come across it for certain nasty bits of so-called “server” software that actually run as desktop applications (rather than services) and thus require a permanently logged on desktop that remote users can “hop on to” without the local user needing to press any buttons. Another use could be for a multimedia playback or wall-mounted display PC.

Obviously you could use a different piece of remote control software such as VNC but personally I prefer to stick to something that is built-in to Windows. OK, a confession, you do need to use 3rd party software, but not to provide the remote control or authentication. What I’m using for this solution is a very handy utility called AutoHotkey. It doesn’t need to be installed, you just copy the exe onto the system.

This solution allows specified users/groups to remote control an active desktop session of individually specified users on a Windows Server 2008 R2 machine. To clarify: the users that are logged on to the server desktop have to be specified individually; you can’t make this work for any user who logs on (this is due to UAC). The remote users, however, can be specified by group membership. I’ve yet to test this method on 2008 or 2012, or on any Windows client OS, e.g. Vista, 7, 8. That said, it should work, even if it needs a slight modification.

The steps are as follows. These should be completed on the machine that you wish to remotely control.

Step 1 is to download AutoHotkey, create an AutoHotkey folder in Program Files (x86) and copy AutoHotkey.exe into it.

Step 2 is to put the following AutoHotkey script into a file called AcceptRemoteAssistance.ahk and put it into the same folder as AutoHotkey.exe

; Auto-Accept Remote Assistance requests for Windows Server 2008 R2
; rcmtech.co.uk, Sept. 2013
;
; keep this script running permanently
#Persistent
; set a timer to fire off the subroutine every 500msecs
SetTimer, CheckActiveWindow, 500

CheckActiveWindow:
; check to see if the active window is one of the Windows Remote Assistance ones that needs attention
if WinActive("Windows Remote Assistance","What are the privacy and security concerns","Being helped by"){
	; could be one of several different dialogue boxes that match the above criteria
	; only way to differentiate is by checking the size of the dialogue box
	WinGetActiveStats, Title, xDim, yDim, xPos, yPos
	if (yDim = 200){
		; or use xDim = 366
		; this is the initial "Would you like  to connect to your computer?"
		ControlGetText, ControlText, Button1, Windows Remote Assistance
		; verify that the expected button is the "Yes" button
		if (ControlText = "Yes"){
			; click the "Yes" button
			ControlClick, Yes, Windows Remote Assistance
		}
	}
	if (yDim = 181){
		; this is the subsequent "Would you like to allow  to share control of your desktop?"
		ControlGetText, ControlText, Button1, Windows Remote Assistance
		; verify that the expected button is the "yes" button
		if (ControlText = "Yes"){
			; tick the "Allow  to respond to User Account Control prompts" tickbox
			Click, 22, 117
			sleep 500
			; click the "Yes" button
			ControlClick, Yes, Windows Remote Assistance
		}
	}
}
return

Step 3 is to create a scheduled task that runs the AutoHotkey script at logon for each local session user on whose behalf Remote Assistance requests need to be accepted automatically. Note that the user(s) you are specifying here are NOT the people who will be initiating Remote Assistance requests to the computer (although they may be the same).

  1. Click Start, type Task Scheduler and open the Task Scheduler GUI
  2. Click on Task Scheduler Library
  3. Click Create Task…
  4. General tab:
    1. Name the task, something like Auto-Accept Remote Assistance for <user>
    2. Click the Change User or Group… button and set to the username for the desired user
    3. Tick the Run with highest privileges box
    4. Choose Configure for: whatever OS you’re currently running
  5. Triggers tab:
    1. Click New…
    2. Change Begin the task: to At log on
    3. Click Specific user: then the Change User… button and pick the same username as in step 4.2
    4. Click OK
  6. Actions tab:
    1. Click New…
    2. Action: should be Start a program
    3. Click Browse… and pick the AutoHotkey.exe executable file
    4. In the Add arguments (optional): box enter the full path to the AcceptRemoteAssistance.ahk file, including the usual double quotes if the path has spaces in it.
    5. Click OK
  7. Settings tab:
    1. Untick Stop the task if it runs longer than:
  8. Click OK
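The GUI steps above can also be approximated from the command line via schtasks.exe (there are no scheduled task cmdlets on Server 2008 R2). The user name and paths below are examples, and you may still need to open the task afterwards to restrict the logon trigger to the specific user:

```powershell
# Sketch: create the logon task that starts the AutoHotkey script.
$Exe = '"C:\Program Files (x86)\AutoHotkey\AutoHotkey.exe"'
$Ahk = '"C:\Program Files (x86)\AutoHotkey\AcceptRemoteAssistance.ahk"'
schtasks.exe /Create /TN "Auto-Accept Remote Assistance for kioskuser" `
    /TR "$Exe $Ahk" /SC ONLOGON /RU "RCMTECH\kioskuser" /RL HIGHEST /F
```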

Step 4. Add the users who need to be able to remotely control the computer into the local group Offer Remote Assistance Helpers. This has to be done via Group Policy, even if only locally (you can run the local group policy editor via gpedit.msc). The policy to set is found in Computer Configuration, Policies, Administrative Templates, System, Remote Assistance and is called Configure Offer Remote Assistance. You need to enable the policy, choose the option to allow helpers to remotely control the computer, and then add the users/groups via the Show… button. If the policy isn’t set, the Offer Remote Assistance Helpers group isn’t present, and if you add users/groups directly to that group they’ll be removed again at every group policy refresh interval (how annoying is that?).

Step 5. Disable the screensaver, and ensure that the On resume, display logon screen box is NOT ticked in the Screen Saver Settings dialogue.

Step 6. (optional) Use AutoLogon to get the computer to log on automatically if it reboots.

Step 7. Log off the computer and then back on again. You should find that the scheduled task has caused AutoHotkey to load and it should be running in the notification area. To verify that the script is running you can right-click the AutoHotkey icon and choose Edit This Script – the script should open in Notepad. If this is the first time the user has logged on to the computer the Scheduled Task seems not to work, possibly because the profile creation process takes longer than subsequent profile loads. In this case just log off and back on again a second time.

Now, from a computer logged on as one of the users you added to the remote computer’s Offer Remote Assistance Helpers group, you can start a Remote Assistance session to the computer. Do this either via the GUI:

  1. Click Start, type Assistance, click Windows Remote Assistance
  2. Click Help someone who has invited you
  3. Click the link Advanced connection option for help desk
  4. Fill out the computer name and click Next

Or via the command line:

msra /offerra <computername>

In both cases you may need to pick the user/session to connect to. Then the AutoHotkey script should kick in on the remote computer and click the Yes button. You’ll then need to click the Request control button in the Windows Remote Assistance GUI, at which point the script will kick in again and tick the Allow <name> to respond to User Account Control prompts box and then click the Yes button.

In case you’re worried, the AutoHotkey CPU usage seems to be so low as to be practically undetectable on my test VM (though admittedly it’s running on a host with E5-2690 CPUs).

To end the session the best way I’ve found is to just close the remote Windows Remote Assistance – Being helped by <name> utility. If you close your local Windows Remote Assistance – Helping <name> then you end up with an “orphaned” (albeit inactive) WRA process left running on the remote computer.

If the AutoHotkey script doesn’t work it may be that the dialogue boxes are not opening with the same dimensions as on my systems. AutoHotkey comes with a utility called Window Spy (AU3_Spy.exe) which will tell you the active window width and/or height, compare these values to the ones in my script and modify as necessary. These are the only things I’ve found that differentiate the various different dialogue boxes titled Windows Remote Assistance. I know it’s a bit nasty, leave me a comment if you find a nicer way to do it!

Update: Have now created a new version of the script for Windows Server 2003, see below. Additionally, due to the lack of UAC on this OS, you can just create a shortcut to run the AutoHotkey script and put it in the All Users Start Menu Startup folder, no need for the scheduled task. Thus, this method will work for any logged on user on this OS.

; Auto-Accept Remote Assistance requests for Windows Server 2003
; rcmtech.co.uk, Sept. 2013
;
; keep this script running permanently
#Persistent
; set a timer to fire off the subroutine every 500msecs
SetTimer, CheckActiveWindow, 500

CheckActiveWindow:
; check to see if the active window is one of the Windows Remote Assistance ones that needs attention
if WinActive("Remote Assistance","","Webpage Dialog",""){
	; could be one of several different dialogue boxes that match the above criteria
	; only way to differentiate is by checking the size of the dialogue box
	WinGetActiveStats, Title, xDim, yDim, xPos, yPos
	if (yDim = 178){
		; or use xDim = 405
		; this is the initial "Would you like  to connect to your computer?"
		; click the "Yes" button
		Click, 294, 147
	}
}
if WinActive("Remote Assistance -- Webpage Dialog"){
	; this should be the dialogue that opens when the helper clicks "Request Control"
	; re-read the window dimensions here, otherwise yDim still holds the value from the previous window
	WinGetActiveStats, Title, xDim, yDim, xPos, yPos
	if (yDim = 340){
		; this is the subsequent "Would you like to allow  to share control of your desktop?"
		; click the "Yes" button
		Click, 297, 165
	}
}
return

If you modify the script for any other OSs please let me know.


External IT Consultants


I’ve just read this article on the Microsoft TechNet UK blog. On the whole I agree with the suggestions in the article. However, the point “Get the right expertise in” concerns me, as based on my experience, IT managers (or worse, non-IT managers) are all too eager to ship in external skills/advisors/consultants having ignored their in-house IT staff.

This is all very well; consultants can certainly provide good value and good solutions, but listening to your own in-house IT staff is also extremely important for managers to do. More often than not, the IT techs know where the problem areas are and have a good idea how to fix them. If they’re not able to identify or fix problems, then perhaps the organisation is not investing in its staff enough and more training should be provided.

Good in-house IT staff will know the systems they work with far more intimately than any external consultants will. Yes, we use industry standard products, but no, we do not configure or integrate them in a standard way. I’ve seen consultants completely break systems due to them making assumptions about how something should be operating which any decent internal IT tech (i.e. me) would have known about – had they been asked to be involved.

Once the install/upgrade/work package is over consultants leave, taking all their knowledge and experience with them. Whereas, by using in-house staff you retain and build upon the knowledge gained during the install/upgrade, which is invaluable for the on-going support and maintenance of the system. Nothing gives an IT tech more confidence with supporting a system than being trained and then actually implementing the system, dealing with issues as they arise during the trial and pilot phases.

Finally, training your in-house staff and getting them to do the work is likely to be more cost effective in the short, medium and long term, and will not give you the “dip” in the level of support provided during the time when the consultants leave and the in-house operations staff are building up their knowledge and confidence.

Likewise, it is a mistake to trust the word of pre-sales engineers over and above that of your own staff. Sales staff are there to make sales, and however much this should not happen (especially in environments that profess to follow best practice guidance such as ITIL and PRINCE2), I’ve seen it all too often: technology is mis-sold to non-technical managers without the IT staff being consulted until it’s a “done deal”. This is shooting yourself in the foot, as the possibly quite expensive system then doesn’t perform how it was supposed to, yet you’re stuck with it, having just blown a sizeable chunk of your budget on it. Worse still is the managers responsible not admitting to more senior business managers that a mistake has been made. The subsequent implementation delays, rising costs, poor performance and on-going support problems then make the IT staff look incompetent, when in fact the incompetence lies with the management who blindly bought the system in the first place.

To summarise: invest in and fully utilise your in-house IT staff – it’ll be to everyone’s benefit. Use consultants sparingly, for specialist work, and ensure your in-house staff are fully involved throughout.


RemoteApp Default Connection URL Group Policy not working


As of Windows 8 and Server 2012 you can configure the RemoteApp and Desktop Connection URL automatically via Group Policy. The policy setting is found in:

  • User Configuration
    • Policies
      • Administrative Templates
        • Windows Components
          • Remote Desktop Services
            • RemoteApp and Desktop Connections
              • Specify default connection URL

I set this policy for users of my forthcoming Server 2012 RDSH desktop environment, but when the users log on the RemoteApp and Desktop Connections control panel still says:

There are currently no connections available on this computer.

I had a look in the Event Logs and found this entry at the time the user logged on to the desktop:

Log Name: Microsoft-Windows-RemoteApp and Desktop Connections/Admin
Source: Microsoft-Windows-RemoteApp and Desktop Connections
Date: 07/10/2013 12:59:00
Event ID: 1026
Task Category: None
Level: Warning
Keywords:
User: RCMTECH\user
Computer: RDSH01.rcmtech.co.uk
Description:
The installation of the default connection has been cancelled. A default connection cannot be used on a system that is part of a Remote Desktop Services deployment.

User: RCMTECH\user

which rather says it all. I have my RDSH servers split into two groups: Those that publish desktops and those that publish applications. They each have their own Collection, but both share the same Connection Broker. It seems as though RemoteApp doesn’t allow this.

I then tried to configure the URL manually by going via Control Panel, but when I enter the URL and click through the wizard I get the following:

Access RemoteApp and desktops
An error occurred.
An error occurred. Contact your workplace administrator for assistance.

I get this both as a user and also if I log on to the desktop as an administrator. This seems to be linked to my using User Profile Disks – if I stop using those for the collection then I can at least run through the wizard manually (but this isn’t something I want my users to have to do).

I’m hoping the thing that’s stopping the Group Policy configuration method from working is because I’m sharing the Connection Broker, but I have a nasty feeling that for some reason Microsoft have decided that you’re not allowed to publish RemoteApp applications at all to a Remote Desktop Collection.

Update: I built a separate RD deployment and used its configuration URL instead but still got the same message. It does indeed seem that you are just not able to use a RemoteApp default connection URL at all within a Remote Desktop deployment. I have no idea why this should be, it seems to have been deliberately blocked by Microsoft. Anyone know why this might be?


NetBackup Exchange status code 1542


Exchange 2007 CCR; the NetBackup 7.5 policy is configured to back up from the passive node.

Backups for the cluster fail with code 1542:

An existing snapshot is no longer valid and cannot be mounted for subsequent operations(1542)

The detailed status shows something like:

29/10/2013 08:11:19 - Info nbjm(pid=4464) starting backup job (jobid=1731878) for client exch-cluster, policy EXCH_CLUSTER_SG5, schedule Daily_Full  
29/10/2013 08:11:19 - Info nbjm(pid=4464) requesting STANDARD_RESOURCE resources from RB for backup job (jobid=1731878, request id:{CB0B27C8-A3AF-4176-B14A-2E76AC254B65})  
29/10/2013 08:11:19 - requesting resource Adv_Disk_Group
29/10/2013 08:11:19 - requesting resource bgen-uwe10.NBU_CLIENT.MAXJOBS.exch-cluster
29/10/2013 08:11:19 - requesting resource bgen-uwe10.NBU_POLICY.MAXJOBS.EXCH-CLUSTER_SG5
29/10/2013 08:11:19 - granted resource bgen-uwe10.NBU_CLIENT.MAXJOBS.exch-cluster
29/10/2013 08:11:19 - granted resource bgen-uwe10.NBU_POLICY.MAXJOBS.EXCH-CLUSTER_SG5
29/10/2013 08:11:19 - granted resource MediaID=@aaaai;DiskVolume=D:\;DiskPool=nbu2_adv_d;Path=D:\;StorageServer=nbu2;MediaServer=nbu2
29/10/2013 08:11:19 - granted resource nbu2_adv_d
29/10/2013 08:11:19 - estimated 1175018 Kbytes needed
29/10/2013 08:11:19 - begin Parent Job
29/10/2013 08:11:19 - begin Snapshot, Start Notify Script
29/10/2013 08:11:19 - Info RUNCMD(pid=9740) started            
29/10/2013 08:11:19 - Info RUNCMD(pid=9740) exiting with status: 0         
Status 0
29/10/2013 08:11:19 - end Snapshot, Start Notify Script; elapsed time: 00:00:00
29/10/2013 08:11:19 - begin Snapshot, Step By Condition
Status 0
29/10/2013 08:11:19 - end Snapshot, Step By Condition; elapsed time: 00:00:00
29/10/2013 08:11:19 - begin Snapshot, Read File List
Status 0
29/10/2013 08:11:19 - end Snapshot, Read File List; elapsed time: 00:00:00
29/10/2013 08:11:19 - begin Snapshot, Create Snapshot
29/10/2013 08:11:19 - started
29/10/2013 08:11:21 - started process bpbrm (3352)
29/10/2013 08:11:30 - Info bpbrm(pid=3352) exch-cluster is the host to backup data from     
29/10/2013 08:11:30 - Info bpbrm(pid=3352) reading file list from client        
29/10/2013 08:11:30 - Info bpbrm(pid=3352) start bpfis on client         
29/10/2013 08:11:30 - Info bpbrm(pid=3352) Starting create snapshot processing         
29/10/2013 08:11:32 - Info bpfis(pid=9564) Backup started           
29/10/2013 08:11:48 - Info bpbrm(pid=3352) from client exch-nodeb: TRV - Redirecting snapshot backup to server (exch-nodeb)  
29/10/2013 08:11:48 - Info bpbrm(pid=3352) Read redirected snapshot host egen-mbx2b from client egen-mbx02INF - BACKUP_HOST=exch-nodeb   
29/10/2013 08:11:49 - Info bpfis(pid=9564) done. status: 0          
29/10/2013 08:11:49 - end Snapshot, Create Snapshot; elapsed time: 00:00:30
29/10/2013 08:11:50 - Info bpbrm(pid=3352) Starting delete snapshot processing         
29/10/2013 08:11:50 - Info bpfis(pid=9564) Deleting Snapshot exch-nodeb_1383034279 for client exch-nodeb       
29/10/2013 08:11:53 - Info bpfis(pid=4336) Backup started           
29/10/2013 08:11:53 - Critical bpbrm(pid=3352) from client exch-nodeb: FTL - cannot open C:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.egen-mbx02_1383034279.1.0    
29/10/2013 08:11:54 - Info bpfis(pid=4336) done. status: 1542          
29/10/2013 08:11:54 - end Parent Job; elapsed time: 00:00:35
29/10/2013 08:11:54 - Info bpbrm(pid=3352) start bpfis on client         
29/10/2013 08:11:54 - Info bpbrm(pid=3352) Starting create snapshot processing         
29/10/2013 08:11:56 - Info bpfis(pid=7460) Backup started           
29/10/2013 08:12:07 - Critical bpbrm(pid=3352) from client exch-nodeb: FTL - snapshot processing failed, status 156   
29/10/2013 08:12:07 - Critical bpbrm(pid=3352) from client exch-nodeb: FTL - snapshot creation failed - SNAPSHOT_NOTIFICATION::SnapshotPrepare failed., status 130
29/10/2013 08:12:07 - end writing
Status 130
29/10/2013 08:12:07 - end operation
29/10/2013 08:12:07 - begin Snapshot, Stop On Error
Status 0
29/10/2013 08:12:07 - end Snapshot, Stop On Error; elapsed time: 00:00:00
29/10/2013 08:12:07 - begin Snapshot, Delete Snapshot
29/10/2013 08:12:08 - Warning bpbrm(pid=3352) from client exch-nodeb: WRN - NEW_STREAM0 is not frozen    
29/10/2013 08:12:08 - Warning bpbrm(pid=3352) from client exch-nodeb: WRN - Microsoft Information Store:\SG5 - PF (C2) is not frozen
29/10/2013 08:12:08 - Info bpfis(pid=7460) done. status: 130          
29/10/2013 08:12:08 - end Snapshot, Delete Snapshot; elapsed time: 00:00:01
29/10/2013 08:12:08 - Info bpfis(pid=0) done. status: 130: system error occurred       
29/10/2013 08:12:09 - started process bpbrm (6604)
29/10/2013 08:12:15 - Info bpbrm(pid=6604) Starting delete snapshot processing         
29/10/2013 08:12:15 - Info bpfis(pid=0) Snapshot will not be deleted        
29/10/2013 08:12:17 - Info bpfis(pid=8916) Backup started           
29/10/2013 08:12:17 - Critical bpbrm(pid=6604) from client exch-nodeb: cannot open C:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.egen-mbx02_1383034279.1.0      
29/10/2013 08:12:18 - Info bpfis(pid=8916) done. status: 1542          
29/10/2013 08:12:18 - end operation
29/10/2013 08:12:18 - Info bpfis(pid=0) done. status: 1542: An existing snapshot is no longer valid and cannot be mounted for subsequent operations
29/10/2013 08:12:18 - end writing
Status 1542
29/10/2013 08:12:18 - end operation
29/10/2013 08:12:18 - begin Snapshot, End Notify Script
29/10/2013 08:12:18 - Info RUNCMD(pid=7804) started            
29/10/2013 08:12:18 - Info RUNCMD(pid=7804) exiting with status: 0         
Status 0
29/10/2013 08:12:18 - end Snapshot, End Notify Script; elapsed time: 00:00:00
Status 1542
29/10/2013 08:12:18 - end operation
An existing snapshot is no longer valid and cannot be mounted for subsequent operations(1542)

Check the VSS writer status on the passive node using the command:

vssadmin list writers

It should say “Stable” and “No error” but if it looks like this:

Writer name: 'Microsoft Exchange Writer'
   Writer Id: {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
   Writer Instance Id: {b5fc018e-3ae9-4e87-8637-766ed6dde1bb}
   State: [9] Failed
   Last error: Not responding

then there’s your problem.
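You could also catch this state proactively, e.g. from a scheduled check on the passive node before the backup window starts. A rough sketch that parses the vssadmin output (only the writer name is taken from the output above; the alert text is a placeholder for whatever your monitoring does):

```powershell
# Flag the Microsoft Exchange Writer if it is not reporting "No error".
$Writers = (vssadmin list writers | Out-String) -split "Writer name:"
foreach($Writer in $Writers){
    if(($Writer -match "Microsoft Exchange Writer") -and ($Writer -notmatch "No error")){
        Write-Host "Exchange VSS writer unhealthy - reboot the passive node before backups run"
    }
}
```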

You should also find the following error in the Application event log:

Log Name:      Application
Source:        VSS
Date:          29/10/2013 08:12:07
Event ID:      12301
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      exch-nodeb.rcmtech.co.uk
Description:
Volume Shadow Copy Service error: Writer Microsoft Exchange Writer did not respond to a GatherWriterStatus call. 

Operation:
   Gather writers' status
   Executing Asynchronous Operation

Context:
   Current State: GatherWriterStatus

Reboot the passive node to clear the VSS error and retry the backups.


VMware E1000e vmnic Windows Server 2012 data corruption


Found this yesterday: http://kb.vmware.com/kb/2058692

“On a Windows 2012 virtual machine using the default e1000e network adapter and running on an ESXi 5.0 or 5.1 host, you experience these symptoms: 

  • Data corruption may occur when copying data over the network. 
  • Data corruption may occur after a network file copy event.”

The E1000e is the default NIC type for Server 2012, though I always use vmxnet3.

I’ve written this PowerCLI / PowerShell script to identify VMs with the E1000e NIC (though it doesn’t check what the OS is):

$ProblemVMs = @{}
$VMs = Get-VM
$TotalVMs = $VMs.Count
$Processed = 0
foreach($VM in $VMs){
    $Processed++
    [int]$PercentComplete = ($Processed/$TotalVMs*100)
    Write-Progress -Activity "Checking VM NICs" -Status "Processing $VM" -PercentComplete $PercentComplete
    foreach($VMNic in (Get-NetworkAdapter -VM $VM)){
        # the ExtensionData type name ends in the NIC type, e.g. VMware.Vim.VirtualE1000e
        $VMNicType = $VMNic.ExtensionData.ToString().Replace("VMware.Vim.Virtual","")
        # matching on "e1000e" deliberately excludes the plain E1000, which is not affected
        if(($VMNicType -match "e1000e") -and (-not $ProblemVMs.ContainsKey($VM.Name))){
            $ProblemVMs.Add($VM.Name,$VMNicType)
        }
    }
}
Write-Progress -Activity "Checking VM NICs" -Completed
$ProblemVMs | Format-Table -AutoSize

You’ll obviously need to do a Connect-VIServer first as usual for PowerCLI.
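For remediation, PowerCLI can also swap the adapter type, though this is a disruptive per-VM change rather than a bulk fix: the VM must be powered off, and the guest sees a brand new NIC afterwards, so its IP configuration needs re-applying. A hedged sketch (the VM name is an example):

```powershell
# Replace any E1000e NICs on one VM with vmxnet3 (VMware Tools required in the guest).
$VM = Get-VM -Name "Server2012VM"       # example name
Stop-VMGuest -VM $VM -Confirm:$false    # graceful shutdown - wait for power-off before continuing
Get-NetworkAdapter -VM $VM |
    Where-Object {$_.ExtensionData.ToString() -match "E1000e"} |
    Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
Start-VM -VM $VM
```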

