[Updated] Citrix PVS: Sync Local vDisk Store #PowerShell

I’ve reworked the Sync-PvsLocalStore.ps1 script. From a compatibility perspective, it now supports vDisks in both avhdx and avhd format.

Sync-PvsLocalStore.ps1 supports Citrix Provisioning Services Farms with a local vDisk store. This kind of architecture is not best practice, but in the field, especially among mid-sized companies, you’ll find such PVS farms.

The purpose of this script is to copy, or rather sync, changed and new versions of one or more given vDisks between the local stores within a Farm of two or more PVS servers. Basically, you can think of Sync-PvsLocalStore.ps1 as a wrapper for Robocopy.exe /MIR with some extra brains on top: it is able to detect and exclude a Maintenance version of a vDisk from the copy process. That means the script only spreads out the latest Production and Test versions of a vDisk and doesn’t bloat the stores with Maintenance versions, which are typically work in progress.
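To illustrate the effect (this is not the script’s actual code; server name, paths, and file name are made up), the sync to one member server boils down to something like:

```
robocopy D:\Store \\PVS02\D$\Store /MIR /XF vDisk01.5.avhdx
```

…where vDisk01.5.avhdx stands for a detected Maintenance version that must not be mirrored.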

The usage is very simple. Look at this example:
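For instance, a call could look like this (parameter names other than MasterServer and MemberServer are illustrative; check the script’s comment-based help for the exact names):

```powershell
.\Sync-PvsLocalStore.ps1 -MasterServer 'PVS01' `
                         -MemberServer 'PVS02','PVS03' `
                         -StorePath 'D:\Store' `
                         -vDiskName 'Win10-Gold','Win2016-Gold' `
                         -SiteName 'Site1' `
                         -StoreName 'Store1'
```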

You need to specify one ‘MasterServer’, one or more ‘MemberServer’, the path of the Store (which needs to be the same on each server), one or more vDisk names, and the name of the corresponding Site and Store. The latter two parameters help the script to identify any Maintenance version.

Sync-PvsLocalStore.ps1 needs to be run on a system where the PVS Console, or rather its command-line interface MCLI.EXE, is installed. (Although there’s a PowerShell module for PVS, the script utilizes the old-school MCLI.EXE in order to support a wider range of PVS versions.)

Please note: The script leverages robocopy’s /MIR switch, meaning that it may delete so-called EXTRA files on the given member PVS server(s) that are not present in the local vDisk store of the given master PVS server! Therefore, this script only makes sense when you consistently use a single PVS server for vDisk maintenance. Keep that in mind when using this script.

Hope this helps

Splatting To #PowerShell Job

Category : Uncategorized

A while back, I found a smart way to pass parameters to scripts in PowerShell background jobs.

Initially I was driven by the ambition to use Splatting and my first attempt looked like this:
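A sketch of that first attempt (script path and parameters are made up): the hashtable is handed to the job via -ArgumentList and splatted again inside the ScriptBlock:

```powershell
$params = @{
    Name  = 'World'
    Count = 3
}

# Hand the hashtable into the job, then re-splat it there
Start-Job -ScriptBlock {
    param($p)
    & 'C:\Scripts\MyScript.ps1' @p
} -ArgumentList $params
```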

It works, but in my view that approach reduces Splatting to absurdity. Therefore I continued trying different approaches and finally ended up with this surprisingly easy result:
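A sketch of that final approach (again with made-up script path and properties): convert the parameters to a PSCustomObject and pipe it to Start-Job, which forwards the piped object to the script’s pipeline:

```powershell
$params = [PSCustomObject]@{
    Name  = 'World'
    Count = 3
}

# The piped object binds to the script's parameters by property name
$job = $params | Start-Job -FilePath 'C:\Scripts\MyScript.ps1'
Wait-Job $job | Receive-Job
```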

If you’d like to use this approach, please take the following into account:
– You need to pipe a PSCustomObject to Start-Job (rather than a hash table)
– For each parameter, the job script needs ValueFromPipelineByPropertyName set to $true, such as below:
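For example, a job script with two hypothetical parameters, Name and Count, would declare them like this:

```powershell
param(
    [Parameter(ValueFromPipelineByPropertyName = $true)]
    [string]$Name,

    [Parameter(ValueFromPipelineByPropertyName = $true)]
    [int]$Count
)

"Hello, $Name (x$Count)"
```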

Hope this helps

Update Password in Unattend.xml #PowerShell

Category : Uncategorized

Today I’m sharing a small PowerShell function that helps to update/change the password in an Unattend.xml file. This might be useful in case you have to meet a strict password-changing policy in your company.


The function supports both plain-text and, more particularly, base64-encoded passwords. The function reads the given unattend.xml file and alters the value element of the section below:
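For a local administrator password, for instance, the relevant section looks like this (the Value content is a placeholder):

```xml
<UserAccounts>
    <AdministratorPassword>
        <Value>UABhAHMAcwB3AG8AcgBkACEA</Value>
        <PlainText>false</PlainText>
    </AdministratorPassword>
</UserAccounts>
```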

Please note that the function will overwrite the original file without asking for confirmation!

By the way, my Base64 PowerShell Module helps to convert from / to base64 encoded strings.

ShouldProcess in #PowerShell Pipeline

Did you know? You can leverage the $PSCmdlet.ShouldProcess method within a Pipeline as follows:
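A minimal, self-contained sketch (function and property names are made up): the ShouldProcess call acts as a pipeline filter, so with -WhatIf nothing passes through to the downstream commands:

```powershell
function Remove-Widget {
    [CmdletBinding(SupportsShouldProcess = $true)]
    param(
        [Parameter(ValueFromPipeline = $true)]
        [string]$Name
    )
    process {
        # ShouldProcess returns $false under -WhatIf, so Where-Object
        # filters the item out and nothing gets "removed"
        $Name |
            Where-Object { $PSCmdlet.ShouldProcess($_, 'Remove widget') } |
            ForEach-Object { "Removed $_" }
    }
}
```

Calling `'Foo','Bar' | Remove-Widget -WhatIf` prints the WhatIf messages and removes nothing.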

Although this is a no-brainer for PowerShell pros, I had never seen $PSCmdlet.ShouldProcess used in conjunction with Where-Object.

The example below is a small script with WhatIf support that removes disabled AD users:
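The script could look roughly like this (an untested sketch; it requires the ActiveDirectory module and appropriate permissions):

```powershell
[CmdletBinding(SupportsShouldProcess = $true)]
param()

Import-Module ActiveDirectory

# With -WhatIf, ShouldProcess filters out every user,
# so Remove-ADUser never runs
Get-ADUser -Filter 'Enabled -eq $false' |
    Where-Object { $PSCmdlet.ShouldProcess($_.SamAccountName, 'Remove AD user') } |
    Remove-ADUser -Confirm:$false
```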

Watch out! The script removes AD users.

[Updated] Citrix XenServer 7: Unattended Installation Pitfalls

Citrix XenServer 7.x still supports the previous approach for network-boot installation of XenServer hosts, based on PXE, TFTP, HTTP or NFS, and some post-install shell scripting. However, if you adapt your existing XenServer 6.x deployment system, you may face some challenges due to changes that I didn’t find explicitly mentioned in the installation guide.

Given that you have experience with unattended XenServer 6.x deployments, I just want to quickly share two major issues that prevented the installation process from succeeding:

1. Carefully double-check the files in the pxelinux.cfg directory, especially the append statement. Within that statement there has to be the word “install”. Example (see line 4):
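A hypothetical pxelinux.cfg file along these lines (label, paths, and answerfile URL are placeholders; note the word “install” in line 4):

```
default xenserver
label xenserver
  kernel mboot.c32
  append xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=1024M,max:1024M com1=115200,8n1 console=com1,vga --- xenserver/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://192.0.2.10/answerfile.xml install --- xenserver/install.img
```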

2. Unlike previous versions, XenServer 7.x won’t execute a “XenServer First Boot” shell script in /etc/rc3.d (even with chmod +x). Although it will execute a shell script in /etc/init.d that has a proper symbolic link in /etc/rc3.d, you should be aware that XenServer 7 uses systemd for bootup (instead of SysV style). Therefore, I recommend embracing systemd. For an example, see CTX217700.

Please note that systemd might terminate a long-running “XenServer First Boot” script due to exceeding a timeout limit. A simple approach to avoid that is to set TimeoutStartSec to “infinity” as follows:
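For example, a unit file along these lines (description, path, and targets are placeholders):

```ini
[Unit]
Description=XenServer First Boot script
After=network.target

[Service]
Type=simple
ExecStart=/opt/firstboot/firstboot.sh
# prevent systemd from killing a long-running first-boot script
TimeoutStartSec=infinity

[Install]
WantedBy=multi-user.target
```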

A more sophisticated approach to avoid script termination is to set the systemd unit’s type for “XenServer First Boot” to “notify” (in the code above it is “simple”). The notify type requires you to add notification commands to the XenServer First Boot script. systemd-notify seems to be a proper command for that purpose, but I haven’t tested it, as I opted for the infinite timeout approach.

Hope this helps

How To Change The CD/DVD Drive Letter? #Batch

Category : Batch

UPDATE Feb 16, 2017: For a PowerShell version that uses the approach below go to RemountCompactDisk.ps1

The cmd.exe command line below uses mountvol.exe in order to delete the old and to set the new drive letter assignment (where E: represents the old and Z: the new drive letter):
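Roughly like this, typed directly at a cmd.exe prompt (in a batch file, double the percent signs):

```
for /f %i in ('mountvol E:\ /L') do (mountvol E:\ /D & mountvol Z:\ %i)
```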

The FOR /F command parses a mountvol command that returns the CD-ROM drive’s volume name. The volume name looks like this:
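Something along these lines (the GUID varies per system):

```
\\?\Volume{bfa51b23-2a67-11e6-8b56-806e6f6e6963}\
```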

If FOR /F successfully determines a volume name, it executes two mountvol commands: the first one deletes the current drive letter assignment, the second one sets the new drive letter assignment (using the volume name).

#PowerShell Controller Script Template

Category : Uncategorized

When it comes to Windows PowerShell, in most instances, I deliver fairly complex scripts to bridge the gap between management systems that cannot communicate directly with each other. Usually these script solutions will be hooked into a Whatever Automation Suite that has control over the IT workflows. A mandatory requirement is that the script passes information back to the caller, be it an exit code, a message string, or an API call, you name it. Therefore, I’ve written a PowerShell template for a controller script that already includes the code to deal with multiple logical steps, with exceptions, and with passing back corresponding information. I decided to share a simplified edition of that template with the readers of this blog (it’s on GitHub). I consider it a good starting point for building a controller script that leverages the tools, PowerShell functions, and modules to succeed.

Plain vanilla structure

The template is structured in multiple sections, that is (1) requirements, (2) comment-based help, (3) parameters, (4) private helper functions, (5) initialization, and (6) main (implemented as a huge try-catch-finally).

Initialization region

In this region the script prepares for returning information as follows:

  • Assuming that the script will fail, an IsSuccessful variable will be set to $false.
  • A Result hashtable will be defined, with a nested hashtable for each possible result, that is success, unknown failure, and all well-known failure results.
  • A ScriptResult variable will be initialized as an empty hashtable. At the end it will hold the relevant information for the caller.
  • A ScriptResultOnError variable will initially be set to $Result.Unknown. In the course of script processing, that variable will be continuously updated with corresponding failure information.

Furthermore, in order to catch exceptions, the ErrorActionPreference variable will be set to Stop, which enables termination instead of continuation in case of exceptions.

Main region

Basically, the main region is a huge try-catch-finally statement meaning that once a command within the try-ScriptBlock throws an exception, PowerShell will run the catch-ScriptBlock. In any case the finally-ScriptBlock will be run. This is plain vanilla as well.

The try-ScriptBlock, however, is divided into regions. Each logical step in completing the script task is represented by a region. Each region begins with an update of the ScriptResultOnError variable so that it holds the corresponding script result in case of a downstream failure, or rather an exception. The actual commands for the region in question follow afterwards, and yet other commands might be necessary to validate the outcome. If the outcome doesn’t meet the requirements for the script to succeed, the region exits processing of the try-ScriptBlock with Break. The Break statement leads directly to the finally-ScriptBlock. (I distinguish between ‘hard’ errors, that is exceptions, and ‘soft’ errors, that is unfulfilled interim results. Hard errors lead to the catch-ScriptBlock. Soft errors don’t.) The final region inside the try-ScriptBlock is in charge of setting the above-mentioned IsSuccessful variable to $true, if applicable.

The catch-ScriptBlock will be used to log the exception message and the like.

The finally-ScriptBlock, finally, returns the script result to the script caller. In order to pass back correct information, it first sets the ScriptResult variable depending on IsSuccessful‘s value: if it is $true, ScriptResult will hold $Result.Success; if it is $false, it will be set to $ScriptResultOnError, which holds the ‘latest’ failure result (see above). Ultimately, $ScriptResult contains a hashtable with one or more elements to return. The script template contains sample code to terminate script execution with an exit code.
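A heavily simplified, hypothetical sketch of that skeleton (the actual template on GitHub is more elaborate; result names and exit codes are made up):

```powershell
#region Initialization
$ErrorActionPreference = 'Stop'   # turn exceptions into terminating errors
$IsSuccessful = $false            # assume failure until proven otherwise

$Result = @{
    Success       = @{ ExitCode = 0; Message = 'Completed successfully.' }
    Unknown       = @{ ExitCode = 1; Message = 'Unknown failure.' }
    StepOneFailed = @{ ExitCode = 2; Message = 'Step one failed.' }
}

$ScriptResult        = @{}
$ScriptResultOnError = $Result.Unknown
#endregion

#region Main
try {
    #region Step one
    $ScriptResultOnError = $Result.StepOneFailed
    # ...commands for step one; validate the outcome and
    # exit the try-ScriptBlock with Break on a 'soft' error
    #endregion

    #region Success
    $IsSuccessful = $true
    #endregion
}
catch {
    # log the exception, e.g. $_.Exception.Message
}
finally {
    $ScriptResult = if ($IsSuccessful) { $Result.Success } else { $ScriptResultOnError }
    # return the result to the caller, e.g.:
    # exit $ScriptResult.ExitCode
}
#endregion
```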

Hope this helps to make the world a better place.

PowerShell Controller Script Template

Base64 Module #PowerShell

Category : Windows PowerShell

Every now and then, I need powershell.exe’s EncodedCommand parameter in order to avoid trouble with characters that interfere with command-line processing. EncodedCommand accepts a base64-encoded string, and powershell.exe’s help shows how to get the base64 representation of a given PowerShell command. But I always forget how to reverse engineer such a string, and therefore I decided to write two functions, ConvertTo-Base64String and ConvertFrom-Base64String.

I’ve bundled the functions in a PowerShell Module. Please find it in the PowerShell Gallery: Base64

The module adds two ScriptMethods to the System.String type, ToBase64String() and FromBase64String(), and the functions leverage these methods. Thus, you have the following possibilities to convert strings from/to base64:
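For example (the function and method names are those mentioned above; the plain .NET calls at the end show what such conversions look like without the module, using the Unicode encoding that powershell.exe’s EncodedCommand expects):

```powershell
# With the Base64 module loaded:
#   'Get-Process' | ConvertTo-Base64String
#   'Get-Process'.ToBase64String()
#   $encoded | ConvertFrom-Base64String
#   $encoded.FromBase64String()

# Without the module, using .NET directly:
$encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes('Get-Process'))
$decoded = [Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($encoded))
```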


Updated: Subversion PowerShell Module

In May 2014, I published a PowerShell module in the TechNet Gallery. It has a five-star rating, after all based on four votes, LOL. Anyway, I revised that module in order to support pipeline processing and WhatIf. Furthermore, I decided to publish the Subversion module in the PowerShell Gallery.

The Subversion PowerShell module provides a bunch of functions and aliases to work with Subversion/SVN working copies. It requires the Subversion command-line binary svn.exe.

  • Update-SvnWorkingCopy brings the latest changes (HEAD revision) from the repository into an existing working copy.
  • Publish-SvnWorkingCopy sends the changes from your working copy to the repository.
  • Import-SvnUnversionedFilePath commits an unversioned file or directory tree into the repository.
  • New-SvnWorkingCopy checks out a working copy from a repository (HEAD revision).
  • Get-SvnWorkingCopy returns the status of working copy files and directories.
  • Add-SvnWorkingCopyItem puts files and directories under version control, that is, scheduling them for addition to the repository in the next commit.
  • Remove-SvnWorkingCopyItem removes files and directories from version control, that is, scheduling them for deletion upon the next commit. (Items that have not been committed are immediately removed from the working copy.)
  • Repair-SvnWorkingCopy fixes a working copy that has been modified by non-svn commands in terms of file addition and removal. The function identifies items that are not under version control and items that are missing. It puts non-versioned items under version control, and it removes missing items from version control (i.e. schedules them for deletion upon the next commit).
  • Switch-SvnWorkingCopy updates the working copy to a different URL within the same repository.

The functions provide only basic functionality and work fine with the subversion command line client from http://www.collab.net/downloads/subversion

Get Azure VM Status With #PowerShell

The PowerShell cmdlet below, Get-AzureRmVMStatus, helps you get a list of Azure VMs and their status (PowerState) within a given resource group. You can supply a VM name filter if you want to include only specific VMs in the result.
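A minimal sketch of what such a function can look like (assuming the old AzureRm module; parameter names are illustrative, not necessarily the original ones):

```powershell
function Get-AzureRmVMStatus {
    param(
        [Parameter(Mandatory = $true)]
        [string]$ResourceGroupName,

        [string]$VMNameFilter = '*'
    )

    # -Status adds runtime information such as PowerState to the list view
    Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Status |
        Where-Object { $_.Name -like $VMNameFilter } |
        Select-Object Name, PowerState
}
```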

The usage of this function is as simple as…
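For example (resource group name and filter are made up):

```powershell
Get-AzureRmVMStatus -ResourceGroupName 'MyResourceGroup' -VMNameFilter 'web*'
```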

Of course, you need to log on to your Azure subscription first (Login-AzureRmAccount).

The function requires Azure PowerShell Cmdlets. Start here: Azure PowerShell Install Configure