
[Updated] Citrix XenServer 7: Unattended Installation Pitfalls

Citrix XenServer 7.x still supports the previous approach for network boot installation of XenServer hosts, based on PXE, TFTP, HTTP or NFS, plus some post-install shell scripting. However, if you adapt your existing XenServer 6.x deployment system you may face some challenges due to changes that I didn’t find explicitly mentioned in the installation guide.

Given that you have experience with unattended XenServer 6.x deployments, I just want to quickly share two major issues that prevented the installation process from succeeding:

1. Carefully double-check the files in the pxelinux.cfg directory, especially the append statement. Within that statement there has to be the word “install”. Example (see line 4):
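A hypothetical pxelinux.cfg sketch is shown below — the labels, paths, and the answer file URL are placeholders of mine, to be replaced with your environment’s values; only the “install” keyword within the append statement (line 4) is the point here:

```
default xenserver
label xenserver
   kernel mboot.c32
   append xenserver/xen.gz dom0_max_vcpus=1-2 dom0_mem=752M,max:752M console=vga --- xenserver/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://10.0.0.1/answerfile.xml install --- xenserver/install.img
```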

2. Unlike previous versions, XenServer 7.x won’t execute a “XenServer First Boot” shell script in /etc/rc3.d (even with chmod +x applied). Although it will execute a shell script in /etc/init.d that has a proper symbolic link in /etc/rc3.d, you should be aware of the fact that XenServer 7 uses systemd for bootup (instead of SysV style). Therefore I recommend embracing systemd (for an example see CTX217700).

Please note that systemd might terminate a long-running “XenServer First Boot” script due to exceeding a timeout limit. A simple approach to avoid that is to set TimeoutStartSec to “infinity” as follows:
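A minimal unit file sketch (the unit description and the script path are placeholders I made up; the relevant settings are Type=simple and TimeoutStartSec=infinity):

```
[Unit]
Description=XenServer First Boot
After=network-online.target

[Service]
Type=simple
ExecStart=/opt/firstboot/firstboot.sh
TimeoutStartSec=infinity
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```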

A more sophisticated approach to avoid script termination is to set the systemd unit’s type for “XenServer First Boot” to “notify” (in the code above it is “simple”). The notify type requires you to add notification commands to the XenServer First Boot script. systemd-notify seems to be a proper command for that purpose, but I haven’t tested it as I opted for the infinite timeout approach.

Hope this helps



How To Change The CD/DVD Drive Letter? #Batch


Category : Batch

UPDATE Feb 16, 2017: For a PowerShell version that uses the approach below go to RemountCompactDisk.ps1

The cmd.exe command line below uses mountvol.exe in order to delete the old and to set the new drive letter assignment (where E: represents the old and Z: the new drive letter):
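For illustration, here is a sketch of such a command as a small batch file — a reconstruction of mine, so the exact FOR /F options may differ from the original one-liner; test it before use:

```batch
@echo off
rem E: is the old, Z: the new drive letter.
rem "mountvol E: /L" prints the volume name mounted at E:.
for /f %%v in ('mountvol E: /L') do (
    mountvol E: /D
    mountvol Z: %%v
)
```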

The FOR /F command parses the output of a mountvol command that returns the CD-ROM drive’s volume name. The volume name looks like this:
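For example (the GUID is, of course, just a made-up sample):

```
\\?\Volume{2eca078d-5cbc-43d3-aff8-7e8e7460d8d7}\
```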

If FOR /F successfully determines a volume name, it executes two mountvol commands: the first one deletes the current drive letter assignment, the second one sets the new drive letter assignment (using the volume name).



#PowerShell Controller Script Template

Category : Uncategorized

When it comes to Windows PowerShell, in most instances, I deliver fairly complex scripts to bridge the gap between management systems that cannot communicate directly with each other. Usually these script solutions are hooked into a Whatever Automation Suite that has control over the IT workflows. A mandatory requirement is that the script passes information back to the caller, be it an exit code, a message string, or an API call, you name it. Therefore, I’ve written a PowerShell template for a controller script that already includes the code to deal with multiple logical steps, with exceptions, and with passing back corresponding information. I decided to share a simplified edition of that template with the readers of this blog (it’s on GitHub). I consider it a good starting point to build a controller script that leverages the tools, PowerShell functions, and modules to succeed.

Plain vanilla structure

The template is structured in multiple sections, that is (1) requirements, (2) comment-based help, (3) parameters, (4) private helper functions, (5) initialization, and (6) main (implemented as a huge try-catch-finally).

Initialization region

In this region the script prepares for returning information as follows:

  • Assuming that the script will fail, an IsSuccessful variable is set to $false.
  • A Result hashtable is created with a nested hashtable for each possible result, that is success, unknown failure, and all well-known failure results.
  • A ScriptResult variable is initialized as an empty hashtable. At the end it will hold the relevant information for the caller.
  • A ScriptResultOnError variable is initially set to $Result.Unknown. In the course of script processing that variable will be updated continuously with corresponding failure information.

Furthermore, in order to catch exceptions, the ErrorActionPreference variable will be set to Stop, which enables termination instead of continuation in case of exceptions.
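The initialization region could look roughly like this — the concrete result entries, exit codes, and messages below are placeholders of mine, not the template’s actual content:

```powershell
#region Initialization
$ErrorActionPreference = 'Stop'   # turn errors into catchable, terminating exceptions
$IsSuccessful = $false            # assume failure until proven otherwise

# One nested hashtable per possible outcome; extend with your well-known failures
$Result = @{
    Success = @{ ExitCode = 0; Message = 'Task completed successfully.' }
    Unknown = @{ ExitCode = 1; Message = 'Task failed due to an unknown error.' }
    Prereq  = @{ ExitCode = 2; Message = 'Prerequisite check failed.' }
}

$ScriptResult        = @{}              # will hold the information returned to the caller
$ScriptResultOnError = $Result.Unknown  # continuously updated as the script proceeds
#endregion
```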

Main region

Basically, the main region is a huge try-catch-finally statement meaning that once a command within the try-ScriptBlock throws an exception, PowerShell will run the catch-ScriptBlock. In any case the finally-ScriptBlock will be run. This is plain vanilla as well.

The try-ScriptBlock, however, is divided into regions. Each logical step in completing the script task is represented by a region. Each region begins with an update of the ScriptResultOnError variable so that it holds the corresponding script results in case of a downstream failure or, rather, exception. The actual commands for the region in question follow afterwards, and yet other commands might be necessary to validate the outcome. If the outcome doesn’t meet the requirements for the script to succeed, the region exits processing of the try-ScriptBlock with Break. The Break statement will lead directly to the finally-ScriptBlock. (I distinguish between ‘hard’ errors, that is exceptions, and ‘soft’ errors, that is unfulfilled interim results. Hard errors lead to the catch-ScriptBlock. Soft errors don’t.) The final region inside the try-ScriptBlock is in charge of setting the above-mentioned IsSuccessful variable to $true, if applicable.

The catch-ScriptBlock will be used to log the exception message and the like.

The finally-ScriptBlock, finally, returns the script result to the script caller. In order to pass back correct information, it first sets the ScriptResult variable depending on IsSuccessful‘s value. If it is $true, ScriptResult will hold $Result.Success; if it is $false, it will be set to $ScriptResultOnError, which holds the ‘latest’ failure results (see above). Ultimately, $ScriptResult contains a hashtable with one or more elements to return. The script template contains sample code to terminate script execution with an exit code.
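Putting the pieces together, the main region could be sketched like this (step names, result keys, and the logging command are placeholders of mine; the actual template on GitHub is the reference):

```powershell
#region Main
try {
    #region Step 1: Prerequisites
    $ScriptResultOnError = $Result.Prereq
    # ...commands for step 1, followed by outcome validation...
    # if (-not $outcomeOk) { break }   # 'soft' error: jump to finally
    #endregion

    #region Final step: flag success
    $IsSuccessful = $true
    #endregion
}
catch {
    # 'hard' error: log the exception message and the like
    Write-Warning -Message $_.Exception.Message
}
finally {
    if ($IsSuccessful) { $ScriptResult = $Result.Success }
    else               { $ScriptResult = $ScriptResultOnError }
    Write-Output -InputObject $ScriptResult.Message
    exit $ScriptResult.ExitCode
}
#endregion
```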

Hope this helps to make the world a better place.

PowerShell Controller Script Template



Base64 Module #PowerShell


Category : Windows PowerShell

Every now and then, I need powershell.exe’s EncodedCommand parameter in order to avoid trouble due to characters that interfere with command line processing. EncodedCommand accepts a base-64-encoded string and powershell.exe’s help shows how to get the base-64-representation of a given PowerShell command. But I always forget how to reverse engineer such a string and therefore decided to write two functions, ConvertTo-Base64String and ConvertFrom-Base64String.

I’ve bundled the functions in a PowerShell Module. Please find it in the PowerShell Gallery: Base64

The module adds two ScriptMethods to the System.String type, ToBase64String() and FromBase64String(), and the functions leverage these methods. Thus, you have the following possibilities to convert strings from/to base64:

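Usage could look like this — note that the -String parameter name is my assumption, not verified against the module:

```powershell
$text = 'Get-Process | Sort-Object CPU'

# Via the functions...
$encoded = ConvertTo-Base64String -String $text
$decoded = ConvertFrom-Base64String -String $encoded

# ...or via the ScriptMethods added to System.String
$encoded = $text.ToBase64String()
$decoded = $encoded.FromBase64String()
```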



Updated: Subversion PowerShell Module

In May 2014, I published a PowerShell module in the TechNet Gallery. It has a five star rating after all, based on four votes, LOL. Anyway, I revised that module in order to support pipeline processing and -WhatIf. Furthermore, I decided to publish the Subversion module in the PowerShell Gallery.

The Subversion Powershell Module provides a bunch of functions and aliases to work with Subversion/SVN working copies. It requires the Subversion command-line binary svn.exe.

  • Update-SvnWorkingCopy brings the latest changes (HEAD revision) from the repository into an existing working copy.
  • Publish-SvnWorkingCopy sends the changes from your working copy to the repository.
  • Import-SvnUnversionedFilePath commits an unversioned file or directory tree into the repository.
  • New-SvnWorkingCopy checks out a working copy from a repository (HEAD revision).
  • Get-SvnWorkingCopy returns the status of working copy files and directories.
  • Add-SvnWorkingCopyItem puts files and directories under version control, that is, scheduling them for addition to the repository in the next commit.
  • Remove-SvnWorkingCopyItem removes files and directories from version control, that is, scheduling them for deletion upon the next commit. (Items that have not been committed are immediately removed from the working copy.)
  • Repair-SvnWorkingCopy fixes a working copy that has been modified by non-svn commands in terms of file addition and removal. The function identifies items that are not under version control and items that are missing. It puts non-versioned items under version control, and it removes missing items from version control (i.e. schedule for deletion upon next commit).
  • Switch-SvnWorkingCopy updates the working copy to a different URL within the same repository.

The functions provide only basic functionality and work fine with the subversion command line client from http://www.collab.net/downloads/subversion
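For instance, a typical round trip could look like this — URL, paths, and parameter names are my assumptions for illustration, not verified against the module:

```powershell
# Check out a working copy, add a new file, and commit it
New-SvnWorkingCopy -Url 'https://svn.example.com/repos/trunk' -Path C:\src\trunk
Set-Content -Path C:\src\trunk\readme.txt -Value 'hello'
Add-SvnWorkingCopyItem -Path C:\src\trunk\readme.txt
Publish-SvnWorkingCopy -Path C:\src\trunk -Message 'Add readme'
```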



Get Azure VM Status With #PowerShell

The PowerShell function below, Get-AzureRmVMStatus, helps you get a list of Azure VMs and their status (PowerState) within a given resource group. You can supply a VM name filter if you want to include only specific VMs in the result.
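The function body is roughly along these lines — a sketch of mine rather than the verbatim original, and the parameter names are assumptions:

```powershell
function Get-AzureRmVMStatus {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$ResourceGroupName,

        [string]$VMNameFilter = '*'
    )

    # -Status makes Get-AzureRmVM include the PowerState in the list output
    Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Status |
        Where-Object -Property Name -Like $VMNameFilter |
        Select-Object -Property Name, PowerState
}
```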

The usage of this function is as simple as…
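For example (resource group name and filter parameter name are placeholders of mine):

```powershell
# All VMs in the resource group...
Get-AzureRmVMStatus -ResourceGroupName 'rg-lab'

# ...or only VMs whose names start with 'web'
Get-AzureRmVMStatus -ResourceGroupName 'rg-lab' -VMNameFilter 'web*'
```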

Of course, you need to log on to your Azure subscription first (Login-AzureRmAccount).

The function requires Azure PowerShell Cmdlets. Start here: Azure PowerShell Install Configure

HTH



Unattended Azure AD / Office 365 Connect For PowerShell Scripts

In the field, I occasionally stumble over Azure AD or Office 365 support scripts that contain hard-coded credentials for the Connect-MsolService cmdlet. This is mainly because these scripts are intended to run regularly in the background and therefore need to establish a connection without the user interaction caused by Get-Credential. With this post I want to draw attention to a smarter approach that eliminates the risk of exposing plain-text passwords in script files.

In fact, saving/restoring credentials to/from file is the perfect use case for Export-CliXml and Import-CliXml. You can pipe any object to Export-CliXml. It creates an XML-representation of the object and saves it in a file. You can re-create the object based on the XML file with Import-CliXml. The best thing about it is that the Export-CliXml cmdlet encrypts credential objects with DPAPI to make sure that only your user account can decrypt the contents of the original credential object.

The following code is inspired by Example 3 of Export-CliXml’s help:
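A sketch of that code, reconstructed from the description below (Connect-MsolService at the end is my addition to show the intended use):

```powershell
# Store the .credential file alongside the PowerShell profile,
# named after the calling script
$credentialFilePath = Join-Path -Path (Split-Path -Path $profile) `
    -ChildPath ('{0}.credential' -f $MyInvocation.MyCommand.Name)

if (Test-Path -Path $credentialFilePath) {
    # Restore the credential object; DPAPI decryption works only
    # for the same user account that exported it
    $credential = Import-CliXml -Path $credentialFilePath
}
else {
    # First run: ask interactively, then save for subsequent runs
    $credential = Get-Credential
    $credential | Export-CliXml -Path $credentialFilePath
}

Connect-MsolService -Credential $credential
```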

In the code above, the file in which the credential is stored is represented by (‘{0}.credential’ -f $MyInvocation.MyCommand.Name), which resolves to the file name of the script plus the .credential suffix. The file will be saved along with the PowerShell profile ($profile). If the .credential file exists, the code will leverage Import-CliXml to restore the credential object; if not, it will invoke Get-Credential and save the credentials with Export-CliXml. In either case, the credential variable ends up holding the credential object.

Please note anyway: Generally you should avoid storing credentials in plain-text files. Opt for this approach only if there’s no better alternative.

Hope this helps



How to integrate PowerShell ISE with Service Management Automation

The other day I was checking out the Emulated Automation Activities module that, according to its author Joe Levy, “provides a PowerShell ISE-friendly implementation of all the SMA-only activities, using the SMA cmdlets behind the scenes”. The module works fine, but in case of nested runbooks you would have to develop a corresponding emulation command for each inline call in order to test outside of SMA. As for me, the bottom line is that EmulatedAutomationActivities is fine for developing and testing child runbooks separately with ISE, and as far as parent runbooks are concerned I opt for testing within SMA.

To be able to quickly upload a finished runbook definition to SMA (in my evaluation lab) and load an existing runbook definition into ISE I created two ISE Add-on menu items:

[Screenshot: the two SMA options in the ISE Add-ons menu]

Both options require the SMA PowerShell Module.

The “upload current file …” option requires a common PowerShell file with a runbook definition in the current tab and considers the file name as the runbook name. If the runbook name already exists in SMA, it transfers the file as a new draft for the corresponding runbook (it overwrites an existing draft). If the runbook name doesn’t exist, it simply imports the file into SMA.

The “load runbook …” option opens a list of all current runbooks in a grid view window. After selecting the runbook in question and clicking OK, it will open the runbook definition in a new ISE tab.

[Screenshot: runbook selection in the grid view window]

[Screenshot: runbook definition opened in a new ISE tab]

And here comes the code:
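At its core, it registers the two menu items via the ISE object model, roughly as sketched below — Upload-SmaRunbookFile and Open-SmaRunbookDefinition are made-up placeholder names for the actual worker functions:

```powershell
# Add the two items to the ISE Add-ons menu; the third argument
# is an optional keyboard shortcut ($null = none)
$menu = $psISE.CurrentPowerShellTab.AddOnsMenu
$null = $menu.Submenus.Add('Upload current file as SMA runbook draft',
    { Upload-SmaRunbookFile -Path $psISE.CurrentFile.FullPath }, $null)
$null = $menu.Submenus.Add('Load SMA runbook into new ISE tab',
    { Open-SmaRunbookDefinition }, $null)
```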

Please note that, with regard to production environments and continuous integration, the information contained in this post is only suitable to a limited extent. With this post I just want to provide some starting points.

Hope this helps

Disclaimer: I hope that the information in this post is valuable to you. Your use of the information contained in this post, however, is at your sole risk. All information in this post is provided “as is”, without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by me. Further, I shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if I have been advised of the possibility of such damages.



Default WebServiceEndpoint value for SMA Cmdlets

When using the cmdlets of the Service Management Automation (SMA) PowerShell module, all actions are targeted against an SMA web service and therefore have a required parameter called WebServiceEndpoint. If you’re kinda stressed out by repetitively typing this parameter value, you can define a default parameter value in your Windows PowerShell session to set the value automatically. For example, in your PowerShell profile script, use the following command to set the default value for the WebServiceEndpoint parameter for all related SMA cmdlets:
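A one-liner along these lines should do it — the endpoint URL is a placeholder, and the wildcard relies on the SMA cmdlets sharing the Sma noun prefix:

```powershell
# Applies to Get-SmaRunbook, Start-SmaRunbook, and all other *-Sma* cmdlets
$PSDefaultParameterValues['*-Sma*:WebServiceEndpoint'] = 'https://sma01.example.com'
```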

Refer to “about_Parameters_Default_Values” in order to get more information about the $PSDefaultParameterValues built-in variable.

Hope this helps



PowerShell Profile In Da Cloud


Category : Windows PowerShell

If you want to keep the same PowerShell profile on more than one Windows computer, how about transferring the relevant common parts of the profile to a file share or even the Cloud? Actually, that’s a no-brainer!

Below, I outline my approach with a few brief strokes.

1. Identify the parts of both the console and the ISE profile that you want to share with all computers.
2. Create a “Documents\WindowsPowerShell” folder in the root of your (cloud) storage mount point.
3. Within that folder, save the profile code related to console sessions as “Microsoft.PowerShell_profile.ps1” and the ISE profile code as “Microsoft.PowerShellISE_profile.ps1”.
4. Replace the “outsourced” profile code from the local profile scripts with a reference to their cloud-based equivalents:
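A sketch of that reference, reconstructed from the description below — the cloud storage mount point path is a placeholder for your own:

```powershell
# Full file name of the cloud-based profile script, kept in a
# global variable for easy access later on
$Global:CLOUDPROFILE = Join-Path -Path 'C:\Users\me\OneDrive' `
    -ChildPath 'Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'

# Dot-source the cloud-based profile if it exists
if (Test-Path -Path $Global:CLOUDPROFILE) {
    . $Global:CLOUDPROFILE
}
```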

Apart from dot-sourcing an existing external profile script, the above code initializes a global CLOUDPROFILE variable with the full file name of the cloud-based profile script. Thus it’s very easy to access that file for editing purposes and the like.

The next code snippet works with ISE only and enables you to skip profile loading by holding down the left CTRL key:
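A sketch of that snippet, placed at the very top of the ISE profile script (this works because ISE is a WPF host, so the Keyboard class is available; the message text is mine):

```powershell
# Hold down the left CTRL key while ISE starts to skip the profile
if ([System.Windows.Input.Keyboard]::IsKeyDown('LeftCtrl')) {
    Write-Host 'Profile loading skipped.'
    return
}

# ...regular profile code continues here...
```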

Hope this helps