Yet Another Invoke SQL PowerShell Script

A few weeks ago, I uploaded my PowerShell function Invoke-SQL to the Microsoft TechNet Gallery and I forgot to mention it here.

Invoke-SQL is designed to issue any valid SQL command text against an ODBC database connection. The function accepts either an existing OdbcConnection object or a connection string from which it creates a connection on the fly. By default, the function returns $true if the given SQL statement executed successfully. With the PassThru switch, the function loads the results into a DataTable object and returns it for further processing. If opening the ODBC connection or executing the SQL command text fails, Invoke-SQL returns nothing.
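Under the hood, the pattern described here – open an ODBC connection, execute command text, and optionally fill a DataTable – boils down to a few lines of .NET. A minimal sketch (not the function’s actual code; the connection string ‘DSN=MyDsn’ and the query are placeholders):

```powershell
# Sketch of the ODBC-to-DataTable pattern described above.
# 'DSN=MyDsn' is a placeholder connection string; adjust for your data source.
$connection = New-Object System.Data.Odbc.OdbcConnection('DSN=MyDsn')
try {
    $connection.Open()
    $command = New-Object System.Data.Odbc.OdbcCommand('SELECT 1 AS Answer', $connection)
    $adapter = New-Object System.Data.Odbc.OdbcDataAdapter($command)
    $table   = New-Object System.Data.DataTable
    [void]$adapter.Fill($table)   # load the result set into the DataTable
    $table                        # emit the DataTable, like -PassThru would
}
finally {
    $connection.Close()
}
```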

The Traditional IT Department – Your Business’ Blind Spot

Hey, CEO!

Is your organization still pursuing a non-Cloud strategy? If I asked you why, I bet you wouldn’t be stuck for an answer. You or your CIO would tell me that Cloud Computing doesn’t meet your requirements in terms of security, for example. That’s your decision, and it’s fine by me. But may I ask another question? Do you apply exactly the same standards you used to define your Cloud evaluation criteria to your current IT operational concept? Really?

So, let’s stick with security, as I guess it’s one of your main concerns regarding the Cloud. Usually, Cloud security concerns cover all aspects of a Cloud reference model. Above all, the Cloud Provider has to ensure that the IT infrastructure is secure and that the tenants’ data are protected. To meet this demand, the Cloud Provider has to implement several defensive controls that detect and prevent attacks, and that also reduce the impact of attacks. It’s about reducing the overall attack surface, and Cloud Providers need to be pretty good at this discipline – not least because they are constantly in the public eye. Cloud Providers who want to continue to exist have to face up to every security concern.

Now I ask you again: does your traditional IT meet the same level of security that you set to evaluate Cloud Computing, or do you have double standards? I see, you have firewalls, backup, disaster recovery, antivirus, data encryption and so on – so why bother? I’ll tell you why. All these security measures are, first of all, just tools and guidelines. But did you ever consider who operates them? Your IT department, of course, or a spin-off, maybe assisted by external workers. But do you really know what they do, or do you rather implicitly trust them? In the latter case, the IT department sits in a blind spot from the business perspective. Quite foggy, right? Fog… Cloud… Frankly speaking, you should consider your IT department a separate attack surface; perhaps it’s the weakest link in your security strategy.

First of all, to reduce this risk you should get in touch with your “IT crowd”, not just the CIO. Your business relies on these gals and guys. They are in a key position to proverbially shut down your business. Listen to them carefully, be thankful, and be willing to reward them. Maybe you’ll realize that you need a change in your organization’s culture. Go ahead! Invoke a cultural movement driven by the management. At the end of the day, it should be possible for anyone to speak their mind to anyone else regardless of hierarchy or command structure, because exactly the opposite leads to vulnerability. Think it over.

From a technical perspective, ironically, your IT department can benefit from the lessons learned in Cloud Computing. Here’s an example. Since this blog is mainly about Windows PowerShell, I’ll take the liberty of drawing your attention to Just Enough Administration (JEA) (Download Whitepaper). It’s based on technology you should already have in place and helps your organization “reduce risk by restricting operators to only the access required to perform specific tasks”.
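To give a flavor of the idea, here is a minimal sketch of a constrained PowerShell endpoint built with the session-configuration machinery the JEA approach rests on. Everything here is made up for illustration (the endpoint name ‘DnsOperator’, the whitelisted commands, the computer name); the real JEA toolkit described in the whitepaper goes well beyond this:

```powershell
# Sketch only: create a restricted endpoint definition that exposes
# nothing but a whitelisted set of commands to its operators.
# All names (DnsOperator, dns01) are hypothetical examples.
New-PSSessionConfigurationFile -Path .\DnsOperator.pssc `
    -SessionType RestrictedRemoteServer `
    -VisibleCmdlets 'Restart-Service', 'Get-Service'

# Register the endpoint on the server (requires elevation):
Register-PSSessionConfiguration -Name DnsOperator -Path .\DnsOperator.pssc

# An operator then connects to the constrained endpoint:
# Enter-PSSession -ComputerName dns01 -ConfigurationName DnsOperator
```

Inside such a session, the operator sees only the visible commands – admin rights stay with the endpoint, not with the person.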

Regards
Frank Peter Schultze

Updated: PowerShell Subversion Module

Yes, I know, these days Git is king of the hill. Anyway, I have shared my PowerShell module for Subversion at PoshCode.org and on the Microsoft TechNet Gallery. The module exposes a bunch of functions and aliases:

  • The function Update-SvnWorkingCopy is a wrapper for “svn.exe update” and brings the latest changes (HEAD revision) from the repository into an existing working copy.
  • The function Publish-SvnWorkingCopy is a wrapper for “svn.exe commit” and sends the changes from your working copy to the repository.
  • The function Import-SvnUnversionedFilePath is a wrapper for “svn.exe import” and commits an unversioned file or directory tree into the repository.
  • The function New-SvnWorkingCopy is a wrapper for “svn.exe checkout” and checks out a working copy from a repository (HEAD revision).
  • The function Get-SvnWorkingCopy is a wrapper for “svn.exe status” and returns the status of working copy files and directories.
  • The function Add-SvnWorkingCopyItem is a wrapper for “svn.exe add” and puts files and directories under version control, that is, it schedules them for addition to the repository upon the next commit.
  • The function Remove-SvnWorkingCopyItem is a wrapper for “svn.exe delete” and removes files and directories from version control, that is, it schedules them for deletion upon the next commit. (Items that have not been committed are immediately removed from the working copy.)
  • The function Repair-SvnWorkingCopy fixes a working copy that has been modified by non-svn commands in terms of file addition and removal. The function identifies items that are not under version control and items that are missing. It puts non-versioned items under version control, and it removes missing items from version control (i.e. schedule for deletion upon next commit).

Furthermore, the module alters PowerShell’s prompt function in order to display some information about the state of an SVN working copy.
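For illustration, a wrapper of this kind can be as thin as a parameter-checked call to svn.exe. A simplified sketch (not the module’s actual code; the real Update-SvnWorkingCopy is more elaborate):

```powershell
# Simplified sketch of an "svn.exe update" wrapper.
# Requires svn.exe (the Subversion command-line client) in the PATH.
function Update-SvnWorkingCopy {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [String]
        $Path
    )
    # A working copy is recognizable by its .svn metadata directory
    if (-not (Test-Path -Path (Join-Path $Path '.svn'))) {
        Write-Error "Not a working copy - $Path"
        return
    }
    # Bring the working copy up to the HEAD revision
    svn.exe update $Path
}
```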

Notes on PowerShell Summit NA 2014 #PSHSummit

Fortunately, I was able to attend the community-driven event PowerShell Summit 2014 at the Meydenbauer Center in Bellevue. I’m back home and really glad to see the back of 20+ hours of travel time. Before the jet lag kicks in again, I want to share my impressions and notes with you…

If you like Windows PowerShell and normally no one understands what the heck you’re talking about, the PSHSummit is the right place for you. It’s like spending three days in the epicenter of PowerShell knowledge. It’s about both the great sessions and the chance to meet people you may know from Twitter and blogs, including some members of the PowerShell Team and, not least, the godfather of PowerShell, Jeffrey Snover. So, if you ever wanted to get in touch with PowerShell-minded folks, go ahead and check PowerShell.org for details about the next PSHSummit. By the way, PowerShell.org has a great monthly marketing-free newsletter.

The bottom line from my perspective is that PowerShell, although mature, is still about to revolutionize how Microsoft-centric IT infrastructures will be built, configured, and (securely) administered. J. Snover kicked the event off with a session on “JitJea. Just In Time/Just Enough Admin. A PowerShell toolkit to secure a post-Snowden world.” From a high-level perspective, the JitJea approach enables admins to perform functions within a given timeframe without being granted admin privileges directly. In some lightning demos, the PowerShell Team showed that they have put huge effort into evolving DSC into an agile operations or DevOps toolkit, including cross-platform support. It was a great moment to see DSC in action as it configured a shell profile on a CentOS machine. For those who were asking why DSC leverages OMI, MOF, and these strange things: now you have the answer. PowerShell goes open – slowly but surely!


Book Review: Windows PowerShell 4.0 for .NET Developers

The tech book publisher Packt asked me to review Sherif Talaat‘s book “Windows PowerShell 4.0 for .NET Developers“, subtitled “A fast-paced PowerShell guide, enabling you to efficiently administer and maintain your development environment“. According to their own statement Packt’s “books focus on practicality, recognising that readers are ultimately concerned with getting the job done“. The book is available for purchase on www.packtpub.com.

To put it in a nutshell: from my perspective, Sherif delivers exactly what the book’s subtitle and Packt Publishing promise. It’s a well-made balancing act between being fast-paced and easy to follow while providing the reader (that is, a .NET developer) with essential information on how to leverage PowerShell.

This book is like distilled water: it delivers pure information in order to take the reader from “101” to a professional level in PowerShell. It’s only about 140 pages. That’s definitely no huge tome, and therefore it’s self-evident that it lacks deeper background information, trivia, and cleverly thought-out examples that unveil the proverbial Power of PowerShell. But wherever useful, the author cross-references resources with further information. Ideally, the reader already has some basic scripting knowledge and hands-on experience with one of the .NET programming languages – not least because the book is aimed at .NET developers who want to learn how to use PowerShell.

There are only five chapters:

  • Chapter 1 — Getting Started with Windows PowerShell — covers the basic stuff like the PowerShell console, PowerShell ISE, key features & concepts, and fundamentals like the object pipeline, aliases, variables, data types, operators, arrays, hash tables, script flow, providers, drives, comments, and parameters.
  • Chapter 2 — Unleashing Your Development Skills with PowerShell — is about working with CIM/WMI, XML, COM, .NET objects, PowerShell Modules and about script debugging/error handling.
  • Chapter 3 — PowerShell for Your Daily Administration Tasks — covers PowerShell Remoting, PowerShell Workflows, and PowerShell “in action” (from a .NET developer’s perspective), that is, managing Windows roles & features, local users & groups, IIS, and MS SQL.
  • Chapter 4 — PowerShell and Web Technologies — is about working with web services & requests, RESTful APIs, and JSON.
  • Chapter 5 — PowerShell and Team Foundation Server — covers the TFS cmdlets and shows how to get started and work with them.

Did I miss something? Yes, there’s no information about Desired State Configuration. But that’s as far as it goes.

Considering the fact that we are about to enter a new era in IT where developers and operators need to work closely together (or in one person) to continuously deliver services in automated IT infrastructures, a .NET developer should at least get a copy of this book in order to be prepared for the best ;-)

How to quickly fire up an iTunes playlist with PowerShell

With the PowerShell script below you can quickly start playing an iTunes playlist.

Preparation steps:
1. buy a device from the dark side (ha ha), install iTunes on your Windows computer
2. buy/import music and organize the tracks in playlists

Usage:
Let’s say you’ve prepared a playlist called ‘Blues Rock till the cow comes home’ in an iTunes library called ‘Mediathek’ and you want it to play in shuffle mode. Open a PowerShell command window and type:

C:\PS> .\Start-PlayList.ps1 -Source 'Mediathek' -Playlist 'Blues Rock till the cow comes home' -Shuffle

The iTunes application will open automagically and start playing tracks – and you can party till the cow comes…

<#
.SYNOPSIS
    Plays an iTunes playlist.
.DESCRIPTION
    Opens the Apple iTunes application and starts playing the given iTunes playlist.
.PARAMETER  Source
    Identifies the name of the source.
.PARAMETER  Playlist
    Identifies the name of the playlist
.PARAMETER  Shuffle
    Turns shuffle on (else don't care).
.EXAMPLE
   C:\PS> .\Start-PlayList.ps1 -Source 'Library' -Playlist 'Party'
.INPUTS
   None
.OUTPUTS
   None
#>
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [String]
    $Source
    ,
    [Parameter(Mandatory=$true)]
    [String]
    $Playlist
    ,
    [Switch]
    $Shuffle
)

try {
    $iTunes = New-Object -ComObject iTunes.Application
}
catch {
    Write-Error 'Unable to create the iTunes COM object. Download and install Apple iTunes.'
    return
}

$src = $iTunes.Sources | Where-Object {$_.Name -eq $Source}
if (!$src) {
    Write-Error "Unknown source - $Source"
    return
}

$ply = $src.Playlists | Where-Object {$_.Name -eq $Playlist}
if (!$ply) {
    Write-Error "Unknown playlist - $Playlist"
    return
}

if ($Shuffle) {
    if (!$ply.Shuffle) {
        $ply.Shuffle = $true
    }
}

$ply.PlayFirstTrack()

# Release the COM reference so the iTunes automation object isn't kept alive by PowerShell
[System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$iTunes) > $null
[GC]::Collect()

Get-Up, Get-IntoIt, Get-Involved…

Hello, World!

OMG, I haven’t posted anything since June 2013 when I was so enthusiastic about the upcoming DSC feature in PowerShell 4. What happened meanwhile?

Get-Up

These days I feel that we’re facing a new era in the IT consulting business. Here in Germany, a growing number of companies request automation solutions. For example, automation is increasingly a core requirement in calls for bids. Folks, I think we’re just about to enter the Golden Age of IT Automation. Finally! Automation will be a commodity. I’m so excited, I just can’t hide it.

Get-IntoIt

Why am I so happy? Since I entered the IT business in the mid-nineties, I’ve been an automation guy. I learned to get the most out of batch files and excessively leveraged other scripting languages, like VBScript, and tools in order to design automation frameworks for several purposes. After I moved into the IT consulting business, I didn’t lose my affinity for automation. From 2008 on, I was part of a team that built a PowerShell-based configuration management framework. With this framework we were able to help huge enterprises fully automate the installation and configuration of their Citrix farms, for example. It kept hundreds of servers in their defined state. Furthermore, it separated the business logic from the configuration logic. Thus, it was easy to build and maintain identical environments for Test, UAT, and Prod, for example. Pretty similar to Chef or Puppet, but for “the other OS”, that is, Windows ;-)

Get-Involved, Get-Involved, Get-Involved…

And now? Nowadays, the core of such a framework is built directly into PowerShell and is called Desired State Configuration. This is so cool. Just build your solution around DSC.
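To give you a taste, here is a minimal DSC configuration sketch using two built-in resources. The node name ‘SERVER01’ and the file contents are made-up examples:

```powershell
# Minimal DSC configuration sketch; 'SERVER01' and the file
# contents are hypothetical examples.
Configuration WebServerBaseline {
    Node 'SERVER01' {
        # Ensure the IIS role is installed
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        # Ensure a marker file exists with defined contents
        File MotD {
            DestinationPath = 'C:\inetpub\wwwroot\motd.txt'
            Contents        = 'Managed by DSC - do not edit manually.'
            Ensure          = 'Present'
        }
    }
}

# Compile the configuration to a MOF file and push it to the node:
WebServerBaseline -OutputPath .\Baseline
Start-DscConfiguration -Path .\Baseline -Wait -Verbose
```

The Local Configuration Manager on the target node then brings the machine into, and keeps it in, the declared state – exactly what our home-grown framework did the hard way.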

There’s more to come…

Pit

P.S.: The title of this post is freely adapted from James Brown (RIP)

Start the Windows Update Service Depending on Citrix PVS Disk Mode

A Microsoft Windows installation that is properly optimized to run from a Citrix PVS vDisk in standard mode won’t start the Windows Update service and a couple of other background services that alter the system. It makes no sense to start these services because PVS redirects any write access to the write cache, which is known to be volatile. When it comes to maintenance, the system needs to be booted from the vDisk in a writeable state (that is, nowadays, a maintenance version/VHD snapshot of the vDisk). In order to let the system pull updates from Microsoft WSUS, the Windows Update service needs to be configured to start. Following the maintenance tasks, as part of re-sealing the vDisk, the service will be disabled again by the PVS TargetOSOptimizer.exe utility.

How about a startup script that configures and starts the Windows Update service automatically dependent on the vDisk mode? Here we go:

@ECHO OFF
SETLOCAL
SET PrivateOrMaintenance=
FOR /F %%i IN (
  '"%ProgramFiles%\Citrix\Provisioning Services\GetPersonality.exe" $WriteCacheType /o'
) DO (
  IF %%i EQU 0 SET PrivateOrMaintenance=Y
)
IF NOT DEFINED PrivateOrMaintenance GOTO :END
sc.exe qc wuauserv | find.exe "START_TYPE" | find.exe "DISABLED" && (
  sc.exe config wuauserv start= auto
  sc.exe start wuauserv
)
REM
REM Add more here...
REM
:END
ENDLOCAL
EXIT /B

So, what happens here? Basically, the batch file uses the so-called personality data. In the course of booting from the vDisk, PVS injects these data into a file called Personality.ini in the root directory of the vDisk file system. The script leverages a command-line tool called GetPersonality.exe to retrieve the value of $WriteCacheType. A value of 0 indicates that the vDisk is writeable (private mode or a maintenance version), and thus the script configures the Windows Update service to start automatically and starts it. Additional reading: Managing Target Device Personality.

How to use it? Save the script as a batch file on the vDisk, for example in a scripts folder, and configure it to run on system startup (Windows Task Scheduler, LGPO, whatever).
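The same check could be done in PowerShell by reading Personality.ini directly, if you prefer that over the batch route. A hedged sketch – the path C:\Personality.ini and the key layout are assumptions based on the documentation linked above, so verify them on your own targets:

```powershell
# Sketch: parse $WriteCacheType straight out of Personality.ini.
# The path C:\Personality.ini is an assumption; adjust as needed.
$personality = @{}
Get-Content -Path 'C:\Personality.ini' | ForEach-Object {
    # Collect simple Name=Value pairs into a hashtable
    if ($_ -match '^\s*(?<Name>[^=]+?)\s*=\s*(?<Value>.*)$') {
        $personality[$Matches.Name] = $Matches.Value
    }
}

if ($personality['$WriteCacheType'] -eq '0') {
    # vDisk is writeable: enable and start the Windows Update service
    Set-Service -Name wuauserv -StartupType Automatic
    Start-Service -Name wuauserv
}
```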

Disclaimer: I hope that the information in this post is valuable to you. Your use of the information contained in this post, however, is at your sole risk. All information on this post is provided “as is”, without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by me. Further, I shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages.

Convert Citrix PVS Mcli-Get Output To Objects

Back in late 2009, I wrote a series of posts about the Citrix Provisioning Services PowerShell snap-in. 3.5 years later, even in the latest version of PVS the cmdlets still return structured text output instead of “real” objects. I’m still hoping that Citrix will provide us with a PVS module/snap-in that follows the common PowerShell standards.

Whatever; today I want to share a generalized version of my function that converts a text array (Mcli-Get output) into PowerShell/.NET objects. For more background information and an explanation of how the function below works, read my former blog post Citrix Provisioning Services 5.1’s PowerShell Interface, Part III.

function ConvertTo-PvsObject
{
    <#
    .SYNOPSIS
        Converts the output of Mcli-Get from text array to regular objects with properties.

    .DESCRIPTION
        The Citrix Provisioning Services cmdlets return text arrays instead of .NET objects.
        This function takes the output of a given Mcli-Get command and turns it into
        "PVS objects" with properties.

    .PARAMETER InputObject
        The output of a Mcli-Get command

    .EXAMPLE
        PS C:\> Mcli-Get DiskInfo | ConvertTo-PvsObject

    .EXAMPLE
        PS C:\> $diskinfo = Mcli-Get DiskInfo
        PS C:\> ConvertTo-PvsObject $diskinfo
    #>

    [cmdletBinding(SupportsShouldProcess=$False)]
    param (
        [Parameter(Mandatory=$true, Position=0, ValueFromPipeline=$true)]
        $InputObject
    )

    begin {
        # Holds the properties of the record that is currently being parsed
        $record = @{}
    }

    process {
        switch -regex ($InputObject) {
            "^Record\s#\d+$" {
                if ($record.Count) {
                    New-Object PSObject -Property $record
                }
                $record = @{}
            }
            "^\s{2}(?<Name>\w+):\s(?<Value>.*)" {
                $record.Add($Matches.Name, $Matches.Value)
            }
        }
    }

    end {
        if ($record.Count) {
            New-Object PSObject -Property $record
        }
    }
}
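To see the parsing in action without a PVS server at hand, you can feed the function a hand-crafted text array that mimics Mcli-Get output (assuming the function above has been loaded, e.g. by dot-sourcing the script; the property names below are made up for the demo):

```powershell
# Simulated Mcli-Get output; two records, two properties each.
$sample = @(
    'Record #1'
    '  diskLocatorName: vDisk-Win7'
    '  enabled: 1'
    'Record #2'
    '  diskLocatorName: vDisk-Win2008R2'
    '  enabled: 0'
)

# Yields two PSObjects with diskLocatorName and enabled properties
$sample | ConvertTo-PvsObject | Format-Table -AutoSize
```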
