
Get Azure VM Status With #PowerShell

The PowerShell cmdlet below, Get-AzureRmVMStatus, helps you to get a list of Azure VMs and their status (PowerState) within a given resource group. You can supply a VM name filter if you want to include only specific VMs in the result.
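A minimal sketch of such a function might look like this (the parameter names and the exact output shape are assumptions, not necessarily identical to the original cmdlet):

```powershell
function Get-AzureRmVMStatus {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$ResourceGroupName,

        [string]$VMNameFilter = '*'
    )

    Get-AzureRmVM -ResourceGroupName $ResourceGroupName |
        Where-Object { $_.Name -like $VMNameFilter } |
        ForEach-Object {
            # -Status returns the instance view that contains the PowerState
            $instanceView = Get-AzureRmVM -ResourceGroupName $ResourceGroupName -Name $_.Name -Status
            [pscustomobject]@{
                Name       = $_.Name
                PowerState = ($instanceView.Statuses |
                    Where-Object { $_.Code -like 'PowerState/*' }).DisplayStatus
            }
        }
}
```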

The usage of this function is as simple as…
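(The resource group name and filter value below are illustrative.)

```powershell
Get-AzureRmVMStatus -ResourceGroupName 'MyResourceGroup' -VMNameFilter 'web*'
```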

Of course, you need to log on to your Azure subscription first (Login-AzureRmAccount).

The function requires the Azure PowerShell cmdlets. Start here: Azure PowerShell Install Configure

HTH



Unattended Azure AD / Office 365 Connect For PowerShell Scripts

In the field, I occasionally stumble across Azure AD or Office 365 support scripts that contain hard-coded credentials for the Connect-MsolService cmdlet. This is mainly because these scripts are intended to run regularly in the background and therefore need to establish a connection without user interaction (that is, without a Get-Credential prompt). With this post I want to draw attention to a smarter approach that eliminates the risk of exposing plain-text passwords in script files.

In fact, saving and restoring credentials to and from a file is the perfect use case for Export-CliXml and Import-CliXml. You can pipe any object to Export-CliXml. It creates an XML representation of the object and saves it in a file. You can re-create the object based on the XML file with Import-CliXml. The best thing about it is that Export-CliXml encrypts credential objects with DPAPI to make sure that only your user account can decrypt the contents of the original credential object.

The following code is inspired by Example 3 of Export-CliXml’s help:
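A sketch of the idea, reconstructed from the description below (assuming the credential is used for Connect-MsolService afterwards):

```powershell
# Store the credential file next to the PowerShell profile, named after the script itself
$credentialFile = Join-Path -Path (Split-Path -Path $profile) `
                            -ChildPath ('{0}.credential' -f $MyInvocation.MyCommand.Name)

if (Test-Path -Path $credentialFile) {
    # Restore the PSCredential object; only the account that exported it can decrypt it (DPAPI)
    $credential = Import-CliXml -Path $credentialFile
}
else {
    # First run: prompt once and persist the credential for subsequent unattended runs
    $credential = Get-Credential
    $credential | Export-CliXml -Path $credentialFile
}

Connect-MsolService -Credential $credential
```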

In the code above, the file in which the credential is stored is represented by (‘{0}.credential’ -f $MyInvocation.MyCommand.Name), which resolves to the file name of the script plus the .credential suffix. The file is saved alongside the PowerShell profile ($profile). If the .credential file exists, the code leverages Import-CliXml to restore the credential object; if not, it invokes Get-Credential and saves the credentials with Export-CliXml. In either case the credential variable exposes the credential object.

Please note anyway: generally, you should avoid storing credentials in files at all. Opt for this approach only if there’s no better alternative.

Hope this helps



How to integrate PowerShell ISE with Service Management Automation

The other day I was checking out the Emulated Automation Activities module that, according to its author Joe Levy, “provides a PowerShell ISE-friendly implementation of all the SMA-only activities, using the SMA cmdlets behind the scenes”. The module works fine, but in the case of nested runbooks you would have to develop a corresponding emulation command for each inline call in order to test outside of SMA. For me, the bottom line is that EmulatedAutomationActivities is fine for developing and testing child runbooks separately with ISE; as far as parent runbooks are concerned, I opt for testing within SMA.

To be able to quickly upload a finished runbook definition to SMA (in my evaluation lab) and load an existing runbook definition into ISE I created two ISE Add-on menu items:

[Screenshot: ise-sma-addon]

Both options require the SMA PowerShell Module.

The “upload current file …” option requires a regular PowerShell file with a runbook definition in the current tab and uses the file name as the runbook name. If the runbook name already exists in SMA, it transfers the file as a new draft for the corresponding runbook (overwriting an existing draft). If the runbook name doesn’t exist, it simply imports the file into SMA.

The “load runbook …” option opens a list of all current runbooks in a grid view window. After selecting the runbook in question and clicking OK, it opens the runbook definition in a new ISE tab.

[Screenshot: ise-sma-addon2]

[Screenshot: ise-sma-addon3]

And here comes the code:
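The original listing boils down to two Add-ons menu entries; the following is a condensed sketch of that idea (the endpoint URL is a placeholder, and the SMA cmdlet/parameter usage may need adjusting to your SMA version):

```powershell
Import-Module Microsoft.SystemCenter.ServiceManagementAutomation
$webServiceEndpoint = 'https://sma01.contoso.com'   # adjust to your SMA web service

$null = $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add(
    'SMA: Upload current file as runbook (draft)',
    {
        $file = $psISE.CurrentFile
        if (-not $file -or $file.IsUntitled) { return }
        if (-not $file.IsSaved) { $file.Save() }
        $runbookName = [IO.Path]::GetFileNameWithoutExtension($file.FullPath)

        if (Get-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Name $runbookName -ErrorAction SilentlyContinue) {
            # Runbook already exists: transfer the file as a new draft (overwrites an existing draft)
            Edit-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Name $runbookName -Path $file.FullPath -Overwrite
        }
        else {
            # Runbook does not exist yet: simply import the file
            Import-SmaRunbook -WebServiceEndpoint $webServiceEndpoint -Path $file.FullPath
        }
    },
    $null
)

$null = $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add(
    'SMA: Load runbook into new tab',
    {
        $runbook = Get-SmaRunbook -WebServiceEndpoint $webServiceEndpoint |
            Out-GridView -Title 'Select a runbook' -OutputMode Single
        if ($runbook) {
            # Prefer the draft definition, fall back to the published one
            $definition = Get-SmaRunbookDefinition -WebServiceEndpoint $webServiceEndpoint -Name $runbook.RunbookName -Type Draft -ErrorAction SilentlyContinue
            if (-not $definition) {
                $definition = Get-SmaRunbookDefinition -WebServiceEndpoint $webServiceEndpoint -Name $runbook.RunbookName -Type Published
            }
            $newFile = $psISE.CurrentPowerShellTab.Files.Add()
            $newFile.Editor.Text = $definition.Content
        }
    },
    $null
)
```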

Please note that the information in this post is only of limited use with regard to production environments and continuous integration. I just want to provide some starting points.

Hope this helps

Disclaimer: I hope that the information in this post is valuable to you. Your use of the information contained in this post, however, is at your sole risk. All information on this post is provided “as is”, without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by me. Further, I shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if I have been advised of the possibility of such damages.



Default WebServiceEndpoint value for SMA Cmdlets

When using the cmdlets of the Service Management Automation (SMA) PowerShell module, all actions are targeted against an SMA web service and therefore have a required parameter called WebServiceEndpoint. If you’re kind of stressed out by repeatedly typing this parameter value, you can define a default parameter value in your Windows PowerShell session to set the value automatically. For example, in your PowerShell profile script, use the following command to set the default value for the WebServiceEndpoint parameter for all related SMA cmdlets:
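A sketch (the endpoint URL is just a placeholder; the wildcard matches the Sma noun prefix of the module’s cmdlets):

```powershell
$PSDefaultParameterValues.Add('*-Sma*:WebServiceEndpoint', 'https://sma01.contoso.com')
```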

Refer to “about_Parameters_Default_Values” in order to get more information about the $PSDefaultParameterValues built-in variable.

Hope this helps



PowerShell Profile In Da Cloud


Category : Windows PowerShell

If you want to keep the same PowerShell profile on more than one Windows computer, how about transferring the relevant common parts of the profile to a file share or even the cloud? Actually, that’s a no-brainer!

Below, I outline my approach with a few brief strokes.

1. Identify the parts of both the console and the ISE profile that you want to share with all computers.
2. Create a “Documents\WindowsPowerShell” folder in the root of your (cloud) storage mount point.
3. Within that folder, save the profile code related to console sessions as “Microsoft.PowerShell_profile.ps1” and the ISE profile code as “Microsoft.PowerShellISE_profile.ps1”.
4. In the local profile scripts, replace the “outsourced” profile code with a reference to its cloud-based equivalent:
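A sketch of that reference, reconstructed from the description below (the OneDrive path is an assumption; adjust it to your cloud storage mount point):

```powershell
# Full file name of the cloud-based counterpart of the current profile (console or ISE)
$global:CLOUDPROFILE = Join-Path -Path "$HOME\OneDrive\Documents\WindowsPowerShell" `
                                 -ChildPath (Split-Path -Path $profile -Leaf)

if (Test-Path -Path $global:CLOUDPROFILE) {
    # Dot-source the cloud-based profile script
    . $global:CLOUDPROFILE
}
```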

Apart from dot-sourcing an existing external profile script, the above code initializes a global CLOUDPROFILE variable with the full file name of the cloud-based profile script. Thus it’s very easy to access that file, for editing purposes for instance.

The next code snippet works with ISE only and enables you to skip profile loading by holding down the left CTRL key:
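A sketch of the idea (ISE hosts WPF, so the Keyboard class is already available there):

```powershell
if ($psISE -and [System.Windows.Input.Keyboard]::IsKeyDown([System.Windows.Input.Key]::LeftCtrl)) {
    Write-Warning 'Left CTRL detected - skipping the rest of the profile.'
    return
}
```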

Hope this helps



ISESteroids 2.0 Random Features


Category : Tool , Windows PowerShell

A few months ago, I wrote only a short note about the (at that time) upcoming release of the PowerShell module ISESteroids. More precisely, ISESteroids is an Add-On for PowerShell ISE by PowerShell MVP Tobias Weltner. This time I’ll highlight some randomly chosen features.

First start

Ok, that is not really a feature. You can start ISESteroids by entering Start-Steroids. Owing to the Module Auto-Loading feature, this will load the ISESteroids module. But mind the “sensitivity” of Module Auto-Loading! It already loads a module behind the scenes as soon as you “touch” it with Get-Command, Get-Help, or Tab Expansion. So be prepared that ISESteroids loads when you invoke commands like Get-Command -Module ISESteroids, Get-Help Start-Steroids, and such…

Expert Level

In a nutshell, ISESteroids‘ objective is to assist you in writing better PowerShell code more quickly. As ISESteroids is so packed with features, however, it could work against that aim and confuse beginners in particular. When you start ISESteroids for the first time it will ask for the Expert Level. Whatever you choose there, you can later set it to a more appropriate level by choosing the corresponding option in the “Expert Level” menu.

Screenshot: Selecting the Expert Level

Snippets

Yes, the concept of code snippets is already included in ISE. It’s a bit half-hearted, though. ISESteroids uses ISE’s snippet mechanism to its full advantage. If you press the default snippet key sequence CTRL+J, ISESteroids opens the snippet selector. First surprise: the snippets are organized in folders. Selecting a folder leads to the corresponding code snippets. (Btw, with the backspace key you move up in the folder structure.) Second surprise: the snippet selector allows for adding new folders and new snippets.

Screenshot: Selecting a code snippet

Screenshot: Snippet Manager

Compatibility

If you deal with the latest PowerShell version only, be happy in your bubble. My field experience differs regarding the PowerShell versions my scripts have to support. PowerShell 1.0 has really faded away by now, but I still stumble over version 2.0, for example. ISESteroids helps you to handle, or rather to prevent, compatibility issues by marking code that isn’t compatible with the targeted PowerShell version. So, be sure to check the appropriate option in the “Compatibility” menu. Apart from version-related compatibility, ISESteroids can mark commands that do not ship with PowerShell by default, giving you a sort of visual reminder of your script’s requirements.

Screenshot: Selecting compatibility

Risk Management

While you’re scripting, ISESteroids by default checks and analyzes your code against pre-defined risks. When a potential risk is detected you’ll be notified unobtrusively: ISESteroids changes the risk status indicator from green to yellow or red. Keep an eye on that indicator. You’ll find it in ISE’s status bar, and you can click it to enable/disable auto-checking, to approve the script, and so on. Furthermore, there’s an option to manage black/white lists. (Note that the trial version doesn’t allow you to edit the pre-defined rules.)

Screenshot: Risk Management settings

Screenshot: Risk assessment result

ScriptMap

Starting from, say, 100 lines of code you find yourself scrolling more and more. If you deal with rather huge scripts on a regular basis you’ll definitely like ISESteroids‘ ScriptMap feature. When turned on, it shows a preview of the entire script. If you move the mouse pointer over the ScriptMap area, it acts as a magnifying glass that helps you to identify the code region in question. A single click navigates to the chosen code region.

Screenshot: Navigating with ScriptMap

Navigating to function definition/references back and forth

Apart from ScriptMap ISESteroids has more help to offer regarding navigation within (huge) script files. Above the definition of a function ISESteroids displays the number of references to this function (within the same file).

Clicking on this information will navigate to the references:
[Screenshot: ise2-functionreferences]

If you want to navigate (back) to the function definition, right-click on the reference to the function in question and choose “Go To Definition”:
[Screenshot: ise2-functiondefinition]

CloneView and split screen

Again, if you regularly deal with larger scripts, ISESteroids helps you to minimize the ongoing effort of navigating back and forth in the code. CloneView displays the current editor in a detached external window. Just right-click anywhere within the editor you want to clone and choose “Open CloneView”. The split screen feature divides the current editor into two sections, so you can simultaneously view and work on different sections of a single script.

Screenshot: Split editor window

Navigation Bar

OMG, yet another feature for navigating huge scripts? Yes, and far more than that. If you turn on the Navigation Bar, at first sight you can both search text and instantly navigate to any function within the loaded script by selecting it from the list.

Screenshot: Selecting a function to navigate to from the Navigator Bar

Beyond that, the Navigation Bar…

  • offers access to a couple of snippets and templates,
  • enables you to create a function from selected code,
  • and enables you to export a selected function to a new/existing PowerShell module.

Screenshot: Selecting snippets and templates from the Navigation Bar

Screenshot: Create a function from selected code from the Navigation Bar

Screenshot: Export a selected function to a new PowerShell module from the Navigation Bar

File Version History

To come to an end, ISESteroids has a rather casual file versioning feature for those who care about version control but don’t want to get worked up over git, svn, cvs, tfs, etc. ISESteroids can keep a file version history for a given file. (Behind the scenes it maintains a zip archive with all the past major and minor versions.)

Screenshot: File version history feature

Bottom line: Give it a try! powertheshell.com



Scripting Street Knowledge

Category : Scripting Technique

Whenever it comes to building a scripted IT automation solution that goes beyond the scope of a few commands in a single script file, you need more than sufficient technical knowledge and scripting skills: you also need to approach the task in a way that helps you to resolve complexity and to plan ahead. With this post I want to raise your awareness of some General Principles for the Design of Scripted IT Automation Solutions that help you to master the situation. Here we go…

Nip it in the bud!

Don’t underestimate the beginnings. A quick-and-dirty approach isn’t a bad way per se, meaning that up to a certain point it’s a good way to go (at least from a benefit-cost perspective). Exceeding that “certain point”, though, at worst could lead to chaos: if you follow the quick-and-dirty approach you’ll succeed to some degree and end up fixing, updating, adapting, bolting on new features, and partly rewriting your solution over time. The day will dawn when you want to rebuild the entire solution from scratch as it has evolved into something hardly manageable. It’s difficult to walk in mud, so to speak. As things usually turn out there’s no time or budget for tasks like this, so you must drink as you have brewed.

And the moral of this story: never ever default to quick-and-dirty to begin with. It’s very important, possibly even the most important thing, to mind the early stages, because later there will hardly be more advantageous moments to put things (back) on the right track. So, be sure to make time for thinking and planning, and think it through to the end! Otherwise you might draw the short straw.

Whenever you’re confronted with a new scripting challenge, howsoever minor it seems, ask yourself:

  1. Is the requested solution a feature of the scripting language?
  2. Have co-workers or I ever created a solution like the requested one?
  3. Could co-workers or I eventually re-use the solution or parts thereof?

Don’t get me wrong regarding the first question, but quite often people tend to rebuild existing commands or features due to a lack of knowledge or preparation time. So, be sure to make time for figuring out whether you’re really dealing with something that doesn’t exist yet!

If the answer to the second question is “Yes” you definitely should figure out whether it’s worth the effort to adopt and adapt that solution. Here, likewise, the point is that you don’t want to reinvent the wheel.

If the answer to the third question is “Yes” you should by no means opt for quick-and-dirty; instead, design the solution for repeated application, or rather reusability.

As time goes on, you’ll get used to reusing your own work as partial solutions over and over again. At the end of the day you’ll realize that quick-and-dirty hardly ever would have been a suitable approach.

Apart from the above-mentioned questions, you should try to get to the bottom of the requested automation solution. Too often it turns out that the original request just covered the tip of the iceberg rather than the big picture. The big picture is exactly what you need, so be prepared to clarify it. Ideally, you carry out a scoping session to get the big picture and write down a scope statement that exactly defines the requested scripting solution.

Divide and conquer

Facing a more sophisticated scripting challenge raises the question of how to tackle the task from a scripting perspective. How to handle complexity? In computer science there is an algorithm design paradigm called “divide and conquer” (D&C) that works by recursively breaking down a given problem into sub-problems, solving the (plain) sub-problems, and combining these to solve the original (complex) problem. I highly recommend adopting the D&C approach as a general scripting principle. In other words, you break down a scripting challenge into as many separate scripting tasks as possible; after that you solve these scripting tasks individually; and finally you combine these partial, self-contained solutions into a complete solution. To put it another way: write a function for each single task; organize and bundle functions in libraries/modules; leverage these functions to solve the problem.

The basis for making many sub-solutions work together as a good “working team” is consistency and micromanagement, if you will. Essentially, it’s about establishing a scripting framework that allows for and ensures information flow between the individual tasks. You need to define a set of rules, let’s call it a scripting policy, that covers important details such as:

  • How functions deal with input
  • How functions pass back results (output)
  • How functions handle errors
  • How functions support testing and debugging/troubleshooting scenarios
  • Logging
  • Naming conventions
  • You name it

Again, avoid reinventing the wheel! Be sure to check what your preferred scripting language has to offer with regard to your scripting policy and leverage these features. Blueprint a mandatory function template that incorporates all your rules and use it consistently to embed the effective “payload script code” safely within your scripting framework.
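For PowerShell, such a template might look roughly like the following sketch; the names and conventions are just examples, not prescriptions:

```powershell
function Verb-Noun {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, ValueFromPipeline)]
        [ValidateNotNullOrEmpty()]
        [string]$InputObject
    )

    begin {
        Write-Verbose "[$($MyInvocation.MyCommand)] Starting"
    }

    process {
        Write-Debug "[$($MyInvocation.MyCommand)] InputObject = '$InputObject'"
        try {
            # --- payload script code goes here ---
            [pscustomobject]@{ Input = $InputObject; Result = $null }
        }
        catch {
            Write-Error "[$($MyInvocation.MyCommand)] $($_.Exception.Message)"
        }
    }

    end {
        Write-Verbose "[$($MyInvocation.MyCommand)] Done"
    }
}
```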

I must admit, though, that establishing such a scripting framework for the divide-and-conquer approach will blow up your solution. You need to do it anyway. However, you can keep it within reasonable limits if you don’t overdo things!

Expect Failure

While preparing a dish, experienced chefs are distinguished from amateur cooks by frequent tasting. Put simply: the pros expect failure and therefore continuously taste in order to identify a need for improvement or adjustment as quickly as possible. They retain total control. I highly recommend adopting the chef’s approach as another general scripting principle; in other words, leave nothing to chance.

Given that, according to the D&C principle, you write a function for each individual task, you should do this in fear of losing control, if you will. It’s about micromanagement! Mind each damned detail and ask yourself what could go wrong with it. No twilight zones allowed. Validate incoming information, test for connectivity, whatever. Got it? Better to double-check each detail than to rely on a fortunate series of events.

Always script with testing and debugging scenarios in mind. Take care to be one step ahead and insert some debug messages by default, for example to output the parameter values that were given to a function. Someday, trivialities like these will make your (or a co-worker’s) day.
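A minimal way to do that inside any function, for illustration:

```powershell
# Dump all bound parameters at the beginning of the function
foreach ($parameter in $PSBoundParameters.GetEnumerator()) {
    Write-Debug ('{0}: {1} = {2}' -f $MyInvocation.MyCommand, $parameter.Key, $parameter.Value)
}
```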

Design for Change

This one is about separating data from logic. Never mingle logic and data; this is essentially the key to flexibility and instant reusability. Script logic should only “know” how to process data, while the actual values should come from separate resources like SQL tables or XML/JSON/CSV files, you name it. Don’t even dream of hard-coded values!

With separation of data and logic it’s almost a no-brainer to set up the solution for another environment.
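A sketch of the idea (the file name and property names are illustrative):

```powershell
param (
    [ValidateSet('Dev', 'Test', 'Acceptance', 'Prod')]
    [string]$Environment = 'Dev'
)

# The logic below only knows HOW to process the settings;
# the actual values live in a per-environment JSON file.
$settings = Get-Content -Path ('.\settings.{0}.json' -f $Environment) -Raw | ConvertFrom-Json

Write-Verbose ('Target server: {0} ({1})' -f $settings.ServerName, $Environment)
```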

Summary

To conclude: there’s definitely more to tell, but in the end it’s all about thinking end-to-end and micromanaging. Mind the details!

If you miss something or think differently about something, feel free to contribute your ideas by submitting comments.



ISESteroids 2.0 is getting off the starting blocks…


Category : Tool , Windows PowerShell

… and you should give it a try: ISESteroids 2.0

To tell you the truth, up to yesterday I had some kind of prejudice against Tobias Weltner’s ISESteroids, meaning that I considered it a PowerShell ISE add-on that rather addresses a beginner’s needs.

During the 3rd German ‘PowerShell Community Konferenz 2015’ I changed my opinion. While giving his talks, Tobias showcased features of the upcoming release several times along the way. ISESteroids not only helps you to produce better PowerShell solutions, it also brings you up to speed, regardless of whether your level is beginner, advanced, expert, guru, whatever.

For example, imagine you’re challenged to write an advanced function with different parameter sets, mandatory parameters, and some optional parameters. With ISESteroids loaded, you can do things back to front; in other words, you start with writing the syntax as you want it to be, such as…
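…something along these lines (the command and parameter names are made up, purely for illustration; each line stands for one parameter set, written the way Get-Help would print the syntax):

```powershell
New-LabUser -Name <string> -Department <string> [-Mail <string>] [-WhatIf]
New-LabUser -Name <string> -Template <string> [-WhatIf]
```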

After that you just need to highlight the code, right-click, select the ISESteroids action to create a function, and – voilà – you get a neatly written skeleton for a function that exactly matches the syntax specifications you made.

Another great feature I saw in action was a WPF GUI builder that requires/interacts with Visual Studio.

I could go on listing the things I remember. But it still would be just the tip of the iceberg. ISESteroids is packed with features you need to discover while working with it. So will I.



Enums in Windows PowerShell Less Than Version 5.0

Maybe you’ve noticed that the upcoming version of Windows PowerShell, 5.0, will make Enumerators (Enums) very easy to create with the new enum keyword. With this post I share an approach to create enums in PowerShell 4.0 and lower as well.

(If you know what an enumerator is you can skip this section.) Enums help you to deal with rather small ranges of integer values (each value gets a name) and, even more importantly, they simplify programming robust solutions. Suppose you have to deal with different environments, for example Dev, Test, Acceptance, and Prod, and let’s say that each environment is represented by an int value (thus, 0 to 3 represents Dev to Prod). What happens if you assign the value 4 by mistake? For PowerShell it’s ok, because 4 is a valid int value. Therefore, this error will remain undetected at the scene and – according to Murphy – reveal its dark energy at the worst possible moment. You get the idea, I hope. It’s no fun to narrow down such problems. How to prevent such failure? You could mess around with if statements and -lt, -gt, -eq, for example. Or you make use of, guess what, an enum. If you have an enum type for the aforementioned environments, PowerShell will refuse to assign a variable of this type any value outside of the range 0..3 and throws an error right at the root cause. That’s why I’ve liked to use enums ever since PowerShell 1.0.

In Windows PowerShell 4.0 and below, Enums are created as follows:
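One common way to do this is to compile a small C# enum definition on the fly with Add-Type; the type and value names below are just an example:

```powershell
Add-Type -TypeDefinition @"
public enum TargetEnvironment
{
    Dev,
    Test,
    Acceptance,
    Prod
}
"@
```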

Now, play with it (that’s how I like to learn stuff, btw):
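For example, still using the illustrative TargetEnvironment type:

```powershell
[TargetEnvironment]$stage = 'Test'     # assign by name
$stage                                 # Test
[int]$stage                            # 1
$stage = 'Prod'
[int]$stage                            # 3
[Enum]::GetNames([TargetEnvironment])  # Dev, Test, Acceptance, Prod
```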

Now, let’s get dirty…
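Assigning an undefined value fails right away (the exact wording of the error may differ slightly from version to version):

```powershell
[TargetEnvironment]$stage = 4
# Throws immediately: the value 4 cannot be matched to a valid enumerator name,
# and the message lists the possible values: Dev, Test, Acceptance, Prod
```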

Btw, did you notice the hint within the error message? PowerShell lists the possible values for you.

Hope this helps



Citrix PVS Image Preparation Script for XenApp 7.x Workloads

With this post I share a PowerShell script, Prepare-XenApp7.ps1, that prepares the master installation of a XenApp 7.x worker for imaging with Citrix Provisioning Services.

Due to the fact that Citrix has ported its flagship XenApp to the architecture that was introduced with XenDesktop 5, there’s strictly speaking no need to generalize the PVS vDisk that provides the workload of a XenApp worker, because it doesn’t contain IMA-related stuff anymore. On the other hand, there’s still room for some optimization steps before putting a XenApp vDisk into production/standard mode. The script automates the following steps (a minimal sketch of some of them follows the list):

  • Investigate the PVS Personality.ini file in the root of the system drive in order to determine the disk mode, that is, read-write, read-only, or started from local HD
  • Clear Citrix User Profile Manager’s cache
  • Resync time
  • Update GPO settings
  • Clear network related caches (DNS and ARP)
  • Clear WSUS Client related settings
  • Clear event logs
  • Based on the findings in step 1, suggest a convenient main action, that is either “Exit” (if we’re in maintenance/private mode with read-write vDisk access), “Invoke ImagingWizard” (if we started from local HD), or “Invoke XenConvert” (reverse imaging scenario with read-only vDisk access)
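The following is not the actual Prepare-XenApp7.ps1, just a minimal sketch of some of the cleanup steps listed above, using standard Windows and PowerShell tooling:

```powershell
w32tm /resync                                    # resync time
gpupdate /target:computer /force                 # update GPO settings

ipconfig /flushdns | Out-Null                    # clear DNS resolver cache
netsh interface ip delete arpcache | Out-Null    # clear ARP cache

# Clear WSUS client identity so that cloned machines register individually
Stop-Service -Name wuauserv -Force
Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate' `
    -Name SusClientId, SusClientIdValidation -ErrorAction SilentlyContinue

# Clear the classic event logs
Get-EventLog -List | ForEach-Object { Clear-EventLog -LogName $_.Log }
```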

BTW, the script should work for desktop workloads as well but I haven’t tested it so far.

Hope this helps

Latest version on GitHub: Prepare-XenApp7.ps1