Windows PowerShell

Introduction

Recently I’ve been playing with Windows PowerShell 2.0 again, in the context of my day-to-day activities. One hint should suffice for the reader to get an idea of what’s going on: push-based collections. While I’ll follow up on this subject pretty soon, this precursor post explains one of the things I had to work around.

 

PowerShell: a managed application or not?

Being designed around the concept of managed object pipelines, one may expect powershell.exe to be a managed executable. However, it turns out this isn't entirely the case. If you try to run ildasm.exe on the PowerShell executable (which lives in %windir%\system32\WindowsPowerShell\v1.0 despite the 2.0 version number, due to setup complications), you get the following message:

image

So much for the managed executable theory. What else could be going on to give PowerShell the power of managed objects? Well, it could be hosting the CLR. To check this theory, we can use the dumpbin.exe tool with the /imports flag, checking for mscoree.dll functions being called. And indeed, we encounter the CorBindToRuntimeEx function, which was the way to host the CLR prior to .NET 4's in-process side-by-side introduction (a feature I should blog about as well, since I wrote a CLR host for in-process side-by-side testing on my prior team here at Microsoft).

image

One of the parameters passed to CorBindToRuntimeEx is the version of the CLR to be loaded. Geeks can use WinDbg or cdb to set a breakpoint on this function and investigate the version parameter passed to it by the PowerShell code:

image

Notice the old code name of PowerShell still being revealed in the third stack frame (from the top). In order to hit this breakpoint on a machine that has .NET 4 installed, I've used the mscoreei.dll module rather than mscoree.dll. The latter has become a super-shim in the System32 folder, while the former is where the CLR shim really lives ("i" stands for "implementation"). This refactoring has been done to aid in servicing the CLR on different versions of Windows, where the operating system "owns" the files in the System32 folder.

Based on this experiment, it’s crystal clear the CLR is hosted by Windows PowerShell, with hardcoded affinity to v2.0.50727. This is in fact a good thing since automatic roll-forward to whatever the latest version of the CLR is on the machine could cause incompatibilities. One can expect future versions of Windows PowerShell to be based on more recent versions of the CLR, once all required testing has been carried out. (And in that case, one will likely use the new “metahost” CLR hosting APIs.)

 

Loading .NET v4 code in PowerShell v2.0

The obvious question, with regard to some of the stuff I've been working on, is whether we can run .NET v4 code in Windows PowerShell v2.0. It shouldn't be a surprise this won't work as-is, since the v2.0 CLR is loaded by the PowerShell host. Even if the hosting APIs weren't involved and the managed executable were compiled against .NET v2.0, that version's CLR would take precedence. This is in fact the case for ISE:

image

Trying to load a v4.0 assembly in Windows PowerShell v2.0 pathetically fails – as expected – with the following message:

image

So, what are the options to get this to work? Let’s have a look.

Warning:  None of those hacks are officially supported. At this point, Windows PowerShell is a CLR 2.0 application, capable of loading and executing code targeting .NET 2.0 through .NET 3.5 SP1 (all of which run on the second major version of the CLR).

 

Option 1 – Hacking the parameter passed to CorBindToRuntimeEx

If we just need an ad-hoc test of Windows PowerShell v2.0 running on CLR v4.0, we can take advantage of WinDbg once more. Simply break on CorBindToRuntimeEx and replace the v2.0.50727 string in memory with the v4.0 version, i.e. v4.0.30319. The "eu" command used for this purpose stands for "edit memory Unicode":

image

If we let the debugger go after this tweak, we'll ultimately see Windows PowerShell running seemingly fine, this time on CLR 4.0. One proof is the fact that we can now load the .NET 4 assembly we tried to load before:

image

Another proof can be found by looking at the DLL list for the PowerShell.exe instance in Process Explorer:

image

We no longer see mscorwks.dll (which is indicative of CLR 2.0 or below); a clr.dll module appears instead. While this hack works fine for single-shot experiments, we may want something more usable for demo and development purposes.

Note:  Another option – not illustrated here – would be to use Detours and intercept the CorBindToRuntimeEx call programmatically, performing the same parameter substitution as the one we've shown through the lenses of the debugger. Notice though that CorBindToRuntimeEx has been deprecated since .NET 4, so this is, and remains, a bit of a hack either way.

 

Option 2 – Hosting Windows PowerShell yourself

The second option we’ll explore is to host Windows PowerShell ourselves, not by hosting the CLR and mimicking what PowerShell.exe does, but by using the APIs provided for this purpose. In particular, the ConsoleShell class is of use to achieve this. Moreover, besides simply hosting PowerShell in a CLR v4 process, we can also load snap-ins out of the box. But first things first, starting with a .NET 4 Console Application, add a reference to the System.Management.Automation and Microsoft.PowerShell.ConsoleHost assemblies which can be found under %programfiles%\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0:

image

The little bit of code required to get basic hosting to work is shown below:

using System;
using System.Management.Automation.Runspaces;
using Microsoft.PowerShell;

namespace PSHostCLRv4
{
    class Program
    {
        static int Main(string[] args)
        {
            var config = RunspaceConfiguration.Create();
            return ConsoleShell.Start(
                config,
                "Windows PowerShell - Hosted on CLR v4\nCopyright (C) 2010 Microsoft Corporation. All rights reserved.",
                "",
                args);
        }
    }
}

Using the RunspaceConfiguration object, it’s possible to load snap-ins if desired. Since that would reveal the reason I was doing this experiment, I won’t go into detail on that just yet :-). The tip in the introduction should suffice to get an idea of the experiment I’m referring to. Here’s the output of the above:

image
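
As an aside, loading a snap-in through the RunspaceConfiguration object before starting the shell boils down to a single call. Here's a minimal sketch of what that could look like (the snap-in name below is purely a placeholder for whatever registered snap-in you want preloaded):

var config = RunspaceConfiguration.Create();

// Preload a registered snap-in into the hosted shell; "MySnapIn" is a
// hypothetical name used for illustration only.
PSSnapInException warning;
config.AddPSSnapIn("MySnapIn", out warning);
if (warning != null)
{
    Console.WriteLine("Snap-in loaded with warnings: " + warning.Message);
}

return ConsoleShell.Start(config, "Windows PowerShell - Hosted on CLR v4", "", args);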

While this hosting on .NET 4 is all done using legitimate APIs, it's better to be conservative when it comes to using this in production, since PowerShell hasn't been blessed to be hosted on .NET 4. Compatibility between CLR versions and across the framework assemblies has been a huge priority for the .NET teams (I was there when it happened), so everything should be fine. But the slightest bit of pixie dust (e.g. changes in timing for threading, a classic!) could reveal some issue. Till further notice, use this technique only for testing and experimentation.

Enjoy and stay tuned for more PowerShell fun (combined with other technologies)!


Last week, I had the honor to speak at TechEd 2008 South Africa on a variety of topics. In this post I'll outline all of the resources, including uploads of all my demos, referred to during my presentations. But before I do so, I sincerely want to point out what a great audience I got. Thanks to everyone for attending, for asking lots of interesting and challenging questions, for the kind words, the great and honest evaluations and so much more – you were a fabulous audience. Hope to see you all again next year!

DEV 305 – C# 3.0 and LINQ Inside Out

MGT 301 – Next-Generation Manageability – Windows PowerShell and MMC 3.0

DEV 303 – Parallel Extensions to the .NET Framework

DEV 304 – Writing Custom LINQ Providers

Again, thank you very much for attending, have fun and see you around sooner or later!


Introduction

Lately I've been playing quite a bit with DLR technologies, including IronRuby. During some experiments I came to the conclusion that the Kernel.` method isn't implemented yet in the current version. This `backtick` method allows executing OS commands from inside a Ruby program. It's a bit like Process.Start, redirecting the standard output as a string to the Ruby program for further use (actually the Kernel.exec method is precisely implemented like this).

image

One thing I like about the NotImplementedException is the fact it points unambiguously to the source that's missing :-). Indeed, if you browse the IronRuby source, you'll find this:

[RubyMethod("`", RubyMethodAttributes.PrivateInstance)]
[RubyMethod("`", RubyMethodAttributes.PublicSingleton)]
public static MutableString ExecuteCommand(CodeContext/*!*/ context, object self, [NotNull]MutableString/*!*/ command) {
    // TODO:
    throw new NotImplementedException();
}

Actually there are a couple of things here that are worth discussing besides the currently empty method body:

  • Strings in Ruby are mutable as opposed to strings in the CLR/BCL. Therefore they are wrapped in a MutableString class.
  • The weird /*!*/ notation indicates not-nullness, mirrored after the equivalent uncommented form in Spec# (e.g. MutableString! means a non-nullable string). To work with regular C#, the NotNullAttribute is used.
  • Methods like this one are not invoked by Ruby directly, instead the RubyMethodAttribute declaration carries the metadata that provides the entry-point to the method (as well as other metadata).

Kernel.`

I won't cover the differences between Kernel.`, Kernel.system and Kernel.exec; more information can be found here. The backtick one is our target for the scope of this post:

`cmd` => string

Returns the standard output of running cmd in a subshell. The built-in syntax %x{…} uses this method. Sets $? to the process status.

   `date`                   #=> "Wed Apr  9 08:56:30 CDT 2003\n"
   `ls testdir`.split[1]    #=> "main.rb"
   `echo oops && exit 99`   #=> "oops\n"
   $?.exitstatus            #=> 99

Actually I want to focus on (yet another) powerful feature in PowerShell, namely the ability to create custom hosts. What we want to achieve here is that the backtick syntax (or the equivalent %x syntax) runs the specified command as a PowerShell command (or a pipeline of multiple cmdlets), emitting the string output as the ` method's return value. Notice though that this actually downgrades one of the core principles of PowerShell, namely the use of objects through the pipeline rather than falling back to strings. One could easily think of a more powerful way to expose the results of a PowerShell invocation as PSObjects in Ruby, but we'll keep that for later.

Important: This post outlines no more than the capability to hook up PowerShell in IronRuby through Kernel.`. Obviously no promises are made about the way Kernel.` will eventually be implemented in IronRuby as we move forward.

Building a custom PS host

Creating custom PS hosts isn't that hard, depending on how much functionality you want to take over. We'll stick with the basics of console I/O, actually just the O in this. What we want to get done is this:

  1. Build up a runspace containing the command passed to Kernel.` (in addition to some more pipeline commands to produce the right output, see further).
  2. Invoke the built-up pipeline.
  3. Retrieve the string output from the host, concatenate it into one big string and return that one to the caller of Kernel.`.

 

Preparing for PowerShell programming

In order to extend PowerShell, you'll need to add a reference to System.Management.Automation.dll which can be found in the Reference Assemblies folder:

image

Runspaces

Let's start at the very top by implementing a method called "InvokePS" that sets up the infrastructure to call PowerShell:

public static class RubyToPS
{
    public static string InvokePS(string command)
    {
        RubyPSHost host = new RubyPSHost();

        using (Runspace runspace = RunspaceFactory.CreateRunspace(host))
        {
            runspace.Open();

            using (Pipeline pipeline = runspace.CreatePipeline())
            {
                pipeline.Commands.AddScript(command);
                pipeline.Commands[0].MergeMyResults(PipelineResultTypes.Error, PipelineResultTypes.Output);
                pipeline.Commands.Add("out-default");

                pipeline.Invoke();
            }
        }

        return ((RubyPSHostUserInterface)host.UI).Output;
    }
}

The RubyPSHost class will be shown next; let's focus on the Runspace stuff for now. A runspace serves as the entry-point to the PowerShell engine and encapsulates all the state needed to execute pipelines. Once we've opened the runspace, a pipeline is created to which we add the passed-in command as a script. This allows more than just one cmdlet invocation to be executed (e.g. "gps | where { $_.WorkingSet64 -gt 50MB }"). To send output to the host we append Out-Default to the pipeline:

NAME
    Out-Default

SYNOPSIS
    Send the output to the default formatter and the default output cmdlet. This cmdlet has no effect on the formatting or output. It is a placeholder that lets you write your own Out-Default function or cmdlet.

SYNTAX
    Out-Default [-inputObject <psobject>] [<CommonParameters>]

DETAILED DESCRIPTION
    The Out-Default cmdlet sends the output to the default formatter and the default output cmdlet. This cmdlet has no effect on the formatting or output. It is a placeholder that lets you write your own Out-Default function or cmdlet.

The MergeMyResults call is used to ensure that error objects produced by the first command are merged into the output (otherwise you'll get an exception instead). Finally the output is retrieved from the host after invoking the pipeline. How this works will be covered in a minute.

To read more about PowerShell runspaces, check out my other posts on the topic.

Deriving from PSHost

Custom PowerShell hosts derive from the abstract PSHost base class. There's quite some stuff that can be done here but we'll stick with the absolute minimum functionality required to reach our goals:

internal class RubyPSHost : PSHost
{
    private Guid _hostId = Guid.NewGuid();
    private RubyPSHostUserInterface _ui = new RubyPSHostUserInterface();

    public override Guid InstanceId
    {
        get { return _hostId; }
    }

    public override string Name
    {
        get { return "RubyPSHost"; }
    }

    public override Version Version
    {
        get { return new Version(1, 0); }
    }

    public override PSHostUserInterface UI
    {
        get { return _ui; }
    }

    public override CultureInfo CurrentCulture
    {
        get { return Thread.CurrentThread.CurrentCulture; }
    }

    public override CultureInfo CurrentUICulture
    {
        get { return Thread.CurrentThread.CurrentUICulture; }
    }

    public override void EnterNestedPrompt()
    {
        throw new NotImplementedException();
    }

    public override void ExitNestedPrompt()
    {
        throw new NotImplementedException();
    }

    public override void NotifyBeginApplication()
    {
        return;
    }

    public override void NotifyEndApplication()
    {
        return;
    }

    public override void SetShouldExit(int exitCode)
    {
        return;
    }
}

More information about all of those methods and properties can be found on MSDN. The most important one to us is the UI property, which points at our PSHostUserInterface implementation called RubyPSHostUserInterface.

Implementing PSHostUserInterface

Where the PSHost class provides basic information concerning the metadata of the host (name, version, id), lifetime of the host (nested prompts, exit commands) and general settings (cultures), the PSHostUserInterface class deals with "dialog-oriented and line-oriented interaction between the cmdlet and the user, such as writing to, prompting for, and reading from user input" (from MSDN). The part we're interested in is the writing part. We won't deal with prompts or user interaction - if one wants to do this, the Kernel.` command is no longer non-interactive. (A possible alternative way to implement this would be to spawn PowerShell.exe and just grab the shell's output - the default host would take care of all user interaction if required; the only problem is that prompts would appear in the Kernel.` output as well.) Implementing this class isn't that hard either:

internal class RubyPSHostUserInterface : PSHostUserInterface
{
    private StringBuilder _sb;

    public RubyPSHostUserInterface()
    {
        _sb = new StringBuilder();
    }

    public override void Write(ConsoleColor foregroundColor, ConsoleColor backgroundColor, string value)
    {
        _sb.Append(value);
    }

    public override void Write(string value)
    {
        _sb.Append(value);
    }

    public override void WriteDebugLine(string message)
    {
        _sb.AppendLine("DEBUG: " + message);
    }

    public override void WriteErrorLine(string value)
    {
        _sb.AppendLine("ERROR: " + value);
    }

    public override void WriteLine(string value)
    {
        _sb.AppendLine(value);
    }

    public override void WriteVerboseLine(string message)
    {
        _sb.AppendLine("VERBOSE: " + message);
    }

    public override void WriteWarningLine(string message)
    {
        _sb.AppendLine("WARNING: " + message);
    }

    public override void WriteProgress(long sourceId, ProgressRecord record)
    {
        return;
    }

    public string Output
    {
        get
        {
            return _sb.ToString();
        }
    }

    public override Dictionary<string, PSObject> Prompt(string caption, string message, System.Collections.ObjectModel.Collection<FieldDescription> descriptions)
    {
        throw new NotImplementedException();
    }

    public override int PromptForChoice(string caption, string message, System.Collections.ObjectModel.Collection<ChoiceDescription> choices, int defaultChoice)
    {
        throw new NotImplementedException();
    }

    public override PSCredential PromptForCredential(string caption, string message, string userName, string targetName, PSCredentialTypes allowedCredentialTypes, PSCredentialUIOptions options)
    {
        throw new NotImplementedException();
    }

    public override PSCredential PromptForCredential(string caption, string message, string userName, string targetName)
    {
        throw new NotImplementedException();
    }

    public override PSHostRawUserInterface RawUI
    {
        get { return null; }
    }

    public override string ReadLine()
    {
        throw new NotImplementedException();
    }

    public override System.Security.SecureString ReadLineAsSecureString()
    {
        throw new NotImplementedException();
    }
}

The core of our implementation lies in the fact that all Write* methods emit their data to a StringBuilder instance that aggregates all output sent to the host. This is the data that gets retrieved by our InvokePS method on the last line:

return ((RubyPSHostUserInterface)host.UI).Output;

Notice this isn't the absolute end of host-level extensibility in PowerShell. A PSHostUserInterface class can point at a PSHostRawUserInterface object that controls host window characteristics (such as the size, position and title of the window). Actually it would be interesting to implement this one as well in order to provide an accurate BufferSize, which PowerShell uses to control the maximum length of individual lines before wrapping to the next line. The reason this would be a good idea is that screen-scraping Ruby programs shouldn't be subject to different wrapping behavior depending on the hosting command window (which would cause programs to behave differently depending on where they run). Ideally there would be no wrapping at all (letting the DLR IronRuby command-line host deal with wrapping when printing data to the screen). I'll leave this exercise to the reader.
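
For those who want to give it a try, a minimal raw UI implementation could look something like the sketch below. The only really interesting member is BufferSize; everything else is stubbed out, and the (very wide) buffer dimensions are just an assumption to effectively disable wrapping:

internal class RubyPSHostRawUserInterface : PSHostRawUserInterface
{
    // A very wide buffer so the PowerShell formatter doesn't wrap lines;
    // the dimensions below are arbitrary assumptions.
    private Size _bufferSize = new Size(512, 3000);

    public override Size BufferSize
    {
        get { return _bufferSize; }
        set { _bufferSize = value; }
    }

    // Everything below is stubbed out since we never run interactively.
    public override ConsoleColor BackgroundColor { get { return ConsoleColor.Black; } set { } }
    public override ConsoleColor ForegroundColor { get { return ConsoleColor.White; } set { } }
    public override Coordinates CursorPosition { get { return new Coordinates(0, 0); } set { } }
    public override int CursorSize { get { return 1; } set { } }
    public override Coordinates WindowPosition { get { return new Coordinates(0, 0); } set { } }
    public override Size WindowSize { get { return _bufferSize; } set { } }
    public override Size MaxPhysicalWindowSize { get { return _bufferSize; } }
    public override Size MaxWindowSize { get { return _bufferSize; } }
    public override string WindowTitle { get { return "RubyPSHost"; } set { } }
    public override bool KeyAvailable { get { return false; } }

    public override void FlushInputBuffer() { }
    public override KeyInfo ReadKey(ReadKeyOptions options) { throw new NotImplementedException(); }
    public override BufferCell[,] GetBufferContents(Rectangle rectangle) { throw new NotImplementedException(); }
    public override void ScrollBufferContents(Rectangle source, Coordinates destination, Rectangle clip, BufferCell fill) { }
    public override void SetBufferContents(Coordinates origin, BufferCell[,] contents) { }
    public override void SetBufferContents(Rectangle rectangle, BufferCell fill) { }
}

With something like this in place, the RawUI property on RubyPSHostUserInterface would return an instance of this class instead of null.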

Hooking it up

All of the above has been implemented in a separate strong-named Class Library which I'm just referencing in the IronRuby.Libraries project. This is actually very quick-and-dirty, making IronRuby directly dependent on our assembly and, by extension, on Windows PowerShell. A way around this would be to load the assembly dynamically, possibly based on an environment variable. There are lots of possibilities here, which we consider just an implementation detail for now. The only thing left to do is to call our InvokePS method, which requires some conversions between System.String and MutableString:

[RubyMethod("`", RubyMethodAttributes.PrivateInstance)]
[RubyMethod("`", RubyMethodAttributes.PublicSingleton)]
public static MutableString ExecuteCommand(CodeContext/*!*/ context, object self, [NotNull]MutableString/*!*/ command) { 
    return MutableString.Create(RubyToPS.InvokePS(command.ConvertToString()));
}
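
If one really wanted to avoid the hard reference, the dynamic loading alternative hinted at above could replace the body of ExecuteCommand with a late-bound call along the lines of the sketch below. The environment variable name is hypothetical, the code assumes a using directive for System.Reflection, and it also assumes RubyToPS lives in the global namespace:

// Hypothetical environment variable pointing at the bridge assembly.
string path = Environment.GetEnvironmentVariable("IRONRUBY_BACKTICK_HOST");
Assembly bridge = Assembly.LoadFrom(path);

// Late-bound call to RubyToPS.InvokePS, avoiding a compile-time dependency.
Type rubyToPS = bridge.GetType("RubyToPS", true /* throwOnError */);
string output = (string)rubyToPS.InvokeMember(
    "InvokePS",
    BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Static,
    null, null, new object[] { command.ConvertToString() });

return MutableString.Create(output);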

That's it! Here's the result:

image

Note: The \r\n insertions in the output for display by Ruby's console cause things to wrap a bit nastily given the default buffer width of 80 characters. I've adjusted it to 83 characters to make this render correctly. With a smart "raw UI host" one could eliminate some of the issues here; however, the internal contents of the string are more important, since the app will likely rely on those (otherwise you'd simply run an interactive PowerShell shell, wouldn't you?). Just as one sample, here's the output of the each_line iterator:

image

Does look an awful lot like PowerShell, doesn't it?

Cheers!


In a reaction to my post on LINQ to MSI yesterday, Hal wrote this:

I don't know enough about the dev side to know if this is a stupid question or not but here goes: Would I be able to take advantage of LINQ to MSI (or LINQ in general from a wider point-of-view) from within PowerShell?  I know someone made an MSI snapin but I seem to recall it being a pretty simple thing.  Having the ability for admins to query and work with MSI packages seems like it could be awfully useful, and the point of not learning yet another SQL variant rings true for everyone, not just developers.  :)

Obviously not a stupid question at all (there are only stupid answers, to start with a cliché for once :-)). Having LINQ capabilities in PowerShell is definitely something I've given some thought; it was actually one of my fun projects a few months back, but there are quite a few challenges associated with it. However, Hal's comment made me think about it a bit more, so I mixed in another piece of magic called "Dynamic LINQ". Let's take a look at an experimental journey through "LINQ in PowerShell" integration.

 

There's no such thing as a pipe...

Well, at least there's no unique definition for it. Pipelines are found in various corners of the computer science landscape but two major implementations stand out:

  • First of all there's the pipe model employed by shells with typical samples in the UNIX world, the DOS shell and obviously Windows PowerShell. This kind of pipe model works in a push-like eager fashion: the source at the left provides data that's pushed through the pipe that acts as a filtering (e.g. PS where) and transformation mechanism (e.g. PS select). The fact data appears at the end of it is the effect of pushing data into it at the front. It's typically eager because sending the chained set of commands that form the pipeline to the command processor triggers execution immediately, in a left-to-right fashion.
  • On the other side, there's a lazy variant of the pipe model used by the LINQ query comprehension monadic model. Data flows in at the left again but it doesn't start to flow till the pipeline participant on the right sucks data out of it. So ultimately a chain of query operators pulls data from the source, starting all the way from the right. This laziness makes LINQ stand out since no more data fetching work is done than strictly needed (e.g. if you do a Take(5) on a sequence of 1,000 items, no more than 5 items will be fetched).

Two different models of pipelines that prove hard to unify. However, when thinking about LINQ in PowerShell, it would be handy to leverage existing idioms rather than creating a whole new query language inside the language, although that would definitely work too as long as it feels natural enough. Dynamic LINQ provides a middle ground: the operators (Where, Select, OrderBy, etc.) are still implemented as method calls, while their arguments are based on a textual expression language.

 

Dynamic LINQ

Scott blogged about Dynamic LINQ a while ago in his Dynamic LINQ (Part 1: Using the LINQ Dynamic Query Library) post. In essence, Dynamic LINQ talks to any IQueryable-capable LINQ implementation and allows you to write things like:

nw.Products.Where("UnitPrice > 100").Select("New(ProductName As Name, UnitPrice As Price)");

The string islands in here are converted into expression trees at runtime and executed against the IQueryable data source. Notice that dynamic types will be generated on the fly as well; in the sample above, a type with two properties, Name and Price, is created by the projection. This moves the boundary of runtime versus compile time a bit further: the expression strings become expression trees, which in turn get translated into the back-end language by the provider being targeted (SQL, LDAP, CAML, WS calls, etc).
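
The expression language also supports parameterized placeholders (@0, @1, ...), so values don't have to be baked into the query string. Purely as a sketch against the same Northwind context as above, assuming the Where, OrderBy and Select overloads from the sample library:

// Values are supplied separately and referenced by position in the expression string.
var expensive = nw.Products
    .Where("UnitPrice > @0", 100M)
    .OrderBy("UnitPrice descending")
    .Select("New(ProductName As Name, UnitPrice As Price)");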

What you need to get started with this post is the following download: C# Dynamic Query Library (included in the \LinqSamples\DynamicQuery directory)

 

Encapsulating a data source: Northwind strikes again

Moving the boundary from compile time to run time is a good thing in quite a few cases, but extremes are rarely the best choice. Going fully statically typed won't work here: PowerShell works with an interpreter and lots of runtime supporting features (such as ETS), and a good thing it does; you don't sell compilers to IT Pros. Now, for our LINQ mission, complete dynamic typing wouldn't work that well either: we're going to access a data store which has rich data with specific type information. We'd better encapsulate this so that our PowerShell scripts can take advantage of it. For example, a SQL database table produces a strongly-typed entity class, which is precisely what LINQ to SQL's sqlmetal tool (or the equivalent designer) does.

However, I do agree that for some types of data sources a more ad hoc access mechanism is more appropriate than for others. Ad hoc here doesn't just point at the query capability (after all, we want to realize just that) but also at the burden you have to go through to get access to data. I'd categorize SQL under the ad hoc extremes: SQL has been so easy to access (SQL syntax, SQLCMD or OSQL tools) that it's a pity we'll have to create an entity type upfront to access any table whatsoever. But still, if you want ad hoc data access, there's always regular SQL (which you'll use for dynamism at the cost of runtime types representing the data). On the other side there are things like AD, where the schema rarely changes and the entities could be part of a library that exposes all of the entities that ship with AD. Once that one's loaded you have virtually any (and strongly-typed) ad hoc data capability through LINQ. After all, it depends on the flux of the data source. Requiring new types every time a SQL database schema changes is definitely overhead, but as mentioned, for things like AD and e.g. MSI (which has fixed tables) that would be less of a stumbling block.

Let's go for a LINQ to SQL sample anyway despite all of this philosophical fluff, so create a new Class Library project in VS and add a LINQ to SQL Classes file to it:

image

Drag and drop all of the tables from the Northwind database from the Server Explorer to the designer and compile the assembly. Data access made easy - that's what LINQ's all about!

image

This can actually already be used from PowerShell:

image

Since the context object is just an object, you can create an instance of it like this:

[System.Reflection.Assembly]::LoadFile("C:\temp\LINQthroughPowerShell\Northwind\bin\Debug\Northwind.dll")
$ctx = new-object Northwind.NorthwindDataContext

Just load the DLL and use new-object. As you can see in the screenshot above, everything we need is available. But...

 

Breaking eagerness

IEnumerables make PowerShell pipes tick (amongst other "streams" of objects provided by participants in the pipeline). That's too eager. Let me show you: in the session above, type $ctx and see what happens:

image

Oops, all data is in the console already. Handy but wasteful. Why does this happen? Tables in LINQ to SQL are of type Table<T> (where T is the entity type) which are IQueryable<T> and thus IEnumerable<T>. PowerShell, eager as it is, enumerates over IEnumerables to get their results (which makes sense in this context). If you turn on logging on the LINQ to SQL data context, you'll see precisely what happens:

image

So, how can we break eagerness? We don't have such a thing as a lazy pipeline, so let's create one by having two markers: a cmdlet that establishes a "lazy context" and one that terminates it. Everything in between flowing through the pipe won't be an IEnumerable but a "captured IEnumerable", in our case more specifically an IQueryable, which we rewrite throughout the pipe by adding LINQ operators to it through Dynamic LINQ. I assume readers of this blog are familiar with cmdlet development; if not, check out my Easy Windows PowerShell cmdlet development and debugging post.

Below is the class that will encapsulate the IQueryable to suppress eager evaluation, creating our object traveling through the lazy context:

public class LinqQuery
{
    private IQueryable _queryable;

    internal LinqQuery(IQueryable queryable)
    {
        _queryable = queryable;
    }

    public string Expression
    {
        get
        {
            return _queryable.Expression.ToString();
        }
    }

    internal IQueryable Query
    {
        get
        {
            return _queryable;
        }
    }
}

and to establish a lazy context, we'll provide a New-Query cmdlet:

[Cmdlet("New", "Query")]
public class NewQueryCmdlet : Cmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public IQueryable Input { get; set; }

    protected override void ProcessRecord()
    {
        WriteObject(new LinqQuery(Input));
    }
}

And finally, to end the context, triggering evaluation, we'll have:

[Cmdlet("Execute", "Query")]
public class ExecuteQueryCmdlet : Cmdlet
{

     [Parameter(Mandatory = true, Position = 0)]
    public LinqQuery Input { get; set; }

    protected override void
ProcessRecord()
    {
        WriteObject(Input.Query);
    }
}

This last one is interesting in that it returns the IQueryable, which by means of the eager pipeline triggers execution (since LINQ providers reach out to the server fetching results upon calling GetEnumerator).

 

Dynamic LINQ query operator cmdlets

This barely needs any explanation whatsoever because of the simplicity of the Dynamic LINQ library. We just start by importing the namespace:

using System.Linq.Dynamic;

And start writing our first cmdlet for Where:

[Cmdlet("Where", "LinqObject")]
public class WhereCmdlet : Cmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public LinqQuery Input { get; set; }

    [Parameter(Mandatory = true, Position = 0)]
    public string Predicate { get; set; }

    protected override void ProcessRecord()
    {
        WriteObject(new LinqQuery(Input.Query.Where(Predicate)));
    }
}

Notice we're taking in a lazy LinqQuery object from the pipeline and emit a new LinqQuery object in ProcessRecord. This makes a LinqQuery object immutable, and one can define a LinqQuery object by means of the pipeline for later reuse (e.g. have a query object that establishes a view on data, and then write multiple queries on top of that). The Predicate parameter takes a string that represents the expression-language-based predicate for the query. Below you can see all of the extension methods brought in scope by Dynamic LINQ:

image

So, we'll create a cmdlet for all of those, which is absolutely straightforward. Actually, this could be done purely declaratively with my Method Invocation Cmdlet mechanism too, but let's not go there.
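
For instance, the Select and Take counterparts could look like the sketch below, simply mirroring the Where cmdlet above (the parameter names are mine):

[Cmdlet("Select", "LinqObject")]
public class SelectCmdlet : Cmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public LinqQuery Input { get; set; }

    [Parameter(Mandatory = true, Position = 0)]
    public string Projection { get; set; }

    protected override void ProcessRecord()
    {
        // Wrap the rewritten IQueryable again to stay inside the lazy context.
        WriteObject(new LinqQuery(Input.Query.Select(Projection)));
    }
}

[Cmdlet("Take", "LinqObject")]
public class TakeCmdlet : Cmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public LinqQuery Input { get; set; }

    [Parameter(Mandatory = true, Position = 0)]
    public int Count { get; set; }

    protected override void ProcessRecord()
    {
        WriteObject(new LinqQuery(Input.Query.Take(Count)));
    }
}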

 

Putting it to the test

Assume all cmdlets have been written (either do it yourself or download the code below). Time to do a sample:

image

It's that easy. And notice the SQL query being sent to the server, precisely what we were looking for, only fetching what we requested. I did split up the query across some lines to show the expression being generated behind the scenes, which is just a normal chain of methods calls captured by a dynamically generated expression tree at runtime. Without this splitting, writing a query is a one-liner:

image

The query syntax is different from PowerShell syntax (e.g. > instead of -gt), but that's merely a syntax issue which would be easy to change. And the cmdlet names are pretty long, but there are aliases (lwhere, lsort, ltake and lselect, for instance).

 

Implicit more intelligent lazy scoping

Actually what we established above is somewhat like the explicit scoping a using block provides, with its Dispose call at the end: in this case, we have a LinqQuery object created by new-query and disposed of by execute-query. The latter one we can make implicit if we assume that the end of a pipeline should trigger evaluation. That's debatable since it doesn't allow keeping a query object across multiple invocations. Depending on personal taste around explicitness, you might like this behavior and provide a "defer-query" opt-out cmdlet. A simple way to do this intelligent auto-evaluation is by using the PSCmdlet base class's MyInvocation property:

public abstract class LazyCmdlet : PSCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public LinqQuery Input { get; set; }

    protected abstract LinqQuery Process();

    protected override void  ProcessRecord()
    {
        LinqQuery result = Process();

        if (MyInvocation.PipelinePosition < MyInvocation.PipelineLength)
        {
            WriteObject(result);
        }
        else
        {
            WriteObject(result.Query);
        }
    }
}

Instead of having the Dynamic LINQ cmdlets override ProcessRecord directly, we let them implement Process and depending on the position in the invocation chain, our base class either returns the query object (avoiding eager expansion by the pipeline) or the IQueryable inside it, making it expand and fetch/yield results. Here's the corresponding class diagram:

image

and with some aliases you can now write:

image
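
To make this concrete, here's what the Where cmdlet from before might look like when reworked on top of LazyCmdlet; just a sketch, with the Input parameter now living on the base class:

[Cmdlet("Where", "LinqObject")]
public class WhereCmdlet : LazyCmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string Predicate { get; set; }

    protected override LinqQuery Process()
    {
        // The base class decides whether to emit the LinqQuery wrapper or the
        // underlying IQueryable, based on the position in the pipeline.
        return new LinqQuery(Input.Query.Where(Predicate));
    }
}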

 

Download it

If you want to play with this, you can get the code here: LinqThroughPowerShellProvider.cs. It doesn't include the Dynamic LINQ codebase, which you can get from C# Dynamic Query Library (included in the \LinqSamples\DynamicQuery directory).

Build instructions:

  1. Create a new Class Library project.
  2. Add a reference to System.Management.Automation.dll (from %programfiles%\Reference Assemblies).
  3. Add a reference to System.Configuration.Install.dll.
  4. Add LinqThroughPowerShellProvider.cs to it.
  5. Add Dynamic.cs from the Dynamic LINQ library to it.
  6. Build it.

Install instructions:

  1. Open an elevated prompt and go to the bin\Debug build output folder.
  2. Execute installutil -i <name of the dll>

Run instructions:

  1. Open Windows PowerShell.
  2. Execute add-pssnapin LTP
  3. Play around with samples :-). You can use any LINQ provider with IQueryable support (e.g. LINQ to SQL, AD, SharePoint, etc).

Have fun!


Two weeks ago I did a little tour through Europe spreading the word on a couple of our technologies including Windows PowerShell 2.0. In this blog series I'll dive into a few features of Windows PowerShell 2.0. Keep in mind though it's still very early and things might change towards RTW - all samples presented in this series are based on the CTP which is available over here.

 

Introduction

Previously in this series we covered new scripting capabilities with script cmdlets, script internationalization and a few language enhancements in Windows PowerShell 2.0. But writing scripts is just one piece of the puzzle: how do you debug them when something isn't quite right? To answer that question, Windows PowerShell 2.0 introduces script debugging capabilities.

 

Set-PsDebug

The Windows PowerShell debugging story embodies a series of features that cooperate with each other. First we have some configuration options using Set-PsDebug. This is the cmdlet you'll use to configure the debugging options of the system. There are a few configuration options:

  • -off: turns script debugging off
  • -trace: specifies a trace level, where 0 is like -off, 1 traces script execution on a line-per-line basis, 2 does the same but also traces variable assignment and function calls
  • -strict: like VB's Option Strict, makes the debugger throw an exception if a variable is used before being assigned to

Below is a run showing some Set-PsDebug options:

image

Notice all debugging output triggered by the trace level set through Set-PsDebug is prefixed with DEBUG. In order to write to the debug output yourself, there's Write-Debug which I'll leave as an exploration for the reader.

 

Working with breakpoints

Where it really gets interesting is the concept of breakpoints which are "points" where execution is "broken". In PowerShell that corresponds to the following:

  • A line (and column) number in a certain script;
  • Calls to a specified function;
  • Invocations of a specified command;
  • Variable access.

Once we have specified where, we need to focus on what to do when the breakpoint is hit. When no action is specified, the shell will spawn a sub-shell that has access to the current state of the execution so that variables can be inspected and other actions can be taken during debugging. Alternatively, one can specify a script block as an action.

Enough theory, let's map those concepts on cmdlets. Breakpoints in PowerShell are called PSBreakpoints, so let get-command be our guide:

image

It obviously all starts with New-PSBreakpoint and all other cmdlets are self-explanatory. Time to show a few uses of breakpoints. First, create a simple script called debug.ps1:

function bar {
   $a = 123
   if ($a -gt 100)
   {
      $a
      foo
   }
}

function foo {
   $a = 321
   Write-Host $a
}

"Welcome to PowerShell 2.0 Script Debugging!"
bar

Invoking it should produce no surprises:

image

First we'll set a breakpoint on a specific line of the script using New-PSBreakpoint -Script debug.ps1 -Line 16 and re-run the script. Notice - with tracing on to show line info of the script executing - we're breaking on the call to bar:

image

Also notice the two additional > signs added to the prompt below. This indicates we've entered a nested debugging prompt. Now we need to control the debugger to indicate what we want to do. For that purpose there are a few Step-* cmdlets, as shown below:

image

With Step-Into you simply go to the next statement, possibly entering a function call. With Step-Over you do the same, but you "step over" function calls straight to the line below the call. Step-Out is used to exit from a breakpoint and let the script continue to run till the next breakpoint is hit (or till it completes). A quick run:

image

So far we've been stepping through the code line-by-line. Notice the line numbers being shown next to the DEBUG: prefix when tracing is enabled. The second DEBUG: line shows the output of the Step-Into command, showing where we'd end up next (a preview of the next line). Now we're inside the foo function call, but you might wonder how we got there and which functions have been called before: enter Get-PsCallstack:

image

From the original prompt (0), we executed debug.ps1, which called into bar and foo subsequently to end up in the nested debugger prompt. While debugging you'll obviously want to investigate the system, for example to see what $a contains, so you can simply print the variable. Finally, we continue to step and exit the nested prompt once the script has completed:

image

Time for some bookkeeping: let's get rid of this breakpoint. Easy once more, using Remove-PSBreakpoint:

image

To illustrate a few other concepts, we'll set a breakpoint on a function, on a command invocation and on variable access:

image

Re-run the script and watch out. Here's the output - we break four times: two variable $a assignments, one foo call and one call to Write-Host:

image

Notice the use of exit to escape from the nested prompt and to make the script execution continue to the next breakpoint. An alternative would be to use Step-Out. The variable-assignment breakpoint option is especially attractive because in lots of cases you see the state of a variable being changed and you simply want to trace back where the changes are happening.

Other stuff you might want to take a look into includes the -Action parameter to New-PSBreakpoint, the ability to clone breakpoints using -Clone, enabling/disabling breakpoints and the HitCount property of breakpoints.

For more information on debugging, simply take a look at get-help about_debug.

Happy debugging!


Two weeks ago I did a little tour through Europe spreading the word on a couple of our technologies including Windows PowerShell 2.0. In this blog series I'll dive into a few features of Windows PowerShell 2.0. Keep in mind though it's still very early and things might change towards RTW - all samples presented in this series are based on the CTP which is available over here.

 

Introduction

After a couple of language-related features, we move into infrastructure in this post. Enter the world of remoting in Windows PowerShell 2.0, at least partially. So what's up? The Universal Code Execution Model, to which the Remoting features belong, is one of the core enhancements of Windows PowerShell 2.0 allowing script to be run either locally or remotely in different modes:

  • 1-to-many "fan-out" - execute script on a (large) number of machines (e.g. web farms)
  • Many-to-1 "fan-in" - delegating administration of a server by hosting PowerShell inside the service
  • 1-on-1 "interactive" - managing a server remotely much like "secure telnet"

In addition to this, other features are part of the Universal Code Execution Model:

  • Restricted Runspaces - the idea of having a runspace that's restricted in what it can do (e.g. concerning the operations and the language used)
  • Mobile Object Model - the plumbing that makes it possible to have objects travel around the network (i.e. serialization and deserialization infrastructure)
  • Eventing - adding the concept of events to PowerShell manageability, allowing actions to be taken when certain events occur
  • Background jobs - running commands (cmdlets, scripts) in the background asynchronously

For the remote execution (remote meaning across runspace boundaries), Windows PowerShell uses the WS-Management (WS-MAN) protocol, which is enabled by the WinRM service:

image

The nice thing about using web services is their firewall friendliness. However, in order to enable PowerShell to work with it, one needs to run a script first: $pshome\Configure-WSMAN.ps1. It opens the required ports, checks that the service is installed and executes a set of winrm configuration commands that enable endpoints.

 

Background jobs

We'll stick with background jobs for now. There are 6 cmdlets to manage background jobs, known by the PSJob noun in PowerShell 2.0 speak:

image

What's better to start with than Start-PSJob? Here's the syntax:

PS C:\temp> start-psjob -?

NAME
    Start-PSJob

SYNOPSIS
    Creates and starts a Windows PowerShell background job (PsJob) on a local or remote computer.

SYNTAX
    Start-PSJob [-Command] <String> [[-ComputerName] <String[]>] [-Credential <PSCredential>] [-Port <Int32>] [-UseSSL] [-ShellName <String>] [-ThrottleLimit <Int32>] [-InputObject <PSObject>] [-Name <String>] [<CommonParameters>]

    Start-PSJob [-Command] <String> [[-Runspace] <RemoteRunspaceInfo[]>] [-ThrottleLimit <Int32>] [-InputObject <PSObject>] [-Name <String>] [<CommonParameters>]

Notice the synopsis: on a local or remote computer. This is where remoting enters the picture, with the concept of a remote runspace. We won't go there though; let's stick with local execution and start a command:

start-psjob "start-sleep 30"

This will show the following:

image

Normally, "start-sleep 30" would block the interactive console for 30 seconds (feel free to try). However, now we have sent off the command to the background, in a session with Id 1. The way this works roughly is by having runspaces and communication channels between them to send commands and receive data. The fact data is available is indicated by the HasMoreData property on the job. Without going in too much details, running commands remotely follows the same idea and results can be streamed back from the server to the client so that you can retrieve results piece-by-piece.

Back to our sample now. Of course we can stop a background job by using stop-psjob:

image

The sample above also shows waiting for a PSJob to complete when you need to get the results at that particular point in time, e.g. after having done some more foreground work. Notice that the wait-psjob above blocks for the remainder of the 30 seconds while the background job is completing. Ultimately it returns like this:

image

 

Where's my data, dude?

(Added a comma to disambiguate with some other Microsoft product :-)). Having background jobs is one thing, but getting results back is another thing. For Parallel Extensions folks, it's like drawing the line between a Task and a Future<T>. So dude, where's my cup of T? The answer lies in the difference between wait-psjob and receive-psjob. While wait-psjob simply waits for a job to finish, receive-psjob receives data from it. What's really happening is that the foreground runspace talks to the background session to get data back, which travels across boundaries (whether that boundary is on the local machine or across the network), cf. the Mobile Object Model.

image

An interesting thing to look at is the get-member output for these objects, more specifically the NoteProperty members on it:

image

This is where you see the objects are really deserialized across boundaries, which is one of the tasks of the Mobile Object Model. For example, the PSIPHostEntry shows the origin of the object which is particularly useful when working with remoting. In this context notice that a background job can spawn other background jobs by itself, meaning objects might be aggregated from various sources before they travel your way.

Another thing to realize is that data is streaming in. Assume you're asking for a bunch of results to come in from a remote server. These results are typically emitted by the pipeline object by object (unless a cmdlet returns an array of objects or the like, which, depending on how the result is returned, can look to the pipeline like one big object), so it makes sense to get the current results, wait for new ones to be produced and then get the subsequent ones. Essentially the pseudo-algorithm is:

while ($job.HasMoreData)
{
    receive-psjob $job
    # do some other stuff
}

Here's a concrete sample:

image

The first time I called receive-psjob only the "get-process | select -f 5" pipeline would have yielded results, so I receive that data while the HasMoreData flag is still set to true. About 30 seconds later, I call receive-psjob $bar again. By then the results of "get-service | select -f 5" have come in too, and HasMoreData indicates there's nothing more to come (the State indicates the background job has completed).

Enjoy your dream-PSJob!


Two weeks ago I did a little tour through Europe spreading the word on a couple of our technologies including Windows PowerShell 2.0. In this blog series I'll dive into a few features of Windows PowerShell 2.0. Keep in mind though it's still very early and things might change towards RTW - all samples presented in this series are based on the CTP which is available over here.

 

Introduction

This time we'll take a brief look at a few language enhancements in Windows PowerShell 2.0. There are three such enhancements that deserve a little elaboration at the time of writing:

  • Splat - 'splatting' of a hashtable as input to a cmdlet invocation
  • Split - splitting strings
  • Join - the reverse of split

 

Splat

Splatting allows the entries of a hash-table to be used in the invocation of a cmdlet - more specifically, keys become named parameters and values become input to those parameters. Here's a sample:

$procs = @{name="notepad","iexplore"}
get-process @procs

And the result looks like this:

image

Of course multiple parameters can be specified at once (that's the whole point of the hashtable anyhow):

$gm = @{memberType="ScriptProperty","Property";name="[a-d]*"}
get-process @gm

image

In other words, invocation parameterization information can now be kept and passed around as data.

 

Split and join

Split and join are fairly trivial in fact. These are the equivalents of System.String's split and join operations but now exposed as language-integrated operators.

"bart","john" -join ","
"bart,john" -split ","

image

Simple but oh so handy :-). Have fun!


Two weeks ago I did a little tour through Europe spreading the word on a couple of our technologies including Windows PowerShell 2.0. In this blog series I'll dive into a few features of Windows PowerShell 2.0. Keep in mind though it's still very early and things might change towards RTW - all samples presented in this series are based on the CTP which is available over here.

 

Introduction

Imagine you're working for a company that operates in different countries or regions with different languages. Or you're creating a product that will be used by customers around the globe. Hard-coding messages in one language isn't likely going to be the way forward in such a case. Unfortunately, with the first release of Windows PowerShell, copy-paste localization of scripts was all too common. In this post we'll take a look at the Windows PowerShell 2.0 Script Internationalization feature.

 

String tables

In order to allow for localization, string tables are used quite often. The idea of a string table is to have key-value pairs that contain the (to-be) localized strings in order to separate the logic from the real string messages. Windows PowerShell 2.0 has this new cmdlet called ConvertFrom-StringData which is described as: "Converts a string containing one or more "name=value" pairs to a hash table (associative array)." If you read a little further in the get-help output you'll see the following:

The ConvertFrom-StringData cmdlet is considered to be a safe cmdlet that can be used in the DATA section of a script or function. When used in a DATA section, the contents of the string must conform to the rules for a DATA section. For details, see about_data_section.

Data sections are new to Windows PowerShell 2.0 and deserve a post on their own. For the purpose of this post, it suffices to say that a data section is a section used in script that can only contain data-operations and therefore it only supports a subset of the PowerShell language.

Let's use ConvertFrom-StringData on its own for now:

image

In here I'm using a so-called "here-string" that spans multiple lines, each line containing a key = value pair.

 

Localizable scripts

Time to put the pieces together and create a localizable script:

Data msgTable
{
ConvertFrom-StringData @'
    helloWorld = Hello, World :-).
    errorMsg = Something went horribly wrong :-(.
'@
}

Write-Host $msgTable.helloWorld
Throw $msgTable.errorMsg

Here's the result:

image

In the fragment above, notice the use of the Data keyword to denote a data section. In the remainder of the script, $msgTable is used as the variable to denote the hash table created by the ConvertFrom-StringData invocation in the data section.

 

Localized string tables

We already achieved some decoupling between the code and the messages, simply by putting the messages in a separate table. Now we have to blend in the actual culture of the system, which is now exposed as $UICulture:

image

We don't need to use this variable directly though. Using the new Import-LocalizedData cmdlet we can make PowerShell search for the right string table by investigating the directory structure. The idea is to have .psd1 files (a new extension) that contain localized string tables in subdirectories that denote the culture specified by language code and region code:

C:\temp\PS2\I18N\demo.ps1
C:\temp\PS2\I18N\nl-BE\demo.psd1
C:\temp\PS2\I18N\fr-FR\demo.psd1

Let's create the nl-BE\demo.psd1 file:

ConvertFrom-StringData @'
    helloWorld = Hallo, Wereld!
    errorMsg = Oeps! Dit ging vreselijk fout.
'@

Just copy the contents of the data section to a separate .psd1 file and translate it. Such files are "data files" (hence the d in the name) and substitute the contents of a data section. This doesn't happen magically of course, we need to call Import-LocalizedData in our script:

Data msgTable
{
ConvertFrom-StringData @'
    helloWorld = Hello, World :-).
    errorMsg = Something went horribly wrong :-(.
'@
}

Import-LocalizedData -bindingVariable msgTable

Write-Host $msgTable.helloWorld
Throw $msgTable.errorMsg

Import-LocalizedData extracts the $UICulture and tries to find the right .psd1 file. When found, the contents of the file are assigned to the binding variable which points at a data section.

Now when we run on an nl-BE machine, we'll see the following:

image

Enjoy!


Two weeks ago I did a little tour through Europe spreading the word on a couple of our technologies including Windows PowerShell 2.0. In this blog series I'll dive into a few features of Windows PowerShell 2.0. Keep in mind though it's still very early and things might change towards RTW - all samples presented in this series are based on the CTP which is available over here.

 

Introduction

In this first post we'll take a look at script cmdlets. Previously, in v1.0, the creation of cmdlets was an exclusive right for developers using any managed language (typically VB.NET or C#). I've been blogging about this quite a bit in the past, all the way back to May 2006.

To work around this limitation, lots of IT Pros have been writing PowerShell scripts that follow the naming pattern of cmdlets, but the invocation syntax of those is completely different from that of real cmdlets. For example, there's no built-in notion of mandatory parameters to scripts unless you write your own validation. Similarly, things such as -whatif and -confirm are not supported by these scripts.

Starting with PowerShell 2.0, the creation of cmdlets is now possible using script as well. In this post, I'll port my file hasher cmdlet to a script cmdlet.

 

The basics

Creating a script cmdlet starts by creating a script file, e.g. get-greeting.ps1. Below is the skeleton of a typical script cmdlet:

Cmdlet Verb-Noun
{
   Param(...)
   Begin
   {
   }
   Process
   {
   }
   End
   {
   }
}

The minimalistic script cmdlet would simply consist of a Process section, like this:

Cmdlet Get-Greeting
{
   Process
   {
      "Hello PowerShell 2.0!"
   }
}

In order to execute, save the file (e.g. get-greeting.ps1) and load it using . .\get-greeting.ps1. Now the get-greeting cmdlet is in scope and can be executed:

image

If the cmdlet is executed as part of a pipeline, which means that (possibly) multiple records flowing through the pipeline have to be processed, the Process block will be executed for each of those. However, the Begin and End blocks will be triggered only once. Before we can go there, let's take a look at parameterization.

 

Parameterization

Parameterization is maybe the most powerful thing about script cmdlets. It all happens in the Param section. Let's extend our greeting cmdlet with a parameter:

Cmdlet Get-Greeting
{
   Param([string]$name)
   Process
   {
      "Hello " + $name + "!"
   }
}

Perform the same steps to load the cmdlet and execute it, first without arguments, then with an argument:

image

The first invocation is not really what we had in mind. The parameter needs to be mandatory instead. In script cmdlets, this is easy to do, simply by adding an attribute to the parameter:

Cmdlet Get-Greeting
{
   Param([Mandatory][string]$name)
   Process
   {
      "Hello " + $name + "!"
   }
}

Now, PowerShell will enforce this declaration and require the parameter to be supplied:

image

Here you see how the PowerShell engine takes over from the script author. Beyond simple mandatory parameters, one can specify validation attributes as well, such as AllowNull, AllowEmptyString, AllowEmptyCollection, ValidateNotNull, ValidateNotNullOrEmpty, ValidateRange, ValidateLength, ValidatePattern, ValidateSet, ValidateCount and ValidateScript. The latter is interesting in that it is not available to managed code cmdlets for the time being - it allows a script block to be specified to carry out validation of the parameter's value (e.g. a script that validates ZIP codes or SSN numbers, which can be reused across multiple script cmdlets).
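As an illustration, a ValidateScript-based parameter could look something like the sketch below; the exact attribute syntax may differ slightly in the CTP, and Test-ZipCode is just a made-up cmdlet name:

Cmdlet Test-ZipCode
{
   # Hypothetical example: the script block rejects anything that isn't a 5-digit ZIP code
   Param([Mandatory][ValidateScript({ $_ -match "^\d{5}$" })][string]$zip)
   Process
   {
      "Valid ZIP code: " + $zip
   }
}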

 

The pipeline

Let's make our cmdlet play together with the pipeline now. We're already emitting data to the pipeline, simply through our "Hello ..." expression that produces a string. However, we'd like to grab data from the pipeline too. This can be done by binding a parameter to the pipeline:

Cmdlet Get-Greeting
{
   Param([ValueFromPipeline][Mandatory][string]$name)
   Process
   {
      "Hello " + $name + "!"
   }
}

image
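The invocation shown above boils down to piping a couple of strings into the cmdlet:

PS> "Bart", "John" | get-greeting
Hello Bart!
Hello John!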

Here the strings "Bart" and "John" are grabbed from the pipeline to be bound to the $name parameter. To show that Begin and End are only processed once, change the cmdlet as follows:

Cmdlet Get-Greeting
{
   Param([ValueFromPipeline][Mandatory][string]$name)
   Begin
   {
      Write-Host "People can come in through the pipeline"
   }
   Process
   {
      "Hello " + $name + "!"
   }
   End
   {
      Write-Host "Goodbye!"
   }
}

and the result is:

image
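In text form, piping the same two names in again produces something along these lines, with Begin and End each firing exactly once:

PS> "Bart", "John" | get-greeting
People can come in through the pipeline
Hello Bart!
Hello John!
Goodbye!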

Typically Begin and End are used to allocate and free shared resources for reuse during record processing.

 

Interacting with the pipeline processor

There's still more goodness. Using the $cmdlet variable inside the script cmdlet, one can extend the capabilities even more. To see what this can do, create a simple script cmdlet:

Cmdlet Get-Cmdlet
{
   Process
   {
      $cmdlet | get-member
   }
}

This is the result:

image

We won't be able to take a look at each of those, but let's play with a couple of them: ShouldProcess and WriteVerbose.

Cmdlet Get-Greeting -SupportsShouldProcess
{
   Param([ValueFromPipeline][Mandatory][string]$name)
   Begin
   {
      #Write-Host "People can come in through the pipeline"
   }
   Process
   {
      if ($cmdlet.ShouldProcess("Say hello", $name))
      {
         $cmdlet.WriteVerbose("Preparing to say hello to " + $name)
         "Hello " + $name + "!"
         $cmdlet.WriteVerbose("Said hello to " + $name)
      }
   }
   End
   {
      #Write-Host "Goodbye!"
   }
}

Notice the addition of -SupportsShouldProcess in the Cmdlet declaration. This tells the engine our cmdlet is capable of supporting -whatif and -confirm switches. Inside the implementation we add an if-statement that invokes ShouldProcess specifying the action description and the target ($name). The result is this:

image

Essentially, -whatif answers that ShouldProcess call with false, skipping the real invocation but still printing the actions and targets the operation would have triggered. When using -confirm, the user is prompted each time a ShouldProcess call is made (unless "Yes to All" or "No to All" is answered, obviously).
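For instance (illustrative invocations; the exact wording of the prompts and "What if" messages may differ):

"Bart", "John" | get-greeting -whatif    # prints the would-be action and target per name, no greetings
"Bart", "John" | get-greeting -confirm   # prompts before each ShouldProcess call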

When using the -verbose switch, the WriteVerbose calls are emitted to the console as well:

image

 

Porting the File Hasher cmdlet

Enough introductory information, let's do something real. Here's the script for my old file hasher cmdlet ported as a script cmdlet:

Cmdlet Get-Hash
{
   Param
   (
      [Mandatory][ValidateSet("SHA1","MD5")][string]$algo,
      [Mandatory][ValueFromPipelineByPropertyName][string]$FullName
   )
   Begin
   {
      $hasher = [System.Security.Cryptography.HashAlgorithm]::Create($algo)
   }
   Process
   {
      $fs = new-object System.IO.FileStream($FullName, [System.IO.FileMode]::Open)
      $bytes = $hasher.ComputeHash($fs)
      $fs.Close()

      $sb = new-object System.Text.StringBuilder
      foreach ($b in $bytes) {
         $sb.Append($b.ToString("x2")) | out-null
      }

      $sb.ToString()
   }
}

Pretty simple, isn't it? A few implementation highlights:

  • I have two parameters, comma-separated in the Param(...) section.
  • The first parameter should either be MD5 or SHA1 (case-insensitive), which I'm validating using ValidateSet. Anything but those two will fail execution of the cmdlet.
  • The second parameter is taken from the pipeline by property name. Notice FullName is a property on file objects, so this allows piping the output of get-childitem (dir) in a file system folder to the get-hash cmdlet.
  • Creation of the hasher algorithm is straight-forward but is done in the Begin section to allow reuse across multiple processed records.
  • The core of the implementation is simple: it opens the file specified by the $FullName parameter, feeds the stream into the hasher and turns the resulting bytes into their hexadecimal string representation. Notice the use of out-null to prevent the output of the $sb.Append call from bubbling up to the pipeline; only the $sb.ToString() result is reported.

Here's the result:

image
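The command behind that screenshot boils down to piping a directory listing into the cmdlet, roughly like this:

# Emits one hexadecimal hash string per *.cs file in the current folder
dir *.cs | get-hash -algo sha1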

Hashes are calculated for all *.cs files. I didn't extend the sample to print the file name (which would be simple to do) or to report it as part of the output (wrapping the file name and the hash result in an object, which is harder to do), but if you go back to my original file hasher cmdlet post, you'll see there's another option using the Extended Type System.
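If you did want the file name in there, a minimal (string-based) tweak would be to replace the final $sb.ToString() line in the Process block with something like:

      # Emit the file name together with its hash as a plain string
      $FullName + ": " + $sb.ToString()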

Enough for now. As you saw in this post, script cmdlets unlock an enormous potential to extend PowerShell with first-class citizen cmdlets simply by leveraging your scripting knowledge in PowerShell. Together with some other features such as script internationalization (coming up in this series) and packages and modules (not in the current CTP) this is just the tip of the iceberg of PS 2.0 Production Scripting.

Happy script-cmdlet-ing!


Back in manageability land. At TechEd EMEA Developers 2007 I delivered a talk on "Next-generation manageability: Windows PowerShell and MMC 3.0", covering the concept of layering graphical management tools (in this case MMC 3.0) on top of Windows PowerShell cmdlets (and providers). In this post, I'll cover this principle by means of a sample.

 

Introduction

It should be clear by now that Windows PowerShell is at the core of the next-generation manageability platform for Windows. First-class objects, ranging from .NET over COM to everything the Extended Type System can deal with (XML, ADSI, etc.), together with scripting support, allow people to automate complicated management tasks (and combined with the v2.0 features for remoting and eventing this will only get better). Part of this vision is to layer management UIs on top of Windows PowerShell, which opens the door to broader discoverability: explore functionality in the UI, manage it there, and learn how the same task would be done directly through the PowerShell CLI (command-line interface), possibly wrapping it in a script for reuse in automation scenarios. On the development side this is also very appealing because the UI is just a thin layer on top of the underlying cmdlet-based implementation, which allows for better testing.

To lay the foundation for this post, please make sure to read the following tutorials:

We'll combine the two in one solution to create a sample layered management tool.

 

Step 0 - Solution plumbing

While thinking about this post I was wondering what to use as the running sample. Task managers layered on get-process are boring, and the same goes for a Service Manager snap-in on top of get-service. Creating providers is too much to address in one post (my TechEd sample created a provider to talk to a SQL database, allowing you to cd to a table and dir it, exposing all of this to an MMC snap-in that hosted a Windows Forms DataGrid control). So I came up with the idea of writing a Tiny IIS Manager targeting IIS 7. This post assumes you've installed IIS 7 locally on your Windows Vista or Windows Server 2008 machine.

Before you start, make sure to run Visual Studio 2008 as administrator since we're going to launch Windows PowerShell loading a snap-in that requires administrative privileges.

Create a new solution called TinyIisManager:

image

Add two class library projects, one called TinyIisPS and another called TinyIisMMC. To configure the projects, follow my tutorials mentioned above:

Step 0.0 - Add the required references to the projects

This is how the end result should look:

image

Step 0.1 - Tweak the Debugger settings under the project properties

Again just the results (click to enlarge):

image image

Note: Make sure the paths to MMC and PS are set correctly on your machine. These settings won't work yet since we're missing the debug.* files (see below).

Step 0.2 - Add empty place-holders for the snap-ins (both PS and MMC)

Almost trivial to do if you've read the cookbook posts. Rename Class1.cs in the PS library to IisMgr.cs and add the following code:

image

Rename Class1.cs in the MMC library to IisMgr.cs and add the following code:

image

Step 0.3 - Build and register

Build both projects, open a Visual Studio 2008 Command Prompt running as administrator and cd into the bin\Debug folders for both projects to run installutil.exe against the created assemblies:

image
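In other words, something along these lines, run from each project's bin\Debug folder (assembly names assumed to match the project names):

installutil TinyIisPS.dll
installutil TinyIisMMC.dll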

Step 0.4 - Creating debugging files

Open Windows PowerShell, add the registered snap-in and export the console file to debug.psc1 under the TinyIisPS project root folder:

image
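The PowerShell side of this comes down to two commands, assuming the shell was started in the TinyIisPS project root folder:

# Load the snap-in registered in the previous step, then save the console configuration
Add-PSSnapin IisMgr
Export-Console .\debug.psc1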

Open MMC, add the registered snap-in (CTRL-M) and save the file as debug.msc under the TinyIisMMC project root folder:

image image

Don't worry about the empty node in the Selected snap-ins display - our constructor didn't set the node text (yet). Don't forget to close both MMC and Windows PowerShell.

Step 0.5 - Validate debugging

You should now be able to right-click either of the two projects and choose "Debug, Start new instance" to start a debugging session. Validate this works: the MMC snap-in should load and the PS snap-in should be available:

image image

You're now all set to start the coding.

 

Step 1 - Building the Windows PowerShell layer

Let's start at the bottom of the design: the Windows PowerShell layer that will do all the real work. To keep things simple, we'll just provide a few cmdlets, although bigger systems would benefit from providers too (so that you can navigate through an (optionally hierarchical) data store, e.g. to cd into a virtual folder of an IIS website). We'll write just three cmdlets:

  • get-site - retrieves a list of sites on the local IIS 7 web server
  • start-site - starts a site
  • stop-site - stops a site

Feel free to envision other cmdlets of course :-). The API we'll use to talk to IIS is the new Microsoft.Web.Administration assembly that ships with IIS 7, which can be found under %windir%\system32\inetsrv, so let's add a reference to it (make sure you're in the right project: TinyIisPS):

image

Import the Microsoft.Web.Administration namespace in IisMgr.cs and add the following cmdlet classes (for simplicity I stick them in the same file - not recommended for manageability of your source tree :-)):

[Cmdlet(VerbsCommon.Get, "site")]
public class GetSiteCmdlet : Cmdlet
{
    protected override void ProcessRecord()
    {
        using (ServerManager mgr = new ServerManager())
        {
            WriteObject(mgr.Sites, true);
        }           
    }
}

public abstract class ManageSiteCmdlet : Cmdlet
{
    protected ServerManager _manager;

    [Parameter(Mandatory = true, Position = 1, ValueFromPipelineByPropertyName = true)]
    public string Name { get; set; }

    protected override void BeginProcessing()
    {
        _manager = new ServerManager();
    }

    protected override void EndProcessing()
    {
        if (_manager != null)
            _manager.Dispose();
    }

    protected override void StopProcessing()
    {
        if (_manager != null)
            _manager.Dispose();
    }
}

[Cmdlet(VerbsLifecycle.Start, "site", SupportsShouldProcess = true)]
public class StartSiteCmdlet : ManageSiteCmdlet
{
    protected override void ProcessRecord()
    {
        Site site = _manager.Sites[ Name ];

        if (site == null)
        {
            WriteError(new ErrorRecord(new InvalidOperationException("Site not found."), "404", ErrorCategory.ObjectNotFound, null));
        }
        else if (site.State == ObjectState.Started || site.State == ObjectState.Starting)
        {
            WriteWarning("Can't start site.");
        }
        else if (ShouldProcess(site.Name, "Start"))
        {
            site.Start();
        }
    }
}

[Cmdlet(VerbsLifecycle.Stop, "site", SupportsShouldProcess = true)]
public class StopSiteCmdlet : ManageSiteCmdlet
{
    protected override void ProcessRecord()
    {
        Site site = _manager.Sites[ Name ];

        if (site == null)
        {
            WriteError(new ErrorRecord(new InvalidOperationException("Site not found."), "404", ErrorCategory.ObjectNotFound, null));
        }
        else if (site.State == ObjectState.Stopped || site.State == ObjectState.Stopping)
        {
            WriteWarning("Can't stop site.");
        }
        else if (ShouldProcess(site.Name, "Stop"))
        {
            site.Stop();
        }
    }
}

Just 80 lines of true power. Time for a quick check of the functionality. Run the TinyIisPS project under the debugger and play around a little with the cmdlets:

image
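A quick interactive test could look something like this ("Default Web Site" is just the typical out-of-the-box site name; substitute your own):

get-site | format-table Id, Name, State
stop-site -name "Default Web Site" -whatif
stop-site -name "Default Web Site"
start-site -name "Default Web Site"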

If you see messages like the one below, make sure you're running Visual Studio 2008 as an administrator, which causes the child Windows PowerShell debuggee process to be launched as administrator too:

image 

 

Step 2 - Building the graphical MMC layer on top of the cmdlets

Time to bump up our TinyIisMMC project. The first thing to do is to add a reference to the System.Management.Automation.dll assembly (the one used in the PS project to write the cmdlets) since we need to access the Runspace functionality in order to host Windows PowerShell in the context of our MMC snap-in:

image

Also add references to System.Windows.Forms (needed for some display) and Microsoft.Web.Administration (see the instructions above - similar to the PowerShell layer); we'll need the latter in order to use the objects returned by the PowerShell get-site cmdlet. Time to start coding again. Basically an MMC snap-in consists of:

  • The SnapIn class which acts as the root of the hierarchy; it adds nodes to its tree;
  • A tree of ScopeNode instances which get displayed in the tree-view;
  • Actions associated with the nodes;
  • View descriptions to render a node in the central pane.

We'll keep things simple and provide only the tree with a few actions and an HTML-based view on the item (which just loads the website - after tab-based browsing we now have tree-based browsing :-)). Let's start with the SnapIn class:

[SnapInSettings("{36D66A51-A9A4-4981-B338-B68D15068F5C}", DisplayName = "Tiny IIS Manager")]
public class IisMgr : SnapIn
{
    private Runspace _runspace;

    public IisMgr()
    {
        InitializeRunspace();

        this.RootNode = new SitesNode();
    }

    internal Runspace Runspace { get { return _runspace; } }

    private void InitializeRunspace()
    {
        RunspaceConfiguration config = RunspaceConfiguration.Create();

        PSSnapInException warning;
        config.AddPSSnapIn("IisMgr", out warning);

        // NOTE: needs appropriate error handling

        _runspace = RunspaceFactory.CreateRunspace(config);
        _runspace.Open();
    }

    protected override void OnShutdown(AsyncStatus status)
    {
        if (_runspace != null)
            _runspace.Dispose();
    }
}

In here, the core bridging with PowerShell takes place: we create a runspace (the space in which we run commands etc.) based on a configuration object that has loaded the IisMgr PowerShell snap-in created in the previous section. We also expose the runspace through an internal property so that we can reference it from the other classes used by the snap-in, such as SitesNode:

class SitesNode : ScopeNode
{
    public SitesNode()
    {
        this.DisplayName = "Web sites";
        this.EnabledStandardVerbs = StandardVerbs.Refresh;

        LoadSites();
    }

    protected override void OnRefresh(AsyncStatus status)
    {
        LoadSites();
        status.Complete("Loaded websites", true);
    }

    private void LoadSites()
    {
        this.Children.Clear();
        this.Children.AddRange(
            (from site in ((IisMgr)this.SnapIn).Runspace.CreatePipeline("get-site").Invoke()
             select new SiteNode((Site)site.BaseObject)).ToArray()
        );
    }
}

The constructor is easy: we add a display name to the node (no blankness anymore) and enable the "standard verb" Refresh (which will appear in the action pane). To handle Refresh, we override OnRefresh. Notice MMC 3.0 supports asynchronous loading (so the management console isn't blocked while an action is taking place), but let's not go there for now. In LoadSites the real work happens: we grab the Runspace through the internal property defined on the SnapIn, create a pipeline that simply contains get-site, and execute it by calling Invoke. This produces a collection of PSObject objects, which are wrappers (used by the Extended Type System) around the original objects (in our case Microsoft.Web.Administration.Site objects). Using a simple LINQ query we grab the results and wrap them in SiteNode objects (see below), which are added as the node's children.

class SiteNode : ScopeNode
{
    private Site _site;
    private Microsoft.ManagementConsole.Action _startAction;
    private Microsoft.ManagementConsole.Action _stopAction;
    private HtmlViewDescription _view;

    public SiteNode(Site site)
    {
        _site = site;

        this.DisplayName = site.Name;
        this.EnabledStandardVerbs = StandardVerbs.Properties | StandardVerbs.Refresh;

        _startAction = new Microsoft.ManagementConsole.Action() { Tag = "start", DisplayName = "Start" };
        this.ActionsPaneItems.Add(_startAction);
        _stopAction = new Microsoft.ManagementConsole.Action() { Tag = "stop", DisplayName = "Stop" };
        this.ActionsPaneItems.Add(_stopAction);

        Refresh();

        Microsoft.Web.Administration.Binding binding = _site.Bindings[0];
        _view = new HtmlViewDescription(new Uri(String.Format("{0}://{1}:{2}", binding.Protocol, binding.Host == "" ? "localhost" : binding.Host, binding.EndPoint.Port))) { DisplayName = "View site", Tag = "html" };

        this.ViewDescriptions.Add(_view);
    }

    protected override void OnAction(Microsoft.ManagementConsole.Action action, AsyncStatus status)
    {
        switch (action.Tag.ToString())
        {
            case "start":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("start-site -name \"" + _site.Name + "\"").Invoke();
                break;
            case "stop":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("stop-site -name \"" + _site.Name + "\"").Invoke();
                break;
        }

        Refresh();
    }

    protected override void OnAddPropertyPages(PropertyPageCollection propertyPageCollection)
    {
        propertyPageCollection.Add(new PropertyPage() {
            Title = "Website",
            Control = new PropertyGrid() {
                SelectedObject = _site,
                Dock = DockStyle.Fill
            }
        });
    }

    protected override void OnRefresh(AsyncStatus status)
    {
        Refresh();
    }

    private void Refresh()
    {
        _startAction.Enabled = _site.State == ObjectState.Stopped;
        _stopAction.Enabled = _site.State == ObjectState.Started;
    }
}

That's basically it. In the constructor we define a couple of custom actions, "Start" and "Stop". We enable the verbs for Properties and Refresh and provide a basic implementation for those (for properties we rely on the PropertyGrid control, although in reality you'd want a much more customized view on the data that hides the real underlying object model). We also add an HTML view description that points at the URL of the website itself (normally you'd use different types of view descriptions to show items under that particular node, e.g. virtual folders for the website, or a bunch of 'control panel style' configuration options, as in the real inetmgr.exe). Again, the logic to invoke cmdlets is very similar; we just add some parameterization:

        switch (action.Tag.ToString())
        {
            case "start":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("start-site -name \"" + _site.Name + "\"").Invoke();
                break;
            case "stop":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("stop-site -name \"" + _site.Name + "\"").Invoke();
                break;
        }

and this time no data is returned (strictly speaking that's not true since errors will flow back through the runspace - feel free to play around with this).

 

Step 3 - The result

Time to admire the result. Launch the MMC snap-in project under the debugger:

image  image

The full code is available over here. Usual disclaimers apply - this is nothing more than sample code...

Enjoy!

