February 2008 - Posts

Back in manageability land. At TechEd EMEA Developers 2007 I delivered a talk on "Next-generation manageability: Windows PowerShell and MMC 3.0", covering the concept of layering graphical management tools (in this case MMC 3.0) on top of Windows PowerShell cmdlets (and providers). In this post, I'll illustrate this principle by means of a sample.

 

Introduction

It should be clear by now that Windows PowerShell is at the core of the next-generation manageability platform for Windows. First-class objects, ranging from .NET through COM to everything the Extended Type System can deal with (XML, ADSI, etc.), together with scripting support, allow people to automate complicated management tasks (combined with the v2.0 features on remoting and eventing, this will only get better). Part of this vision is to layer management UIs on top of Windows PowerShell, which opens the door to broader discoverability: explore functionality in the UI, manage it over there, and learn how the task would be done through the PowerShell CLI (command-line interface) directly, possibly wrapping it in a script for reuse in automation scenarios. On the development side this is also very appealing because the UI is a thin layer on top of the underlying cmdlet-based implementation, which allows for better testing.

To lay the foundation for this post, please make sure to read the following tutorials:

We'll combine the two in a single solution to create a layered management sample.

 

Step 0 - Solution plumbing

While thinking about this post I was wondering what to use as the running sample. Task managers layered on get-process are boring, and the same goes for a Service Manager snap-in on top of get-service. Creating providers is too much to address in one post (my sample at TechEd created a provider to talk to a SQL database, allowing you to cd into a table and dir it, exposing all of this to an MMC snap-in that hosted a Windows Forms DataGrid control). So I came up with the idea of writing a Tiny IIS Manager targeting IIS 7. This post assumes you've installed IIS 7 locally on your Windows Vista or Windows Server 2008 machine.

Before you start, make sure to run Visual Studio 2008 as administrator since we're going to launch Windows PowerShell loading a snap-in that requires administrative privileges.

Create a new solution called TinyIisManager:

image

Add two class library projects, one called TinyIisPS and another called TinyIisMMC. To configure the projects, follow my tutorials mentioned above:

Step 0.0 - Add the required references to the projects

This is how the end result should look:

image

Step 0.1 - Tweak the Debugger settings under the project properties

Again just the results (click to enlarge):

image image

Note: Make sure the paths to MMC and PS are set correctly on your machine. These settings won't work yet since we're missing the debug.* files (see below).

Step 0.2 - Add empty place-holders for the snap-ins (both PS and MMC)

Almost trivial to do if you've read the cookbook posts. Rename Class1.cs in the PS library to IisMgr.cs and add the following code:

image
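
In case you want a head start, a minimal placeholder could look like the sketch below. The vendor and description strings are made up; the snap-in name IisMgr matters, since we'll load the snap-in by that name later on:

using System.ComponentModel;
using System.Management.Automation;

namespace TinyIisPS
{
    // Empty snap-in registration class; the cmdlets come in step 1.
    [RunInstaller(true)]
    public class IisMgr : PSSnapIn
    {
        public override string Name { get { return "IisMgr"; } }
        public override string Vendor { get { return "Bart"; } }
        public override string Description { get { return "Tiny IIS Manager cmdlets"; } }
    }
}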

Rename Class1.cs in the MMC library to IisMgr.cs and add the following code:

image
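
Similarly, a minimal MMC placeholder could look as follows (the installer class name is an assumption; the GUID is the one we'll keep using throughout this post):

using System.ComponentModel;
using Microsoft.ManagementConsole;

namespace TinyIisMMC
{
    // Empty snap-in; the node tree comes in step 2. We don't set any
    // node text yet, hence the blank entry in MMC's snap-in list.
    [SnapInSettings("{36D66A51-A9A4-4981-B338-B68D15068F5C}", DisplayName = "Tiny IIS Manager")]
    public class IisMgr : SnapIn
    {
    }

    // Registers the snap-in when running installutil against the assembly.
    [RunInstaller(true)]
    public class IisMgrInstaller : SnapInInstaller
    {
    }
}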

Step 0.3 - Build and register

Build both projects, open a Visual Studio 2008 Command Prompt running as administrator and cd into the bin\Debug folders for both projects to run installutil.exe against the created assemblies:

image

Step 0.4 - Creating debugging files

Open Windows PowerShell, add the registered snap-in (add-pssnapin IisMgr) and export the console file (export-console) to debug.psc1 under the TinyIisPS project root folder:

image

Open MMC, add the registered snap-in (CTRL-M) and save the file as debug.msc under the TinyIisMMC project root folder:

image image

Don't worry about the empty node in the Selected snap-ins display - our constructor didn't set the node text (yet). Don't forget to close both MMC and Windows PowerShell.

Step 0.5 - Validate debugging

You should now be able to right-click either of the two projects and choose "Debug, Start new instance" to start a debugging session. Verify everything is wired up correctly: the MMC snap-in should load and the PS snap-in should be available:

image image

You're now all set to start coding.

 

Step 1 - Building the Windows PowerShell layer

Let's start at the bottom of the design: the Windows PowerShell layer that will do all the real work. To keep things simple, we'll just provide a few cmdlets, although bigger systems would benefit from providers too (so that you can navigate through an (optionally hierarchical) data store, e.g. to cd into a virtual folder in an IIS website). We'll write just three cmdlets:

  • get-site - retrieves a list of sites on the local IIS 7 web server
  • start-site - starts a site
  • stop-site - stops a site

Feel free to envision other cmdlets of course :-). The API we'll use to talk to IIS is the new Microsoft.Web.Administration API of IIS 7, which can be found under %windir%\system32\inetsrv, so let's reference it (make sure you're under the right project: TinyIisPS):

image

Import the namespace Microsoft.Web.Administration in IisMgr.cs and add the following cmdlet classes (for simplicity I stick them in the same file - not recommended for the manageability of your source tree :-)):

[Cmdlet(VerbsCommon.Get, "site")]
public class GetSiteCmdlet : Cmdlet
{
    protected override void ProcessRecord()
    {
        using (ServerManager mgr = new ServerManager())
        {
            WriteObject(mgr.Sites, true);
        }           
    }
}

public abstract class ManageSiteCmdlet : Cmdlet
{
    protected ServerManager _manager;

    [Parameter(Mandatory = true, Position = 1, ValueFromPipelineByPropertyName = true)]
    public string Name { get; set; }

    protected override void BeginProcessing()
    {
        _manager = new ServerManager();
    }

    protected override void EndProcessing()
    {
        if (_manager != null)
            _manager.Dispose();
    }

    protected override void StopProcessing()
    {
        if (_manager != null)
            _manager.Dispose();
    }
}

[Cmdlet(VerbsLifecycle.Start, "site", SupportsShouldProcess = true)]
public class StartSiteCmdlet : ManageSiteCmdlet
{
    protected override void ProcessRecord()
    {
        Site site = _manager.Sites[ Name ];

        if (site == null)
        {
            WriteError(new ErrorRecord(new InvalidOperationException("Site not found."), "404", ErrorCategory.ObjectNotFound, null));
        }
        else if (site.State == ObjectState.Started || site.State == ObjectState.Starting)
        {
            WriteWarning("Can't start site.");
        }
        else if (ShouldProcess(site.Name, "Start"))
        {
            site.Start();
        }
    }
}

[Cmdlet(VerbsLifecycle.Stop, "site", SupportsShouldProcess = true)]
public class StopSiteCmdlet : ManageSiteCmdlet
{
    protected override void ProcessRecord()
    {
        Site site = _manager.Sites[ Name ];

        if (site == null)
        {
            WriteError(new ErrorRecord(new InvalidOperationException("Site not found."), "404", ErrorCategory.ObjectNotFound, null));
        }
        else if (site.State == ObjectState.Stopped || site.State == ObjectState.Stopping)
        {
            WriteWarning("Can't stop site.");
        }
        else if (ShouldProcess(site.Name, "Stop"))
        {
            site.Stop();
        }
    }
}

Just 80 lines of true power. Time for a quick check of the functionality. Run the TinyIisPS project under the debugger and play around a little with the cmdlets:

image

If you see messages like the one below, check that you're running Visual Studio 2008 as an administrator, which will launch the child Windows PowerShell debuggee process as administrator too:

image 

 

Step 2 - Building the graphical MMC layer on top of the cmdlets

Time to bump up our TinyIisMMC project. The first thing to do is to add a reference to the System.Management.Automation.dll assembly (the one used in the PS project to write the cmdlets) since we need to access the Runspace functionality in order to host Windows PowerShell in the context of our MMC snap-in:

image

Also add references to System.Windows.Forms (needed for the property page display later on) and Microsoft.Web.Administration (see instructions above - similar to the PowerShell layer). We'll need the latter in order to use the objects returned by the PowerShell get-site cmdlet. Time to start coding again. Basically an MMC snap-in consists of:

  • The SnapIn class which acts as the root of the hierarchy; it adds nodes to its tree;
  • A tree of ScopeNode instances which get displayed in the tree-view;
  • Actions associated with the nodes;
  • View descriptions to render a node in the central pane.

We'll keep things simple and provide only the tree with a few actions and an HTML-based view on the item (which just loads the website - after tab-based browsing we now have tree-based browsing :-)). Let's start with the SnapIn class:

[SnapInSettings("{36D66A51-A9A4-4981-B338-B68D15068F5C}", DisplayName = "Tiny IIS Manager")]
public class IisMgr : SnapIn
{
    private Runspace _runspace;

    public IisMgr()
    {
        InitializeRunspace();

        this.RootNode = new SitesNode();
    }

    internal Runspace Runspace { get { return _runspace; } }

    private void InitializeRunspace()
    {
        RunspaceConfiguration config = RunspaceConfiguration.Create();

        PSSnapInException warning;
        config.AddPSSnapIn("IisMgr", out warning);

        // NOTE: needs appropriate error handling

        _runspace = RunspaceFactory.CreateRunspace(config);
        _runspace.Open();
    }

    protected override void OnShutdown(AsyncStatus status)
    {
        if (_runspace != null)
            _runspace.Dispose();
    }
}

In here, the core bridging with PowerShell takes place: we create a runspace (the space in which we run commands, pipelines, etc.) based on a configuration object that has loaded the IisMgr PowerShell snap-in created in the previous section. We also expose the runspace through an internal property so that we can reference it from the other classes used by the snap-in, such as SitesNode:

class SitesNode : ScopeNode
{
    public SitesNode()
    {
        this.DisplayName = "Web sites";
        this.EnabledStandardVerbs = StandardVerbs.Refresh;

        LoadSites();
    }

    protected override void OnRefresh(AsyncStatus status)
    {
        LoadSites();
        status.Complete("Loaded websites", true);
    }

    private void LoadSites()
    {
        this.Children.Clear();
        this.Children.AddRange(
            (from site in ((IisMgr)this.SnapIn).Runspace.CreatePipeline("get-site").Invoke()
             select new SiteNode((Site)site.BaseObject)).ToArray()
        );
    }
}

The constructor is easy: we add a display name to the node (no blankness anymore) and enable the "standard verb" Refresh (which will appear in the action pane). To handle it, we override OnRefresh. Notice MMC 3.0 supports asynchronous operations (so the management console isn't blocked while an action is taking place) but let's not go there for now. In LoadSites the real stuff happens: we grab the Runspace through the internal property defined on the SnapIn, create a pipeline for the get-site command and execute it by calling Invoke. This produces a collection of PSObject objects, which are wrappers (used for the Extended Type System) around the original objects (in our case Microsoft.Web.Administration.Site objects). Using a simple LINQ query we grab the results and wrap them in SiteNode objects (see below) which are added as the node's children.

class SiteNode : ScopeNode
{
    private Site _site;
    private Microsoft.ManagementConsole.Action _startAction;
    private Microsoft.ManagementConsole.Action _stopAction;
    private HtmlViewDescription _view;

    public SiteNode(Site site)
    {
        _site = site;

        this.DisplayName = site.Name;
        this.EnabledStandardVerbs = StandardVerbs.Properties | StandardVerbs.Refresh;

        _startAction = new Microsoft.ManagementConsole.Action() { Tag = "start", DisplayName = "Start" };
        this.ActionsPaneItems.Add(_startAction);
        _stopAction = new Microsoft.ManagementConsole.Action() { Tag = "stop", DisplayName = "Stop" };
        this.ActionsPaneItems.Add(_stopAction);

        Refresh();

        Microsoft.Web.Administration.Binding binding = _site.Bindings[0];
        _view = new HtmlViewDescription(new Uri(String.Format("{0}://{1}:{2}", binding.Protocol, binding.Host == "" ? "localhost" : binding.Host, binding.EndPoint.Port))) { DisplayName = "View site", Tag = "html" };

        this.ViewDescriptions.Add(_view);
    }

    protected override void OnAction(Microsoft.ManagementConsole.Action action, AsyncStatus status)
    {
        switch (action.Tag.ToString())
        {
            case "start":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("start-site -name \"" + _site.Name + "\"").Invoke();
                break;
            case "stop":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("stop-site -name \"" + _site.Name + "\"").Invoke();
                break;
        }

        Refresh();
    }

    protected override void OnAddPropertyPages(PropertyPageCollection propertyPageCollection)
    {
        propertyPageCollection.Add(new PropertyPage() {
            Title = "Website",
            Control = new PropertyGrid() {
                SelectedObject = _site,
                Dock = DockStyle.Fill
            }
        });
    }

    protected override void OnRefresh(AsyncStatus status)
    {
        Refresh();
    }

    private void Refresh()
    {
        _startAction.Enabled = _site.State == ObjectState.Stopped;
        _stopAction.Enabled = _site.State == ObjectState.Started;
    }
}

That's basically it. In the constructor we define a couple of custom actions, Start and Stop. We enable the verbs for Properties and Refresh and provide some basic implementation for those (for properties we rely on the PropertyGrid control, although in reality you'd want a much more customized view on the data that hides the real underlying object model). We also add an HTML view description that points at the URL of the website itself (normally you'd use different types of view descriptions in order to show items under that particular node, e.g. virtual folders for the website, or a bunch of 'control panel style' configuration options, as in the real inetmgr.exe). Again, the logic to invoke cmdlets is very similar, we just add some parameterization:

        switch (action.Tag.ToString())
        {
            case "start":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("start-site -name \"" + _site.Name + "\"").Invoke();
                break;
            case "stop":
                ((IisMgr)this.SnapIn).Runspace.CreatePipeline("stop-site -name \"" + _site.Name + "\"").Invoke();
                break;
        }

and this time no data is returned (strictly speaking that's not true since errors will flow back through the runspace - feel free to play around with this).
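
If you do want to surface those errors, the pipeline exposes an error stream you can drain after invocation. A minimal sketch (how you report the error to the MMC user is up to you):

Pipeline pipeline = ((IisMgr)this.SnapIn).Runspace.CreatePipeline("stop-site -name \"" + _site.Name + "\"");
pipeline.Invoke();

// Anything the cmdlet reported through WriteError ends up in the error stream.
foreach (object error in pipeline.Error.ReadToEnd())
{
    // For illustration only; a real snap-in would show a message box or log this.
    Console.WriteLine(error);
}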

 

Step 3 - The result

Time to admire the result. Launch the MMC snap-in project under the debugger:

image  image

The full code is available over here. Usual disclaimers apply - this is nothing more than sample code...

Enjoy!


My recent series of "cookbook" posts has been very well-received and coincidentally I got mail today about MMC 3.0 snap-ins. I wrote on the subject a while ago (read: Vista RC1 timeframe), so in this post I'll revisit the topic in cookbook style in order to provide the foundation for my next post on MMC 3.0 and PowerShell layering, a topic I talked about at TechEd EMEA 2007 but that never made it to a blog post.

For the record, my previous cookbook posts include the following:

For regular readers of my blog, steps 1 and 2 will sound repetitive, but in true PowerShell style I'd say "verbosity is your friend" (J. Snover), especially in cookbooks.

 

Step 1 - Create a Class Library project

Create a new (C#) class library project, e.g. called MyMmcSnapIn:

image

 

Step 2 - Import references

In Solution Explorer, right-click the project node and select Add Reference... On the Browse tab, navigate to the %programfiles%\Reference Assemblies folder and locate the Microsoft\mmc\v3.0 subfolder:

image

Note: Reference Assemblies are installed by the Windows SDK - a must-have for each platform developer.

In there, select the Microsoft.ManagementConsole.dll file:

image

Click OK. Next, go back to the Add Reference dialog and select System.Configuration.Install from the .NET tab:

image

 

Step 3 - Create your snap-in

Snap-ins derive from the SnapIn class. Go ahead and rename Class1.cs to MySnapIn.cs. Next, inherit the class from SnapIn, which will require you to import the Microsoft.ManagementConsole namespace:

image

For the sake of this post, let's keep things simple and just implement the bare minimum, i.e. setting the RootNode property to some node:

image

I'm using C# 3.0 syntax to do this, as shown below:

image

resulting in this piece of code:

image
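
A minimal sketch of what that code could look like (the node text is made up):

using Microsoft.ManagementConsole;

namespace MyMmcSnapIn
{
    public class MySnapIn : SnapIn
    {
        public MySnapIn()
        {
            // C# 3.0 object initializer syntax to create and name the root node.
            this.RootNode = new ScopeNode { DisplayName = "Hello MMC" };
        }
    }
}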

A real implementation (see next post) would likely create a node class that derives from ScopeNode to create a custom node that consumes data (e.g. from PowerShell, see next post).

 

Step 4 - Adding metadata

Our snap-in needs to carry some metadata using the SnapInSettings custom attribute. This one needs a GUID so go to Tools, Create GUID to create one:

image

Click Copy and Exit which will put a GUID on the clipboard:

image

Now add the SnapInSettings custom attribute to your class:

image

and specify DisplayName and Description (other properties are not really required in this case):

image

 

Step 5 - The installer

In order to register the snap-in on the machine we need to add an installer class. This is as easy as creating a class deriving from SnapInInstaller:

image

Here's the resulting complete code:

image
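
Put together, a sketch of the complete code (the GUID, display name and description are placeholders - use your own generated GUID):

using System.ComponentModel;
using Microsoft.ManagementConsole;

namespace MyMmcSnapIn
{
    [SnapInSettings("{11111111-2222-3333-4444-555555555555}",   // your generated GUID here
        DisplayName = "My First SnapIn",
        Description = "Just a sample")]
    public class MySnapIn : SnapIn
    {
        public MySnapIn()
        {
            this.RootNode = new ScopeNode { DisplayName = "Hello MMC" };
        }
    }

    // Registers the snap-in when running installutil against the assembly.
    [RunInstaller(true)]
    public class MySnapInInstaller : SnapInInstaller
    {
    }
}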

 

Step 6 - Compile and register

Time to compile the project. Next, open up a Visual Studio 2008 Command Prompt running as Administrator and cd into the bin\Debug folder of your project. Now run installutil on the created assembly:

image

To verify the installation was successful, you can take a look in the registry under HKLM\Software\Microsoft\MMC\SnapIns and look for a key called FX:myguid where myguid is the one specified on the SnapInSettingsAttribute. It should point at your newly created assembly:

image

 

Step 7 - Create an MMC console for debugging

Time to test our snap-in. Go to Start, Run and specify mmc.exe. In the MMC console go to File, Add/Remove Snap-In (CTRL-M). You should see the registered MMC snap-in in there:

image

Notice the name and description. Select it and click Add. Finally click OK. The result should look like this:

image

Quite minimalistic, I agree, but we're alive and kicking! Finally choose File, Save and save the console configuration to a file called Debug.msc under your project's folder:

image

Finally, close the MMC console (otherwise the loaded snap-in dll would remain locked, blocking subsequent builds).

 

Step 8 - Setting up the debugger

Back in Visual Studio, go to the project properties. On the Debug tab, enter the path to mmc.exe (in the system32 folder) and, for the command-line arguments, specify the relative path (starting from bin\Debug) to the Debug.msc file created in the previous step:

image

Set a breakpoint in the code:

image

and press F5. You'll see the breakpoint getting hit:

image

 

Congratulations - your MMC debugging dinner is ready to be served!


A small post this time. While playing around with LINQ queries lately I noticed one minor missing piece that's merely a convenience thing, but I thought I'd share it with the world anyway: a ForEach operator. Such a "sequence operator" (to use an old-fashioned word - remember LINQ to Objects used to be called the Standard Query Operators, explaining the abbreviation used in my LINQSQO project at http://www.codeplex.com/LINQSQO) would allow us to write a query and iterate over it directly; look at it as a postfix variant of the foreach keyword if you want.

Here's what it looks like:

static class MoreEnumerable
{
   public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
   {
      if (source == null)
         throw new ArgumentNullException("source");

      if (action == null)
         throw new ArgumentNullException("action");

      foreach (T item in source)
         action(item);
   }
}

I won't elaborate on possible combinations with the Parallel FX extensions library and will leave that to the reader. Anyway, here's how your brand new home-brew operator would be used:

(from p in products where p.UnitPrice > 123 select new { Name = p.ProductName, Price = p.UnitPrice }).ForEach(p => {
   Console.WriteLine(p);
});

which is similar to List<T>'s ForEach method. Notice you'll have full IntelliSense inside the lambda body - the type of p is inferred through the generic parameter T, which is the anonymous (projection) type in the sample above.

Have fun!

Update: Apparently people read my posts as late as I'm posting them :-) which is of course well appreciated. I've posted a few personal insights on the pros and cons of this pattern in this post's comments section. Actually the original goal of the post was just to show some "more extensions" one could envision (have a set of other "functional style operators" coming up) but I like the idea of turning it into discussion mode :-). All feedback is welcome!


Introduction

Today I received the following mail from one of my blog readers:

Hi Bart,

The task scheduler in Vista and Windows Server 2008 has improved dramatically.

Unfortunately, there are no classes in the .NET Framework that allow us VB.NET/C# developers to leverage its power.

Many .NET applications require some form of scheduling or alerting, and instead of trying to roll our own with timers and stuff, over and over again, it seems to me that it would be much nicer to use the stable and powerful foundation that the OS offers for this.

Therefore, I thought it would be a great idea for a future article on your blog, about how to access and use the Task Scheduler 2.0 from .NET, with possibly an easy-to-use .NET wrapper class?

It seems that a lot of developers don't realize or think about what's sitting there right under the hood, 'cause I haven't seen any blog posts about this new Task Scheduler and its many features from a .NET developer's perspective yet. So here's my thought... :)

Best regards,

************

Obviously I completely agree with the statement on leveraging the power of the OS foundations whenever possible rather than reinventing the wheel over and over again. Task Scheduler 2.0 is a great sample of such rich functionality offered by the OS, and especially now that we're shipping Windows Server 2008 this becomes even more important for server applications. Nevertheless, for desktop uses the Task Scheduler provides a tremendous amount of functionality as well, and Windows Vista is eating its own dogfood here, as you can see when you execute schtasks from the command line (I've indicated a few well-known tasks in red):

image

In this post I'll cover how to use this API in a fairly easy way from managed code through COM interop, and I'll explain some of the richness the platform can give you.

 

Importing the library

I assume you've already created a Console Application in C# (though all of this would work in e.g. VB.NET as well). The Task Scheduler 2.0 API lives in a file called Taskschd.dll under the System32 folder on your system. In order to reference it from your .NET project, simply go to Solution Explorer, right-click the project node and choose Add Reference:

image

This will create the COM interop assembly as shown in Solution Explorer:

image

 

A simple demo task

Time to write our first task, or better, to register it. Essentially tasks in Task Scheduler 2.0 are represented as XML fragments, as you can see from schtasks:

image

I'd encourage readers to take a closer look at schtasks and the information one can obtain through it about the wide variety of tasks registered on the system. The API we'll be talking to allows us to manage these tasks (create new ones, for example) through code, and provides an object model to create the metadata that represents a task, persisted in the XML format displayed above.

 

Step 1 - Establish a connection with the service

In order to talk to the Task Scheduler service we need to create a proxy object for it and connect to the service, either on the local machine or on a remote machine. We'll stick with the local machine for the scope of this post. Start by writing the following piece of code:

image

This will require you to import the namespace TaskScheduler, as revealed by the 'SmartTag' in Visual Studio. Notice I'm using the C# 3.0 local variable type inference keyword "var" here, but one could equally well write:

TaskSchedulerClass scheduler = new TaskSchedulerClass();

but not (using the TaskScheduler interface provided by the interop assembly)

TaskScheduler scheduler = new TaskSchedulerClass();

(little quiz - why?). Anyhow, we still need to connect to it using the Connect method:

image

We can simply supply four null arguments indicating we want to use the token of the currently logged-on user (tip: run using administrative privileges to manage the service effectively). Needless to say, you can use other values for those parameters to connect to a particular machine (first parameter) and to specify a particular user (parameters 2-4 specify user name, password and domain) but we won't go there in this post.
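
Pieced together, the first lines boil down to something like this (a sketch based on the interop types generated from Taskschd.dll):

// Create the service proxy and connect to the local machine
// with the current user's token (all four arguments null).
var scheduler = new TaskSchedulerClass();
scheduler.Connect(null, null, null, null);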

 

Step 2 - Create a new task

The scheduler class provides a factory approach to create new tasks using the NewTask method. It takes one parameter that's reserved for future use (a typical COM API phenomenon) and should be set to 0 for the time being. Once the task has been created, we'll set some properties on it; the most typical ones living under RegistrationInfo and Settings (others will be covered further on):

image

Notice the amount of settings available to tweak the task, e.g. to control behavior with respect to the current power state of the machine, idle time, etc. For our purposes, the RegistrationInfo settings are enough as shown above.
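
As a rough sketch of what that looks like (the author and description values are made up):

// The NewTask argument is reserved for future use and must be 0.
ITaskDefinition task = scheduler.NewTask(0);

task.RegistrationInfo.Author = "Bart";
task.RegistrationInfo.Description = "A simple demo task.";

// Lots of knobs live under Settings (power state, idle time, etc.);
// the defaults are fine for our demo.
task.Settings.Enabled = true;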

 

Step 3 - Triggers

When to run the task? Enter triggers. There are a bunch of different trigger types available as revealed when calling Triggers.Create:

image

Most of these are self-explanatory (in case of doubt more information can be found on MSDN). The more interesting part is how to create a trigger in managed code. The crux lies in the fact that you need to cast the result of the Create call to the right interface, such as ITimeTrigger for the TASK_TRIGGER_TIME type. Let's show a sample:

image

Other similar interfaces for other types of triggers can be found in the TaskScheduler namespace. Time triggers are pretty simple to understand so let's stick with those. In the sample above, we add an identifier to the trigger (tasks can have more than one trigger, by the way) as well as some specific settings for this particular trigger. Besides this, we set the start and end time for the trigger; the settings in the sample specify a point in the past and one in the future so our current time falls nicely in between, triggering the demo task right away once we run it. If you want more powerful triggering, you can take a look at the Repetition property or use triggers such as 'daily' or 'monthly day-of-week (DOW)' or ...

Notice the strange (ISO 8601) date/time format specified on MSDN as: YYYY-MM-DDTHH:MM:SS(+-)HH:MM. In here, the first part is self-explanatory; the part after the +- is used to specify a time zone since tasks are stored based on UTC time. Tip: a string formatter will prove useful to generate this format.
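A sketch of a time trigger, including the date/time formatting tip (the boundary values are arbitrary; double-check the enum and interface names against the interop wrapper Visual Studio generated):

ITimeTrigger trigger = (ITimeTrigger)task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_TIME);
trigger.Id = "TimeTrigger1";

// StartBoundary and EndBoundary take ISO 8601 strings; a custom
// format string generates these from DateTime values.
trigger.StartBoundary = new DateTime(2008, 1, 1, 8, 0, 0).ToString("yyyy-MM-dd'T'HH:mm:ss");
trigger.EndBoundary = new DateTime(2010, 1, 1, 8, 0, 0).ToString("yyyy-MM-dd'T'HH:mm:ss");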

 

Step 4 - Actions

After when comes what. Again there's some choice amongst the different types of actions to be taken:

image 

The EXEC action is one of the most common ones though. The common pattern is the same again: Create, cast, configure. Here's an example of a mail action, but this one will require some server configuration in order to work:

image

Here's another one in the category EXEC:

image

Feel free to choose either of those, I'll go for yet another one that displays a message (by now the pattern should be captured I guess):

image
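
A sketch of the message action (title and body are made up; again, verify the names against your interop assembly):

IShowMessageAction message =
    (IShowMessageAction)task.Actions.Create(_TASK_ACTION_TYPE.TASK_ACTION_SHOW_MESSAGE);
message.Title = "Demo";
message.MessageBody = "Hello from Task Scheduler 2.0!";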

 

Step 5 - Task registration

We've created all the data needed to hook up the task. The last step is to actually hook it up by calling RegisterTaskDefinition on some folder. Tasks are logically grouped in folders, which you can manage through the ITaskFolder interface. One can obtain a specific folder using the GetFolder call on the scheduler service object. For demo purposes (and because of a lack of inspiration tonight :-)) we'll drop the task in the root folder:

image

Again there's a bunch of flexibility available here but simplicity rules for blogging, so the stuff above is pretty much the easiest one can get. Basically we create (or update if it already exists) a task named "Demo" with the credentials of the logged-on user, that can only be run when an interactive user is logged on to the machine, and with no custom ACL to protect the task (which could be set using an SDDL descriptor).
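
In code, the registration could look roughly like this (a sketch; verify the enum names against your interop assembly):

ITaskFolder root = scheduler.GetFolder("\\");

IRegisteredTask registered = root.RegisterTaskDefinition(
    "Demo",                                         // task name
    task,
    (int)_TASK_CREATION.TASK_CREATE_OR_UPDATE,      // create, or update if it exists
    null,                                           // user: current logged-on user
    null,                                           // no password needed
    _TASK_LOGON_TYPE.TASK_LOGON_INTERACTIVE_TOKEN,  // run only when logged on interactively
    null);                                          // no custom SDDL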

 

Step 6 - Running it

To run the task we could add a line of code (though you could use schtasks /Run too, or obviously rely on the (complex) triggers you've put in place). Since the API is not only about creating tasks, this shows nicely how to control tasks. Here's the whole program with the run line at the bottom:

image
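
The run line itself is a one-liner on the registered task (continuing from the registration sketch above):

registered.Run(null);  // null = no parameters passed to the running task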

Run it and you should see the following dialog coming out of the blue:

image

Victory at last :-).

 

Step 7 - Geeks only

Geeks can check where the message comes from e.g. using task manager (or a more advanced tool like Process Explorer):

image

Also, you can take a look at the task metadata using schtasks.exe:

image

And finally, if you want to delete the experiment, use schtasks.exe with the /Delete flag:

schtasks /delete /tn Demo

(or use the API to do so obviously :-)).

 

Happy multi-tasking!


It's great to see Scott pulled the trigger to tell the world about our .NET 3.5 Client Product Roadmap. Lately our setup team - which I'm officially part of - has been working in overdrive to put the pieces for the "Improved .NET Framework Setup for Client Applications" together and we're all looking forward to our first beta release. Once it's available I'll post more technical details outlining how it will help to ease your deployment, how it works and how you can customize and brand your client application's deployment experience.

Stay tuned!


I'm a firm believer in the "innovation through integration" theme. As a fan of MSBuild and PowerShell I wondered what it would take to bring the two worlds closer together. This post outlines the result of a short but 'powerful' experiment. Before reading this post, I strongly recommend checking out my recent post named The custom MSBuild task cookbook to learn about writing and debugging custom MSBuild tasks.

 

Hosting PowerShell

In order to run PowerShell in a customized environment (as opposed to the default shell that comes with the technology) one needs to work with runspaces. Essentially a runspace allows you to host the PowerShell engine and interact with it through pipelines. We'll only cover very basic communication in this post. If one wants to feed data from PowerShell back to the MSBuild output for instance, a PSHost implementation would be required, but that goes far beyond the scope of this post.

I've posted about runspaces earlier in my A first introduction to Windows PowerShell runspaces post about one year ago. You might want to check out that post for more information on hosting PowerShell.

 

Introducing PSBuild

Far from original, I admit, but let's call our baby PSBuild. In order to implement it, create a new class library project (C#) and add references to the following assemblies:

image

Including the MSBuild assemblies has been covered in The custom MSBuild task cookbook post; for more information on the System.Management.Automation assembly, see my Easy Windows PowerShell cmdlet development and debugging post (step 2).

 

Implementing the task

First on our to-do list is implementing the custom MSBuild task in the C# code file. It's barely 45 lines:

using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Text;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

namespace PSBuild
{
    public class InvokeScript : Task
    {
        [Required]
        public ITaskItem Script { get; set; }

        [Required]
        public string Function { get; set; }
        public ITaskItem[] Parameters { get; set; }

        public override bool Execute()
        {
            RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();

            using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
            {
                runspace.Open();

                StringBuilder commandLine = new StringBuilder();
                commandLine.Append(Function + " ");

                // The Parameters property is optional, so guard against null.
                if (Parameters != null)
                {
                    foreach (ITaskItem parameter in Parameters)
                    {
                        commandLine.AppendFormat("\"{0}\" ", parameter.ItemSpec);
                    }
                }

                using (RunspaceInvoke scriptInvoker = new RunspaceInvoke(runspace))
                {
                    scriptInvoker.Invoke(Script.ItemSpec);
                    scriptInvoker.Invoke(commandLine.ToString());
                }
            }

            return true;
        }
    }
}

Usual disclaimers apply - this code is far from ideal and is just meant to illustrate the concept. Talking about a concept... Let's discuss:

  • We require two parameters: Script containing a PowerShell script block and Function pointing to the function to be invoked. The Parameters for invocation are optional since a function may have no parameters obviously.
  • Notice the type of the Script and Parameters properties. By using ITaskItem we can integrate nicely with MSBuild as we'll see further.
  • The Execute method does all the work. Essentially we create a Runspace and invoke two commands: first is the Script definition itself, second is the invocation of the script based on the Function and Parameters values.
  • Error handling was omitted from the code above - a production quality implementation needs to catch errors and return false in case of an error. Also, some logging (Log property on Task) would be welcome, e.g. to print the command-line that's being invoked (tip: Log.LogCommandLine).

 

Testing the task

Check out my The custom MSBuild task cookbook post for instructions on testing custom MSBuild tasks. I'll just show a sample MSBuild file below that invokes a script:

image

The UsingTask element imports our task library built in the previous step. To define the script, we simply define a MyScript tag under a PropertyGroup element. In here we define a function called "ProcessList" that takes in two arguments. I've spread it across two lines to show that local variables (and by extension - try it yourself :-) - more advanced scripting techniques) simply work. Finally, we invoke our InvokeScript task somewhere, in this case in the Debug target (again, see The custom MSBuild task cookbook for more info), but you could imagine it to be part of your core build definition. The InvokeScript task references MyScript through the property reference syntax $(...) of MSBuild; the Function is a simple string and in Parameters we put a semicolon-separated list of parameters which will be assigned to $args[0] ... $args[n] in the invoked PowerShell script.
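
For reference, here's a sketch of what such a build file could look like. The assembly path, function body and parameter values are assumptions; note the use of [Console]::WriteLine rather than Write-Host, since our bare runspace has no PSHost to handle host output:

<Project DefaultTargets="Debug" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- The AssemblyFile path is an assumption; point it at the PSBuild output. -->
  <UsingTask TaskName="InvokeScript"
             AssemblyFile="C:\Projects\PSBuild\bin\Debug\PSBuild.dll" />

  <PropertyGroup>
    <MyScript>
      function ProcessList
      {
        $message = "Processing " + $args[0]
        [Console]::WriteLine($message + " and " + $args[1])
      }
    </MyScript>
  </PropertyGroup>

  <Target Name="Debug">
    <InvokeScript Script="$(MyScript)" Function="ProcessList" Parameters="Foo;Bar" />
  </Target>
</Project>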

One could imagine the parameterization of the InvokeScript task to be much more complete and flexible (e.g. one could drop the Function attribute and simply execute some script) but that's just a matter of implementation. Also a way to feed back results isn't too difficult (RunspaceInvoke::Invoke returns a collection of PSObjects).

Here's what it does:

image

Notice that one can use any MSBuild variable in the parameterization which gives us a tremendous amount of power. For example, one could write a script that pre-processes all files in @(Compile), and leverage all of the PowerShell and .NET Framework power to do so. I leave it to the reader to experiment with the possibilities.

 

Happy PSBuilding!


While I'm in the mood for writing up those cookbook posts:

let's do another one. Custom actions are a powerful way to extend MSI-based installers with custom code. Maybe you've run into these before when you created, for example, a managed Windows Service and hooked it up in the setup project (essentially installutil-able components can be added to an installer by means of a custom action). In this post we'll write our own (dummy) custom action and show how to debug it nicely, something that seems to be a barrier keeping quite a few developers from considering custom actions.

 

Step 1 - Create a class library project

Once more, Class Library is your friend:

image

 

Step 2 - Add references

Choose Add Reference from the context menu on the project node in Solution Explorer. In there, select System.Configuration.Install:

image

 

Step 3 - Plumbing the wires

Custom actions in managed code are wrapped in Installer-subclasses, so derive your class from Installer:

image

and import the System.Configuration.Install namespace. In order for the installer class to be picked up at runtime, you need to attribute it with RunInstaller(true):

image 

This is the result:

image

 

Step 4 - Add functionality

In order to make the custom action do something, you need to override some methods of the base class. I've indicated the most common ones:

image

These correspond to the actions taken by MSI. Let's just override Install and Commit and play a little with the state that's passed around (let's not go in depth; more information is available on MSDN):

image

 

Step 5 - Make it debuggable

Our goal is to attach a debugger, but custom actions are launched somewhere in the middle of some MSI process. How can we allow for easy debugging? Here's a way to do it: instrument your code with some wait mechanism that gives you time to attach the debugger. A nice way is to use MessageBox. To do this, you'll need to add System.Windows.Forms to the references of the project (as in step 2) and import the namespace:

image

By wrapping these calls in #if DEBUG blocks we make sure the code doesn't make it into Release builds, which is good.
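
Here's a sketch of what the installer could look like at this point (class and state key names are made up):

using System;
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
#if DEBUG
using System.Windows.Forms;
#endif

namespace MyCustomAction
{
    [RunInstaller(true)]
    public class MyInstaller : Installer
    {
        public override void Install(IDictionary stateSaver)
        {
#if DEBUG
            // Blocks the installer so we can attach the debugger.
            MessageBox.Show("Attach debugger here (Install)");
#endif
            base.Install(stateSaver);

            // Stash some state; it flows on to Commit (and Rollback/Uninstall).
            stateSaver.Add("InstalledAt", DateTime.Now.ToString());
        }

        public override void Commit(IDictionary savedState)
        {
#if DEBUG
            MessageBox.Show("Attach debugger here (Commit)");
#endif
            base.Commit(savedState);

            // The state saved during the install phase is recovered here.
            string installedAt = (string)savedState["InstalledAt"];
        }
    }
}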

 

Step 6 - The setup project

Time to build the setup project. Add a new project to the solution and choose Setup Project:

image

The project will open in File System view. Go to the Application Folder, right-click in the right-hand side pane and choose Add Project Output:

image

Select Primary Output for the MyCustomAction project created above:

image

This will add the DLL file to the installation folder:

image

Now it's time to add the Custom Action to the installer (in technical terms to the to-be-built MSI database). Make sure the setup project is selected in Solution Explorer and select Custom Actions Editor from the toolbar:

image

Adding the actions is simple:

image

and select the Primary output from MyCustomAction from above:

image

Do the same for the Commit node.

image

 

Step 7 - Build and debug

That's it. Time for a test run. First, build the MyCustomAction project. Next, right-click the setup project node and choose Build. This is not done when building the solution (since it takes quite some time and you likely do it only sporadically in a bigger solution):

image

Next, right-click the project again and choose Install:

image

Here we go. Click your way through the installer and wait. On Vista you'll need to give UAC consent. After a while you'll see:

image

Don't click OK yet. Switch back to Visual Studio and choose Debug, Attach to Process:

image

You won't find the process showing the 'Attach debugger here' dialog in the list at first sight; that's because the custom action is running in the context of the Installer Service, in an msiexec.exe instance running under a different (service account) user. Mark 'Show processes from all users' to see it:

image

and click Attach. On Vista you might see the following if you didn't start Visual Studio elevated:

image

Elevation is needed because you're about to debug something with more privileges and rights (running under SYSTEM after all). Choose Restart under different credentials and accept the UAC prompt. Visual Studio will come back in the same project but you'll have to repeat the previous steps to attach the debugger. Notice the user name is now displayed in the dialog:

image

Set breakpoints on the instructions right below the #if DEBUG section:

image

and click OK on the dialog:

image

You'll see the breakpoint being hit:

image

Woohoo! Feel free to step through the (one) line(s) of the custom action and finally hit F5. Now the commit dialog appears and when dismissing it, we'll end up on the next breakpoint:

image

Notice the state from the install phase was recovered:

image

 

Enjoy!


A few years ago I wrote about building custom MSBuild tasks. I wanted to bring the topic back into the spotlight in order to prepare for a follow-up post. Since my previous post on PowerShell cmdlet development (Easy Windows PowerShell cmdlet development and debugging) has been very well-received, I decided to create a similar cookbook for those of you who're interested in building and debugging your own custom MSBuild tasks. The focus of this post is primarily on seamless development and debugging rather than on task functionality.

 

Step 1 - Create a Class Library project

As usual for these kinds of extensions to an existing system (e.g. PowerShell, MMC, MSBuild, provider-based technologies, etc.) we start by creating a class library project:

image

 

Step 2 - Import references

In order to create MSBuild tasks we need to reference a few libraries. In Solution Explorer, right-click your project and select Add Reference. Now select Microsoft.Build.Utilities.v3.5 and Microsoft.Build.Framework from the list, both from the .NET Framework 3.5 band (note: I won't use 3.5-specific functionality in this post, so you could use the 2.0 assemblies as well, but as a general recommendation don't mix different versions):

image

Solution Explorer should look like this (notice I've removed a few other references I don't need but obviously this depends on your goals for the custom task):

image

 

Step 3 - Implement the task skeleton

Implementing custom MSBuild tasks isn't very difficult - all you need to do is implement the task interface ITask. However, if you try to do that you'll see there are a few members to be implemented that just add boilerplate code. Instead, one can derive from the abstract base class Task:

image 

This will require the Microsoft.Build.Utilities namespace to be imported as shown above. I revealed the 'functionality' of this task in the class name in the meantime; I hope it's not too shocking :-). After importing the namespace, implement the base class:

image

Just one method to go, not too frightening:

image 

 

Step 4 - Task parameterization

Before we dig into the Execute method we should think of adding some parameterization in order to communicate with the outside world. Although it's not really required, I bet there's little you can do without it... Parameters are simply properties on the class, more or less like parameters on cmdlets in PowerShell. Let's add a property using the prop snippet:

image

Press TAB twice and fill in the placeholders:

image

In reality you'd typically add (array) parameters of type ITaskItem because you'll usually want to reference certain files in the build system. ITaskItem is the gateway to do this, but let's not go there for now. In order to make parameters required, simply add a RequiredAttribute to them. This requires importing Microsoft.Build.Framework:

image

 

Step 5 - Implementing functionality

Now it's time to provide the real functionality in the Execute method body. Let's do something simple, i.e. logging some message to the build system. In reality you'd manipulate files or so, possibly generating output (see the Output attribute) but simplicity is key in this post:

image

A simple two-liner: first we log something in String.Format style, also specifying an importance level for the message (when running MSBuild you can control the "verbosity"), then we return success (true) or failure.
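
A sketch of the complete task at this point (the namespace is an assumption; the class name and the Name property match what we'll reference from the build file below):

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

namespace MyTask
{
    public class HelloWorldTask : Task
    {
        // Required parameter, set from the build file as an attribute.
        [Required]
        public string Name { get; set; }

        public override bool Execute()
        {
            // String.Format-style logging with an importance level;
            // MSBuild's verbosity setting decides whether it shows up.
            Log.LogMessage(MessageImportance.High, "Hello, {0}!", Name);
            return true;
        }
    }
}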

 

Step 6 - Setting up debugging

Now comes the key take-away of this post: how to configure debugging? There are various ways of doing it, each uglier than the other, but the following approach is pretty clean. First, add a new item to the project (choose Add New Item in the context menu on the project node in Solution Explorer) and choose XML File. Name it Debug.testproj:

image

In this file, add the following piece of XML:

image

Let's explain a few things:

  • On the Project node we refer to Debug in the DefaultTargets. We define this target a bit further in the Target node.
  • The UsingTask node is the most important one. Basically MSBuild loads tasks from assemblies using reflection. The TaskName attribute needs to match the class name in the assembly, in our case HelloWorldTask. To reference the assembly there are two options: AssemblyName to specify the name (e.g. MyTask, Version=1.0.0.0, PublicKeyToken=..., Culture=neutral), typically for tasks in the GAC, or AssemblyFile to reference an assembly directly. We use the latter option, referring to the output of our class library project (make sure this references the right folder where you created the project, suffixed by bin\Debug\assembly.dll).
  • Finally we define the Target with name Debug (as referred to in the Project node), calling our task. Calling a task consists of specifying its name as a tag and adding any of the parameters (in our case the required Name property) as attributes.
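
Put together, a minimal Debug.testproj could look like this (the assembly path is an assumption; adjust it to your project's output):

<Project DefaultTargets="Debug" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- The AssemblyFile path is an assumption; point it at your project's output. -->
  <UsingTask TaskName="HelloWorldTask"
             AssemblyFile="C:\Projects\MyTask\bin\Debug\MyTask.dll" />

  <Target Name="Debug">
    <HelloWorldTask Name="Bart" />
  </Target>
</Project>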

That's it:

image

Don't worry about the blue squiggly line; the MSBuild schema doesn't know about our custom task but that's fine. It will find it if you got the UsingTask declaration right (comparable to a using statement in C#).

Time to configure the debugger. Right-click your project and choose Properties. Go to the Debug tab and specify the following:

image

In the 'Start external program' textbox enter the path to your MSBuild.exe file. Make sure to use the one that matches the versions of the referenced assemblies in step 2 (I chose the 3.5 assemblies, so I refer to %windir%\Microsoft.NET\Framework\v3.5\MSBuild.exe). Under 'Command line arguments' enter the path to the Debug.testproj file created above (you can find the path by marking the file in Solution Explorer and copying the Full Path property from the Properties pane).

 

Step 7 - Set a breakpoint and run

Time to test drive. Set a breakpoint in the code:

image

and hit F5. You'll see that MSBuild starts:

image

and our breakpoint is hit:

image

You can hover over the variables to get runtime information:

image

Press F10 to step to the next line and switch back to the MSBuild window:

image

Congratulations! You've successfully stepped through your first MSBuild task in the debugger.

 

Step 8 - Advanced debugging

Of course when more complex interactions with a complex build file are required, you'll need to do more "live debugging". In such a case multiple solutions exist:

  • Simply tweak the 'Command line arguments' in step 6 to point to the more complex build file and add your custom task in a similar way as described in step 6, hooking it up in the right spot.
  • Create a debugger assistant task that pops up a message box (MessageBox.Show) with a message "Attach debugger here" and hook it up in your to-be-tested project as a pre-build step. When launching the build (maybe even on an external machine using remote debugging) the message box will block further execution, allowing you to go to Debug, Attach to Process. You'll recognize the process to attach to by the message box's title "Attach debugger here". Set breakpoints and click OK on the message box and you're in business.
  • If you need to debug startup code in your task (such as a static constructor - which isn't really the best idea in most cases), the first approach will be the best since you're attaching to MSBuild right from the start.

Nevertheless, the technique outlined in this post should be good enough to cover most MSBuild custom task debugging cases.

 

Happy debugging!


Next month I'll be traveling back to Europe to speak at a couple of Heroes Happen Here launch events (or to be more precise, at the TechDays conferences organized right after these). Here's my schedule:

If you're a hero that happens to attend one of these conferences over there, I'm looking forward to seeing you in one (or more) of my sessions. I'll be presenting on a few topics (the exhaustive list is still to be compiled; I'll post it over here when it's available) such as WPF Futures, PowerShell 2.0, Custom LINQ Providers, Parallel Extensions and maybe ASP.NET MVC as well. Stay tuned!


Introduction

By now it should be a common introduction slogan for Windows PowerShell, but let me repeat it: Windows PowerShell closes the gap between developers and IT Pros. Why do I make this statement (again)? Well, giving IT Pro people full access to the joys of the .NET Framework and its richness of objects is certainly a good thing. The other way around, having developers think about making their products manageable by means of providers and cmdlets is certainly a good thing as well, since it makes software products stand out much better in production environments.

Nevertheless, there are a few places where the developer's mindset and the IT Pro's clash a little. One such place might look a little philosophical at first: invocation of operations on objects. The latter noun, "objects", is well-known in both camps. It's about something (an entity that can represent virtually anything, in the management space often referred to as a management object, e.g. representing a mailbox) one can query data from (properties) and invoke operations on (methods). The latter aspect is the one we'll cover in this post.

 

Prefix or postfix? No big deal uh...

Why dedicate a post to method invocation? Let me divide the world into two artificial camps: the Prefix Club and the Postfix Club. The Prefix Club likes to say upfront what they want to do and then think about the subject of the operation. The Postfix Club tends to think the other way around: victim first, operation second. In the Prefix Club we find the IT Pros (amongst others of course):

c:\temp> type bar.txt
c:\temp> taskkill /PID 1234
c:\temp> dsadd <user> ...

Over in the Postfix Club we find quite some developers (excluding some rare groups that speak LISP etc):

File.OpenRead("bar.txt").ReadAllLines()
Process.GetProcessById(1234).Kill()
directoryEntry.Children.Add(<user>)

See the core difference? The main reason postfix works for developers is likely the tool support (IntelliSense anyone?) and although PowerShell has similar constructs (maybe not as visual as IntelliSense) there's still a difference in discoverability. PowerShell has basically two ways of performing management operations, the more natural one being the use of cmdlets that are designed to be discoverable by their name (verb-noun convention). The other one is invoking methods on objects, more in a developer style with discoverability through get-member (gm). Cmdlets are prefix by nature (verb = what, noun = target) while method invocations are - as mentioned before - postfix.

Taking a bit more distance, the funny thing about the recent set of language enhancements is the intrinsic support in C# 3.0 and VB 9.0 to invert the operation invocation order by means of extension methods, but we won't go that far in this post (maybe next time). Just think of the transform an extension method performs on an object:

victim.Operation(...)
Extensions.Operation(victim, ...)

 

Our mission

What about reducing the burden put on developers to make their objects manageable in a more natural PowerShell way? Assume you have some rich object that has a bunch of methods and you simply want to expose these methods as prefix-style cmdlets. Let's take a look at one that already exists in PowerShell: Stop-Process. Essentially the implementation of Stop-Process is fairly simple since its base functionality is to, euhm, stop a process using .NET Framework functionality. I guess the Process.Kill method rings a bell:

Stop-Process p --> p.Kill()

Of course, cmdlets often provide a way to add more flexibility, e.g. by allowing a process name to be passed in instead of a PID or a Process instance, possibly using wildcards, etc. But having a way to expose .NET APIs in a natural way as cmdlets (without blocking later functionality enrichment) sounds compelling. This is what this post is all about.

Note: Make sure you have some basic understanding of cmdlets and the art of writing these (together with PowerShell snap-ins). Check out my Easy Windows PowerShell cmdlet development and debugging post for more information.

 

Methods as cmdlets - the basics

Cmdlets in PowerShell are written as classes that derive from the Cmdlet base class in System.Management.Automation, with one core method called ProcessRecord. The simplest cmdlet looks like this:

[Cmdlet("Get", "Greeting")]
class
GetGreetingCmdlet : Cmdlet
{
   public override void ProcessRecord()
   {
      WriteRecord("Hello World");
   }
}

Essentially one could think of the cmdlet in a reverse way: what would this thing look like when encapsulated in a PowerShell-unaware class? Likely something like this:

static class MyStuff
{
   static string GetGreeting()
   {
      return "Hello World";
   }
}

Say we have such a class, how could we go ahead and expose it in PowerShell in a simple and straightforward way? Please welcome the method invocation cmdlet. What we'd like to achieve (and we'll extend this step by step in this and future posts) is this:

[Cmdlet("get", "greeting")]
class GetGreetingCmdlet : MethodCallCmdlet
{
   // We're missing essential information in this cmdlet's metadata: what type to invoke on?
}

Of course we need to think about a lot more than just this scenario: static methods versus instance methods, parameterization with overloading, etc. No worries, we'll get there eventually...

 

Static versus instance

How could we model static versus instance method calls? Let's start with the latter case. A way to allow instance methods is by taking the argument from the PowerShell pipeline and invoking the method on it. Say we want to encapsulate the String.Trim method, we could do it like this:

[Cmdlet("Trim", "String")]
class TrimStringCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public string Input { get; set; }
}

Using it should be as straightforward as:

PS> "  Bart    " | trim-string
Bart

All the magic to invoke the right method would be encapsulated inside the MethodCallCmdlet from which we derive. How would it find the method? Either it needs a hint or it can work automagically by looking at the CmdletAttribute: try Trim() first, then try TrimString(), then fail. If there's a hint, that takes precedence like this:

[Cmdlet("Trim", "String"), InstanceMethod("Trim")]
class TrimStringCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public string Input { get; set; }
}

Static methods need a hint anyway (the type to invoke the static method on) and don't consume input from the pipeline (for now...):

[Cmdlet("get", "greeting"), StaticMethod(typeof(MyStuff))]
class GetGreetingCmdlet : MethodCallCmdlet
{
}

 

Why this exercise?

There are various reasons. First of all, it has value by itself: the ability to encapsulate a method in a cmdlet in a straightforward manner seems to have some useful applications. Secondly, it just so happened I was writing quite some plumbing code on dynamic method invocations for some project, which largely applies over here (although the version I'm presenting here is greatly simplified concerning overloads and generic parameters). Also, this post will show some interesting real applications of LINQ to Objects that go beyond the typical query scenarios. Last but not least, it's part of a larger thing I have in mind that may or may not appear on the surface of the globe over time.

 

A design overview

I already explained how we're going to support static methods versus instance methods. Another thing to think about is parameterization: methods take arguments and possibly have overloads. The latter requires some advanced infrastructure to determine the right overload, which in turn requires intrinsic knowledge of the call that's taking place, so we won't focus on this right now as it can get fairly complicated. A more important thing, though, is establishing a mapping for these method arguments. A good way to do this seems to be the use of positional PowerShell parameters. For example, an instance method call with one parameter could look like this:

[Cmdlet("Add", "Days")]
public class AddDaysCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public DateTime Input { get; set; }

    [Parameter(Mandatory = true, Position = 1)]
    public double NumberOfDays { get; set; }
}

The invocation of such a method call cmdlet would look like this:

PS> [DateTime]::Now | Add-Days -NumberOfDays 10
PS> [DateTime]::Now | Add-Days -N 10
PS> [DateTime]::Now | Add-Days 10

The distinction between Mandatory parameters and non-mandatory ones could be used for overloading, and we'll provide some amount of plumbing for it, although the finishing touch to distinguish between different overloads at runtime won't be there. Suffice it to say that for such a thing you'd either need runtime information about the parameters that were provided to the cmdlet invocation, or there'd need to be some über-Nullable concept that works on reference types as well (I'll leave it to the reader to figure out why this is a true statement :-)).
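To make this concrete, here's what a hypothetical method call cmdlet relying on the mandatory/optional distinction could look like, targeting the two String.Substring overloads (GetSubstringCmdlet is my own invention, not part of the samples further on):

[Cmdlet("Get", "Substring"), InstanceMethod("Substring")]
public class GetSubstringCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public string Input { get; set; }

    [Parameter(Mandatory = true, Position = 1)]
    public int StartIndex { get; set; }

    [Parameter(Position = 2)]
    public int Length { get; set; }
}

Given the matching logic we'll end up with below, the longest overload Substring(int, int) always wins: omitting -Length doesn't fall back to Substring(int); instead the method gets called with Length's default value of 0. That's precisely the runtime overloading limitation pointed out above.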

 

Custom attributes

Let's start by defining the custom attribute definitions we'll use in our implementation:

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
public sealed class InstanceMethodAttribute : TargetMethodAttribute
{
    public InstanceMethodAttribute() : base(null, null) { }
    public InstanceMethodAttribute(string methodName) : base(methodName, null) { }
}

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
public sealed class StaticMethodAttribute : TargetMethodAttribute
{
    public StaticMethodAttribute(Type type) : base(null, type) { }
    public StaticMethodAttribute(Type type, string methodName) : base(methodName, type) { }
}

public abstract class TargetMethodAttribute : Attribute
{
    protected TargetMethodAttribute(string methodName, Type staticType)
    {
        MethodName = methodName;
        StaticType = staticType;
    }

    public string MethodName { get; set; }
    public Type StaticType { get; set; }
}

These are pretty simple to understand. Although different designs are possible, e.g. with just one custom attribute and an enum to denote the kind of method, we'll stick with this design for now since it serves our purpose well.
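For completeness, a minimal sketch of that alternative single-attribute design (using a hypothetical MethodTargetAttribute name to avoid confusion with the type hierarchy above) could look like this; notice how the constructor overloads become less intuitive:

public enum MethodKind { Instance, Static }

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
public sealed class MethodTargetAttribute : Attribute
{
    // Instance method; name inferred from the Cmdlet attribute.
    public MethodTargetAttribute() { Kind = MethodKind.Instance; }

    // Instance method with an explicit name hint.
    public MethodTargetAttribute(string methodName) { Kind = MethodKind.Instance; MethodName = methodName; }

    // Static method; the declaring type is required.
    public MethodTargetAttribute(Type staticType) { Kind = MethodKind.Static; StaticType = staticType; }

    // Static method with an explicit name hint.
    public MethodTargetAttribute(Type staticType, string methodName) : this(staticType) { MethodName = methodName; }

    public MethodKind Kind { get; private set; }
    public string MethodName { get; private set; }
    public Type StaticType { get; private set; }
}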

 

Where all the magic happens: ProcessRecord

Small methods are key, so the following fragment might be a little disappointing for now, but this is where all the magic happens...

public class MethodCallCmdlet : Cmdlet
{
    protected override void ProcessRecord()
    {
        //
        // Implementing cmdlet's type.
        //
        Type cmdletType = GetType();

        //
        // Get custom attributes.
        //
        TargetMethodAttribute tma = GetTargetMethodAttribute(cmdletType);
        CmdletAttribute ca = GetCmdletAttribute(cmdletType);

        //
        // Static or not?
        //
        if (tma != null && tma.StaticType != null)
            ProcessRecordForStaticMethod(cmdletType, tma, ca);
        else
            ProcessRecordForInstanceMethod(cmdletType, tma, ca);
    }

    ...
}

As you know by now, cmdlets derive from Cmdlet (more advanced ones from PSCmdlet) and implement ProcessRecord. So that's what we do. Let's zoom in a little: first we'll load the metadata from the custom attributes, which is fairly trivial:

private static TargetMethodAttribute GetTargetMethodAttribute(Type cmdletType)
{
    //
    // Only one (subtype of) TargetMethodAttribute is allowed.
    //

    var tmas = cmdletType.GetCustomAttributes(typeof(TargetMethodAttribute), false).Cast<TargetMethodAttribute>();
    if (tmas.Count() > 1)
        throw new InvalidOperationException("Invalid number of TargetMethod attributes on " + cmdletType);

    return tmas.FirstOrDefault();
}

private static CmdletAttribute GetCmdletAttribute(Type cmdletType)
{
    //
    // Need to find the Cmdlet attribute.
    //
    var ca = cmdletType.GetCustomAttributes(typeof(CmdletAttribute), false).Cast<CmdletAttribute>().FirstOrDefault();
    if (ca == null)
        throw new InvalidOperationException("Missing Cmdlet attribute");

    return ca;
}

Notice the lightweight use of LINQ to cast the sequence of retrieved attributes and to extract the first one. For our TargetMethodAttribute we need to do a bit more checking because our class hierarchy allows multiple applications of the subclass attributes, something that would be avoided in a design with just one custom attribute (though such a design would require a few less intuitive constructor overloads for the custom attribute).

The real stuff goes on in ProcessRecordFor*Method. These two methods are fairly simple as well:

private void ProcessRecordForStaticMethod(Type cmdletType, TargetMethodAttribute tma, CmdletAttribute ca)
{
    //
    // Do real processing.
    //

    ProcessRecordInternal(cmdletType, tma, ca, tma.StaticType, null, BindingFlags.Static);
}

private void ProcessRecordForInstanceMethod(Type cmdletType, TargetMethodAttribute tma, CmdletAttribute ca)
{
    //
    // Determine the lhs "this" parameter (the one bound to the pipeline).
    //
    object target = GetThisParameter(cmdletType);

    //
    // Do real processing.
    //
    ProcessRecordInternal(cmdletType, tma, ca, target.GetType(), target, BindingFlags.Instance);
}

Don't worry about the parameters yet; they will become clear soon. Parameters 4 to 6 are the ones that differ. The last one is fairly intuitive and passes on the kind of method we're looking for. Parameters 4 and 5 denote the declaring type of the method (which is straightforward in the static case since it's specified in the StaticMethodAttribute) and the invocation target respectively.

 

The instance method case: who are we calling?

That's what the GetThisParameter method is for. It basically searches for the ParameterAttribute in the current class that is marked as ValueFromPipeline. There's a little bit of magic going on here:

private object GetThisParameter(Type cmdletType)
{
    //
    // Get the property with a ParameterAttribute
    //
    var pi = (from prop in cmdletType.GetProperties()
              let pa = (ParameterAttribute)prop.GetCustomAttributes(typeof(ParameterAttribute), false).SingleOrDefault()
              where pa != null && pa.ValueFromPipeline
              select prop).SingleOrDefault();

    if (pi == null)
        throw new InvalidOperationException("Couldn't find cmdlet parameter bound to the pipeline");

    //
    // Get the underlying object. In case it's a PSObject, unwrap it.
    //
    object target = pi.GetValue(this, null);
    PSObject psTarget = target as PSObject;
    if (psTarget != null)
        target = psTarget.BaseObject;

    return target;
}

Woohoo! A real LINQ expression :-). In the first query we get the one and only parameter that's bound to the pipeline: we retrieve the ParameterAttribute (if any) on each property individually and keep only the properties where ValueFromPipeline is set. SingleOrDefault returns the single result, or null if there's none (and throws an exception if there's more than one match, a case we don't check for explicitly here). The second part deserves some explanation as well. PowerShell encapsulates objects in a PSObject in certain cases, and since we want to call a method on the real object, not on some wrapper around it, we unwrap the object that's nested inside whenever we see a PSObject.
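Since this unwrapping trick isn't specific to the pipeline-bound parameter, one could factor it out into a little helper; a minimal sketch (the Unwrap name is mine):

private static object Unwrap(object value)
{
    //
    // PowerShell may hand us a PSObject wrapper; we want the underlying .NET object.
    //
    PSObject psValue = value as PSObject;
    return psValue != null ? psValue.BaseObject : value;
}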

 

Where all the magic happens (seriously!): ProcessRecordInternal

This is the real core of the cmdlet:

private void ProcessRecordInternal(Type cmdletType, TargetMethodAttribute tma, CmdletAttribute ca, Type targetType, object target, BindingFlags flags)
{
    //
    // Get candidate overloads.
    //
    var overloads = GetMethodOverloads(tma, ca, targetType, flags);

    //
    // Get the cmdlet method parameters.
    //
    PropertyInfo[] reqParams;
    PropertyInfo[] optParams;
    GetMethodParameters(cmdletType, flags == BindingFlags.Instance, out reqParams, out optParams);

    //
    // Find the right overload. First longest match is our preference.
    // Notice we don't figure out which parameters were specified, so there's no real notion of runtime overloading.
    //
    MethodInfo match = (from overload in overloads
                        orderby overload.Method.GetParameters().Length descending
                        where overload.Matches(reqParams, optParams)
                        select overload.Method).FirstOrDefault();
    if (match == null)
        throw new InvalidOperationException("No suitable overload found.");

    //
    // Invoke the method and return the result.
    //

    WriteObject(Invoke(match, target, reqParams, optParams));
}

Here we do some lightweight (as mentioned before) form of overload resolution. It's needed anyhow to determine a good match, but its behavior isn't really dynamic at runtime (meaning that omitting a certain parameter does not cause another method overload to be selected dynamically). In fact, method call cmdlets could do some preprocessing of the cmdlet metadata and keep it aside for subsequent calls in order to optimize for performance, but we won't go that far in this post.
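As a hint at what such preprocessing could look like, here's a minimal sketch (names are mine) that caches the resolved TargetMethodAttribute per cmdlet type, assuming a Dictionary<Type, TargetMethodAttribute> and relying on the fact that attribute metadata doesn't change after load:

private static readonly Dictionary<Type, TargetMethodAttribute> s_targetMethods = new Dictionary<Type, TargetMethodAttribute>();

private static TargetMethodAttribute GetTargetMethodAttributeCached(Type cmdletType)
{
    TargetMethodAttribute tma;
    lock (s_targetMethods)
    {
        if (!s_targetMethods.TryGetValue(cmdletType, out tma))
        {
            //
            // A null result (no attribute found) gets cached too.
            //
            tma = GetTargetMethodAttribute(cmdletType);
            s_targetMethods.Add(cmdletType, tma);
        }
    }
    return tma;
}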

 

Finding overloads

Finding overloads happens through reflection once more, and again we're going to use LINQ. We represent a method overload by an instance of MethodOverload (original, isn't it?) that captures the MethodInfo as well as an array of parameter types. This is merely a convenience.

private static IEnumerable<MethodOverload> GetMethodOverloads(TargetMethodAttribute tma, CmdletAttribute ca, Type type, BindingFlags flags)
{
    flags |= BindingFlags.Public;

    //
    // Find suitable overloads on the specified type with the specified candidate names.
    //
    return from methodName in GetMethodNames(tma, ca)
           join mi in type.GetMethods(flags) on methodName equals mi.Name
           select new MethodOverload() {
               Method = mi,
               Parameters = (from p in mi.GetParameters() select p.ParameterType).ToArray()
           };
}

private static IEnumerable<string> GetMethodNames(TargetMethodAttribute tma, CmdletAttribute ca)
{
    //
    // Hint specified?
    //
    if (tma != null && tma.MethodName != null)
        yield return tma.MethodName;

    //
    // Intelligent fallback defaults.
    //
    yield return ca.VerbName + ca.NounName;
    yield return ca.VerbName;
}

GetMethodNames produces an ordered list of candidate method names (one could implement a stricter policy, of course), and a join with the real methods available on the specified type takes care of finding the suitable overloads, which are projected as MethodOverload instances. The base definition of MethodOverload looks like this, although we'll extend it in a minute:

class MethodOverload
{
    public MethodInfo Method { get; set; }
    public Type[] Parameters { get; set; }

    ...
}

 

Determining method parameters from the cmdlet metadata

This task is again one we can use LINQ for:

private static void GetMethodParameters(Type cmdletType, bool hasThisParam, out PropertyInfo[] mandatory, out PropertyInfo[] optional)
{
    var res = from p in
                  (from prop in cmdletType.GetProperties()
                   let pa = (ParameterAttribute)prop.GetCustomAttributes(typeof(ParameterAttribute), false).SingleOrDefault()
                   where pa != null
                   select new { Property = prop, Attribute = pa })
              where !p.Attribute.ValueFromPipeline && p.Attribute.Position > (hasThisParam ? 0 : -1)
              orderby p.Attribute.Position
              select p;

    mandatory = res.TakeWhile(p => p.Attribute.Mandatory).Select(p => p.Property).ToArray();
    optional = res.SkipWhile(p => p.Attribute.Mandatory).Select(p => p.Property).ToArray();
}

In the inner query we project all of the cmdlet type's properties that carry a ParameterAttribute onto an anonymous type holding the PropertyInfo and the ParameterAttribute. In the outer query we filter out the pipeline-bound parameter (reserved as the "this" parameter for instance method calls, as explained above) and, in case there's such a parameter, anything on position 0 (currently the hasThisParam method argument is only set when we handle instance method calls, but there's another case where this could be useful... tip: new Orcas language features). Furthermore, we order by Position to get the ordered list of parameters that we'll feed into the underlying method based on the mapping scheme.

Finally, we distinguish between mandatory and optional parameters and piggyback on the characteristic that mandatory parameters should come before optional ones, at least when we consider positional parameters (something a real production-quality implementation should validate, also making sure no "pure named" parameters are present). To retrieve both parameter categories we can take advantage of the TakeWhile and SkipWhile LINQ extension methods.
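Such a validation could be as simple as asserting that nothing mandatory ended up in the optional bucket (which is what the SkipWhile call above would silently do if a mandatory parameter followed an optional one); a rough sketch, with a name of my own choosing:

private static void ValidateMethodParameters(PropertyInfo[] optional)
{
    foreach (PropertyInfo p in optional)
    {
        var pa = (ParameterAttribute)p.GetCustomAttributes(typeof(ParameterAttribute), false).Single();

        //
        // A mandatory parameter positioned after an optional one indicates invalid cmdlet metadata.
        //
        if (pa.Mandatory)
            throw new InvalidOperationException("Mandatory parameter " + p.Name + " appears after an optional one.");
    }
}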

 

The final logic before ...

Now that all the metadata extraction machinery is in place, we can focus on matching the right overload and calling the method. We saw the code before, but let's repeat it:

    //
    // Find the right overload. First longest match is our preference.
    // Notice we don't figure out which parameters were specified, so there's no real notion of runtime overloading.
    //
    MethodInfo match = (from overload in overloads
                        orderby overload.Method.GetParameters().Length descending
                        where overload.Matches(reqParams, optParams)
                        select overload.Method).FirstOrDefault();
    if (match == null)
        throw new InvalidOperationException("No suitable overload found.");

    //
    // Invoke the method and return the result.
    //

    WriteObject(Invoke(match, target, reqParams, optParams));

The matching logic is straightforward once more, as its core functionality relies on a method called Matches defined on the MethodOverload class:

internal bool Matches(PropertyInfo[] reqParams, PropertyInfo[] optParams)
{
    //
    // Should have at least as many parameters as the number of required parameters.
    //
    if (reqParams.Length > Parameters.Length)
        return false;

    //
    // Check required parameters. All need to match.
    //
    int i;
    for (i = 0; i < reqParams.Length; i++)
    {
        if (!MatchParameter(reqParams[i], Parameters[i]))
            return false;
    }

    //
    // Check optional parameters. If we run out of parameters before matching all optional parameters, that's fine. However, a mismatch causes termination.
    //
    int j;
    for (j = 0; j < optParams.Length && i < Parameters.Length; j++, i++)
    {
        if (!MatchParameter(optParams[j], Parameters[i]))
            return false;
    }

    //
    // Only valid if no parameters left.
    //
    return i >= Parameters.Length;
}

private static bool MatchParameter(PropertyInfo p, Type parameterType)
{
    //
    // Assignable is okay: the method's parameter type should be able to receive the property's value.
    //

    return parameterType.IsAssignableFrom(p.PropertyType);
}

This implementation takes in the sets of required and optional parameters and tries to match the required ones (which are - what's in a name - required) and as many of the optional ones as it can find. I decided to leave this machinery (which is essentially ported from another dynamic language project I'm doing) in, although such overload resolution isn't fully implemented in this sample. If you're interested in more details, feel free to drop me a mail.

Note: This is something that easily deserves lots of refinements (e.g. to allow parameters of convertible types to match, say assigning an int to a long parameter - a simplistic taste of this follows below), but let's stick with simplicity for the purpose of this post. Other refinements (that definitely go beyond the scope of this post) could handle translation of PS ScriptBlocks into anonymous methods (lambda expressions), which will become more or less possible with the new PSParser class in PowerShell 2.0.
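As promised, here's what relaxing MatchParameter could look like: accepting a couple of well-known implicit numeric widenings on top of plain assignability. A simplistic sketch (the name and the far-from-complete conversion table are mine, and it assumes the final Invoke call can coerce the value accordingly):

private static bool MatchParameterRelaxed(PropertyInfo p, Type parameterType)
{
    //
    // Plain assignability first.
    //
    if (parameterType.IsAssignableFrom(p.PropertyType))
        return true;

    //
    // Allow a few implicit numeric widenings, e.g. int -> long and int/float -> double.
    //
    Type t = p.PropertyType;
    if (t == typeof(int))
        return parameterType == typeof(long) || parameterType == typeof(double);
    if (t == typeof(float))
        return parameterType == typeof(double);

    return false;
}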

 

... we finally invoke the method

Simplicity at last:

private object Invoke(MethodInfo match, object target, PropertyInfo[] reqParams, PropertyInfo[] optParams)
{
    //
    // Get the parameters.
    //

    object[] parameters = (from p in reqParams.Concat(optParams).Take(match.GetParameters().Length)
                           select p.GetValue(this, null)).ToArray();

    //
    // Invoke the method.
    //
    WriteVerbose(match.ToString() + " on " + match.DeclaringType.ToString());
    return match.Invoke(target, parameters);
}

The last line does the real work, while the first statement prepares the call by grabbing all the required parameter property values, again using a simple LINQ query.

 

Action!

image

The cmdlets used in the sample are shown below:

[Cmdlet("Get", "Greeting"), StaticMethod(typeof(MyStuff))]
public class GetGreetingCmdlet : MethodCallCmdlet
{
}

[Cmdlet("Add", "Days")]
public class AddDaysCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public DateTime Input { get; set; }

    [Parameter(Mandatory = true, Position = 1)]
    public double NumberOfDays { get; set; }
}

[Cmdlet("Trim", "String")]
public class TrimStringCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public string Input { get; set; }
}

[Cmdlet("Replace", "String")]
public class ReplaceStringCmdlet : MethodCallCmdlet
{
    [Parameter(Mandatory = true, ValueFromPipeline = true)]
    public string Input { get; set; }

    [Parameter(Mandatory = true, Position = 1)]
    public string Old { get; set; }

    [Parameter(Mandatory = true, Position = 2)]
    public string New { get; set; }
}

The creation of these wrapper cmdlets is so mechanical you could even write a reflection-based tool for it (hint - see the sketch below!). Update: I've uploaded the sources for download over here. Keep in mind this is sample code, so treat it as such (no guarantees are made).
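To give an idea of what such a tool could look like, here's a rough sketch (entirely my own invention; overloads producing duplicate class names, generic methods and by-ref parameters are left as an exercise) that reflects over a type's public instance methods and emits the corresponding wrapper cmdlet source:

static class CmdletGenerator
{
    //
    // Emits C# source for method call cmdlets wrapping the public instance methods of a type.
    //
    public static string GenerateWrappers(Type type)
    {
        StringBuilder sb = new StringBuilder();

        foreach (MethodInfo method in type.GetMethods(BindingFlags.Public | BindingFlags.Instance))
        {
            //
            // Skip property getters/setters, operators, etc.
            //
            if (method.IsSpecialName)
                continue;

            sb.AppendLine("[Cmdlet(\"" + method.Name + "\", \"" + type.Name + "\")]");
            sb.AppendLine("public class " + method.Name + type.Name + "Cmdlet : MethodCallCmdlet");
            sb.AppendLine("{");
            sb.AppendLine("    [Parameter(Mandatory = true, ValueFromPipeline = true)]");
            sb.AppendLine("    public " + type.FullName + " Input { get; set; }");

            int position = 1;
            foreach (ParameterInfo parameter in method.GetParameters())
            {
                sb.AppendLine();
                sb.AppendLine("    [Parameter(Mandatory = true, Position = " + position++ + ")]");
                sb.AppendLine("    public " + parameter.ParameterType.FullName + " " + parameter.Name + " { get; set; }");
            }

            sb.AppendLine("}");
            sb.AppendLine();
        }

        return sb.ToString();
    }
}

Feeding it typeof(string) would spit out definitions much like the Trim-String and Replace-String cmdlets shown above.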

Enjoy!

