August 2006 - Posts

Introduction

One of the goals of workflows in general is to make "logic", "business processes", etc. more visible by having a graphical representation of them. At design time it's pretty easy to compose a workflow using the Visual Studio 2005 designer, but once designed, a workflow stays pretty static. So what about adapting or modifying a workflow at runtime? In this series of posts I'll outline various methodologies to modify a running workflow. For part one of this series, take a look over here.

Modification from the outside

In the previous part we looked at workflow modification from the inside. One of the problems with such an approach is that the workflow has to be prepared in some way or another to allow changes. That is, code must be present inside the workflow to make the changes. For that reason it might be more interesting in some scenarios to make the change from the outside. This is what we'll be looking at now.

Consider the workflow from the previous part again, but now with the first set of activities disabled as shown below. Of course you can simply ignore those, create a new empty workflow, and add the non-disabled activities. I just want to point out the powerful "comment out" feature in WF once more :-).

Again we stick with simple CodeActivity activities, this time with the following ExecuteCode definitions:

private void killMeFromOutsideActivity_ExecuteCode(object sender, EventArgs e)
{
   Console.WriteLine("I should have been dead by now!");
}

private void HappyEndActivity_ExecuteCode(object sender, EventArgs e)
{
   Console.WriteLine("The happy end :-)");
}

In order to change the workflow from the outside we need to reach a (what I tend to call) "safe point". To illustrate this, I'm showing you the use of a SuspendActivity. Over to the hosting code now:

using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
{
   AutoResetEvent waitHandle = new AutoResetEvent(false);
   workflowRuntime.WorkflowCompleted +=
      delegate(object sender, WorkflowCompletedEventArgs e) { waitHandle.Set(); };
   workflowRuntime.WorkflowTerminated +=
      delegate(object sender, WorkflowTerminatedEventArgs e)
      {
         Console.WriteLine(e.Exception.Message);
         waitHandle.Set();
      };

   workflowRuntime.WorkflowSuspended +=
      new EventHandler<WorkflowSuspendedEventArgs>(workflowRuntime_WorkflowSuspended);

   Dictionary<string, object> arguments = new Dictionary<string, object>();
   arguments.Add("AllowUpdates", true);

   WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(DynamicWf.Workflow1), arguments);
   instance.Start();

   waitHandle.WaitOne();
}

Most of the code is the same as in part 1. Recall that the AllowUpdates argument is used inside the workflow's DynamicUpdateCondition code condition to determine whether or not updates are allowed. You can just ignore this for now; in case you're starting from an empty workflow, ignore this piece of code altogether.

The most important piece of code however is the following:

   workflowRuntime.WorkflowSuspended += new EventHandler<WorkflowSuspendedEventArgs>(workflowRuntime_WorkflowSuspended);

with the following corresponding event handler:

static void workflowRuntime_WorkflowSuspended(object sender, WorkflowSuspendedEventArgs e)
{
   Console.WriteLine("I'm suspended");

   WorkflowInstance workflowInstance = e.WorkflowInstance;
   Activity wRoot = workflowInstance.GetWorkflowDefinition();

   WorkflowChanges changes = new WorkflowChanges(wRoot);
   changes.TransientWorkflow.Activities.Remove(changes.TransientWorkflow.Activities["KillMeFromOutsideActivity"]);

   try
   {
      workflowInstance.ApplyWorkflowChanges(changes);
   }
   catch (InvalidOperationException ex)
   {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine("No update allowed - " + ex.Message);
      Console.ResetColor();
   }

   Console.WriteLine("Let's resume");
   e.WorkflowInstance.Resume();
}

The code displayed above should be pretty self-explanatory. Again the WorkflowChanges class is at the center of the update logic. In order to obtain a reference to the running (correction: at this point suspended) workflow we can use the WorkflowSuspendedEventArgs argument's WorkflowInstance property.

Note: It's important to call ApplyWorkflowChanges on the workflow instance whose root activity (obtained through GetWorkflowDefinition) was passed to the WorkflowChanges constructor. I ran into some trouble when experimenting with this, and the code above is the only variant that works correctly.

The result should look like this:

Again, when making the workflow's DynamicUpdateCondition evaluate to false (see previous episode), as shown below, the update will fail:

   Dictionary<string, object> arguments = new Dictionary<string, object>();
   arguments.Add("AllowUpdates", false);

Happy WF-ing once again!


Introduction

Conditional compilation is one of the lesser-known powerful features available in .NET, and more specifically in the C#, VB.NET and J# compilers. In this post, I'd like to show you how to benefit from this feature, applied to C#.

An example

using System;
using System.Diagnostics;

class Program
{
   static void Main(string[] args)
   {
      DebugLog("Before loop");

      for (int i = 0; i < 10; i++)
         Console.WriteLine(i);

      DebugLog("After loop");
   }

   [Conditional("DEBUG")]
   private static void DebugLog(string s)
   {
      Console.WriteLine(s);
   }
}

First compile the program using one of the following options:

  1. In Visual Studio 2005 with the active configuration set to Debug.
  2. On the command-line using csc /define:DEBUG Program.cs. (Note: this is not the same as csc /debug+ Program.cs)
  3. Add #define DEBUG at the top of the program code.

When you execute the program, you'll see:

Before loop
0
1
2
3
4
5
6
7
8
9
After loop

In the IL code you'll see something like this:

IL_0000: nop
IL_0001: ldstr "Before loop"
IL_0006: call void Program::DebugLog(string)
IL_000b: nop
IL_000c: ldc.i4.0
IL_000d: stloc.0
IL_000e: br.s IL_001b
IL_0010: ldloc.0
IL_0011: call void [mscorlib]System.Console::WriteLine(int32)
IL_0016: nop
IL_0017: ldloc.0
IL_0018: ldc.i4.1
IL_0019: add
IL_001a: stloc.0
IL_001b: ldloc.0
IL_001c: ldc.i4.s 10
IL_001e: clt
IL_0020: stloc.1
IL_0021: ldloc.1
IL_0022: brtrue.s IL_0010
IL_0024: ldstr "After loop"
IL_0029: call void Program::DebugLog(string)
IL_002e: nop
IL_002f: ret

Nothing special going on despite the declaration of the [Conditional("DEBUG")] attribute. However, let's compile the same application now as a release build:

  1. In Visual Studio 2005 with the active configuration set to Release.
  2. On the command-line using csc Program.cs.

Now the output will be:

0
1
2
3
4
5
6
7
8
9

and the corresponding IL is:

IL_0000: nop
IL_0001: ldc.i4.0
IL_0002: stloc.0
IL_0003: br.s IL_0010
IL_0005: ldloc.0
IL_0006: call void [mscorlib]System.Console::WriteLine(int32)
IL_000b: nop
IL_000c: ldloc.0
IL_000d: ldc.i4.1
IL_000e: add
IL_000f: stloc.0
IL_0010: ldloc.0
IL_0011: ldc.i4.s 10
IL_0013: clt
IL_0015: stloc.1
IL_0016: ldloc.1
IL_0017: brtrue.s IL_0005
IL_0019: ret

So, not a single call to the DebugLog method was emitted in Main's code. Powerful, isn't it? The coolest thing about all this is that you don't have to decorate your code with a bunch of #if directives, like this:

   static void Main(string[] args)
   {
#if DEBUG
      DebugLog("Before loop");
#endif

      for (int i = 0; i < 10; i++)
         Console.WriteLine(i);

#if DEBUG
      DebugLog("After loop");
#endif
   }

One advantage of the latter approach is that you can also wrap the DebugLog method itself in #if DEBUG ... #endif. Using conditional compilation, the DebugLog method gets compiled no matter what.
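To make that concrete, here's a minimal sketch of the all-#if variant, where both the method and each of its call sites are wrapped (the call sites must be wrapped too, since the method doesn't exist at all in non-DEBUG builds):

```csharp
#define DEBUG // or compile with csc /define:DEBUG Program.cs

using System;

class Program
{
   static void Main(string[] args)
   {
#if DEBUG
      DebugLog("Debug build");
#endif
      Console.WriteLine("Always printed");
   }

#if DEBUG
   // Unlike with [Conditional("DEBUG")], the method itself is absent
   // from the assembly in non-DEBUG builds.
   private static void DebugLog(string s)
   {
      Console.WriteLine(s);
   }
#endif
}
```

Compare this with the ConditionalAttribute version, where the method body is always compiled and only the calls to it are dropped.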

A few remarks to conclude:

  • The ConditionalAttribute can only be applied to methods and to attribute classes (that is: classes that derive from System.Attribute). Thus property and event accessors can't be decorated using this attribute.
  • The Debug and Trace classes of the .NET Framework use the ConditionalAttribute. So you don't have to worry about any performance hit whatsoever when you call various methods of these classes as a debugging aid. These calls just won't make it into the release build (or better: non-debug builds), e.g.:

    [Conditional("DEBUG")]
    public static void Assert(bool condition)

  • The C++ compiler doesn't support the ConditionalAttribute; you'll have to rely on #if conditionals to include/exclude debugging code.
  • The ConditionalAttribute allows multiple decorations per method (or attribute class):

    [AttributeUsageAttribute(AttributeTargets.Class|AttributeTargets.Method, AllowMultiple=true)]

    Therefore you can do things such as (cf. section 17.4.2.2 of the C# specification):

       [Conditional("ALPHA")]
       [Conditional("BETA")]
       private static void DebugLog(string s)
  • Attribute decorations are emitted to the metadata of a class. Because of this, you can get ConditionalAttribute working across languages and assemblies. An example of a class definition in C#:

    public static class Helper
    {
       [Conditional("DEBUG")]
       public static void DebugLog(string s)
       {
          Console.WriteLine(s);
       }
    }


    and its usage in VB:

    Class Program
       Shared Sub Main()
          Helper.DebugLog("Before loop")

          Dim i As Integer
          For i = 0 To 9
             Console.WriteLine(i)
          Next

          Helper.DebugLog("After loop")
       End Sub
    End Class

    This will yield the same result as the C# only example, because the VB compiler can find out (using the metadata of class Helper) that calls to DebugLog should only be made when a DEBUG build is made.
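The first remark above also mentions attribute classes. As a sketch of what that buys you (TraceInfoAttribute is a hypothetical attribute of my own making, not a framework type): when DEBUG is not defined at compile time, the compiler simply doesn't emit the [TraceInfo(...)] decorations into the metadata.

```csharp
using System;
using System.Diagnostics;

// A conditional attribute class: usages of it only survive in DEBUG builds.
[AttributeUsage(AttributeTargets.All)]
[Conditional("DEBUG")]
public sealed class TraceInfoAttribute : Attribute
{
   private string author;

   public TraceInfoAttribute(string author)
   {
      this.author = author;
   }

   public string Author
   {
      get { return author; }
   }
}

[TraceInfo("Bart")]
class Worker
{
   // In a release build, typeof(Worker).GetCustomAttributes(true)
   // won't contain a TraceInfoAttribute instance.
}
```

So conditional attribute classes let you attach debug-only annotations without polluting release metadata.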

References

Enjoy!


The problem statement

One of my friends asked me how to implement some kind of "process monitor" that makes sure a (system) process stays alive. In this sample I'm showing you how to implement such a thing in C#. An alternative scenario would be monitoring a Windows Service, but the SCM (Service Control Manager) in Windows can take care of that (see the Recovery tab of the service's properties).

The code

using System;
using System.Diagnostics;

class Demo
{
   public static void Main()
   {
      Launch();
      Console.ReadLine();
   }

   private static void Launch()
   {
      ProcessStartInfo psi = new ProcessStartInfo();
      psi.FileName = "notepad.exe";

      Process p = new Process();
      p.StartInfo = psi;
      p.EnableRaisingEvents = true;
      p.Exited += LaunchAgain; // C# 2.0 syntax - alternative: p.Exited += new EventHandler(LaunchAgain);

      p.Start();
   }

   private static void LaunchAgain(object o, EventArgs e)
   {
      Console.WriteLine("Process was killed; launching again");
      Launch();
   }
}

!!! Warning !!!

  1. Don't use this to annoy people with a non-disappearing program.
  2. Of course, the monitoring process itself can still crash (or be terminated). This is a chicken-and-egg situation: obviously you can't create an endless chain of monitors.
  3. Security matters! A process that crashes can be an indication of a security problem or risk (e.g. a compromised service). In the case of unmanaged code, a buffer overrun (maybe as part of an attack) could be the reason the process stops. So, you shouldn't restart the process forever. It's better to have a maximum number of automatic process restarts, just like the SCM only permits three service restarts.
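The third warning can be addressed with a simple restart counter. A minimal sketch along the lines of the code above (the limit of 3 mimics the SCM default; the class and constant names are my own):

```csharp
using System;
using System.Diagnostics;

class BoundedMonitor
{
   private const int MaxRestarts = 3; // mimic the SCM's three-restart policy
   private static int restarts = 0;

   public static void Main()
   {
      Launch();
      Console.ReadLine();
   }

   private static void Launch()
   {
      Process p = new Process();
      p.StartInfo.FileName = "notepad.exe";
      p.EnableRaisingEvents = true;
      p.Exited += delegate(object o, EventArgs e)
      {
         if (++restarts <= MaxRestarts)
         {
            Console.WriteLine("Process exited; restart #" + restarts);
            Launch();
         }
         else
         {
            // Repeated crashes may indicate an attack or a broken service;
            // stop restarting and let an administrator investigate.
            Console.WriteLine("Restart limit reached; giving up.");
         }
      };
      p.Start();
   }
}
```

In a real monitor you'd probably also reset the counter after the process has stayed alive for a while, similar to the SCM's "reset fail count" setting.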

Keep it alive and have fun!


Passionate about Visual Studio 2005? Then this should be something for you. On Friday 8 September, Steven Wilssens will deliver a full-day training on Visual Studio 2005 Team System for the Visual Studio User Group Belgium (VISUG). More information can be found on www.visug.be. Unfortunately I'm unable to attend, but I hope to see you at some other Belgian event soon.


Introduction

For quite a while now, I've been running Windows Vista build 5472 as my daily operating system. Without big troubles, so I'm a happy man. However, I couldn't resist downloading build 5536 a couple of days ago to give it a try. I'll install it on my main machine on a separate hard disk later on (especially to benefit from my graphics hardware), but for now I decided to run it in a virtualized environment. Here's how.

The situation

  • Windows Vista build 5472 as primary OS on my laptop (Dell Inspiron 9400), running various betas such as the 2007 Microsoft Office System.
  • Goal: running Windows Vista build 5536 on top of that in Virtual Server 2005.

The procedure

  1. Go to http://connect.microsoft.com and download the latest build of Virtual Server 2005 R2 SP1.
  2. Make sure you have installed IIS7 (see Windows Components) on the machine and install Virtual Server 2005 R2 SP1.
  3. After the installation, opening the Virtual Server 2005 Administration Website will fail:



    To solve this, go to the start menu, find a shortcut to Internet Explorer under All Programs, right-click and choose "Run as administrator". The website should appear fine then.
  4. Create a new virtual machine but allocate enough memory (in my case I've allocated 1GB of RAM).
  5. Put the burned ISO (vista_5536.16385.060821-1900_vista_rc1_x86fre_client-LR1CFRE_EN_DVD.iso) in the DVD drive of the system. Note: I burned the ISO to a DVD-RW disc because I want to run the setup on other machines too. I didn't try to mount the ISO file (Virtual Server 2005 has, or had, a limitation of a maximum ISO file size of 2.2 GB). You might succeed in mounting the ISO using some mount tool and using that "virtual drive" in Virtual Server to start setup from.
  6. Boot the virtual machine and run the installation. This will take quite some time to complete. Be patient.
  7. When the installation has completed, Vista will run very slowly inside the virtual machine. This is not due to the beta but to the lack of Virtual Machine Additions. The solution: go back to http://connect.microsoft.com, again to the Virtual Server 2005 R2 SP1 beta program, and download the Virtual Machine Additions for Vista Beta 2. The site mentions compatibility up to build 5472, but it seems to work very well on build 5536 too:

Have fun!


Introduction

One of the goals of workflows in general is to make "logic", "business processes", etc. more visible by having a graphical representation of them. At design time it's pretty easy to compose a workflow using the Visual Studio 2005 designer, but once designed, a workflow stays pretty static. So what about adapting or modifying a workflow at runtime? In this series of posts I'll outline various methodologies to modify a running workflow.

Modification from the inside

One of the options to change a workflow is doing it from the inside. Basically this means that the "basic" logic of the workflow foresees that something might have to be changed, and that the workflow modifies itself (from the inside) when certain criteria are met. Assume the following workflow, but ignore the disabled activities (marked with a green background) - we'll examine these in another blog post because they illustrate modifying a workflow from the outside:

We have - for illustration purposes and to keep things simple - four CodeActivity activities and one DynamicSequenceActivity. Under normal (non-modification) circumstances the system would execute these one after another in sequential order. Our goal is to add and remove activities in this workflow.

Modification 1 - Adding another activity

Assume we want to add another activity dynamically in between the SelfModificationActivity and the AnotherModificationActivity (you could choose another position too however). An ideal place to do that is inside the SelfModificationActivity, in a real situation based on some decision logic. In our case, we're just going to make the change under any circumstance. Take a look at the ExecuteCode handler logic for the SelfModificationActivity:

private void SelfModificationActivity_ExecuteCode(object sender, EventArgs e)
{
   Console.WriteLine("This is the SelfModificationActivity speaking");

   WorkflowChanges wc = new WorkflowChanges(this);
   MyActivity ma = new MyActivity("With greetings from SelfModificationActivity");
   wc.TransientWorkflow.Activities.Insert(1, ma);

   try
   {
      this.ApplyWorkflowChanges(wc);
   }
   catch (InvalidOperationException ex)
   {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine("No update allowed - " + ex.Message);
      Console.ResetColor();
   }
}

The most important class to notice in here is the WorkflowChanges class. As the API describes this class is the wrapper around a set of proposed changes to a workflow. Therefore applying the changes using the ApplyWorkflowChanges method can throw an exception - something I'll illustrate later on. In this piece of code I'm just adding another activity on position 1 of the workflow (that is, immediately after the SelfModificationActivity). This index-based modification might not be the most ideal way of working, but it's a nice mind-setting example to start with.

The code displayed above inserts an activity of type MyActivity. What's going on in there isn't very important for the sake of the demo, but here is a possible definition for our demo:

public partial class MyActivity : SequenceActivity
{
   private string message;

   public MyActivity()
   {
      InitializeComponent();
   }

   public MyActivity(string message) : this()
   {
      this.message = message;
   }

   private void MainActivity_ExecuteCode(object sender, EventArgs e)
   {
      Console.WriteLine("This is the MyActivity speaking - " + message);
   }
}

Modification 2 - Deleting an activity

Deleting an activity can be an interesting option to skip certain activities under circumstances where the logic isn't expressed in the workflow itself (you might consider an if-else structure in the workflow to get a similar, designer-visible effect). Nevertheless, let's assume you want to remove an activity from the running workflow instance at runtime based on some conditions (of which the logic might be loaded at runtime, e.g. using reflection). More specifically, I want to kill the DeadManWalkingActivity:

private void DeadManWalkingActivity_ExecuteCode(object sender, EventArgs e)
{
   Console.WriteLine("If you can see this, you are God :o");
}

The code to remove this activity will be added to the AnotherModificationActivity's ExecuteCode handler:

private void AnotherModificationActivity_ExecuteCode(object sender, EventArgs e)
{
   Console.WriteLine("Let's kill the dead man");

   WorkflowChanges wc = new WorkflowChanges(this);
   Activity deadman = wc.TransientWorkflow.GetActivityByName("DeadManWalkingActivity");
   wc.TransientWorkflow.Activities.Remove(deadman);

   try
   {
      this.ApplyWorkflowChanges(wc);
   }
   catch (InvalidOperationException ex)
   {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine("No update allowed - " + ex.Message);
      Console.ResetColor();
   }
}

Again, the general approach is the same: use a WorkflowChanges object, manipulate the TransientWorkflow's Activities collection (in this case by calling Remove) and call ApplyWorkflowChanges, passing in the WorkflowChanges object with the proposed changes. In this piece of code, however, the activity to be removed is retrieved by name using GetActivityByName, not by some index number.

Modification 3 - Using the SequenceActivity

A third possibility is to provide a placeholder into which activities will be loaded dynamically at runtime. Instead of filling out the "Drop activities here" area at design time, we'll add activities there at runtime. For demo purposes, the code to do this will be added to the DynamicSequenceActivity CodeActivity:

private void DynamicSequenceActivity_ExecuteCode(object sender, EventArgs e)
{
   if (dynamicActivityType == null)
      return;

   Type t = Type.GetType(dynamicActivityType);

   WorkflowChanges wc = new WorkflowChanges(this);
   Activity a = t.Assembly.CreateInstance(t.FullName) as Activity;
   ((SequenceActivity)wc.TransientWorkflow.GetActivityByName("DynamicSequence")).Activities.Add(a);

   try
   {
      this.ApplyWorkflowChanges(wc);
   }
   catch (InvalidOperationException ex)
   {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine("No update allowed - " + ex.Message);
      Console.ResetColor();
   }
}

Instead of just statically adding some activity, we're using reflection in the demo above to load an activity from the type specified as "dynamicActivityType". Again for demo purposes, this variable is just a simple property in the workflow class:

private string dynamicActivityType;

public string DynamicActivityType
{
   get { return dynamicActivityType; }
   set { dynamicActivityType = value; }
}

This property is set when starting the workflow (e.g. upon calling a webservice method) but one can imagine various sources to get this property value from (e.g. a database). To illustrate setting this property, look at the following piece of demo code:

static void Main(string[] args)
{
   using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
   {
      AutoResetEvent waitHandle = new AutoResetEvent(false);
      workflowRuntime.WorkflowCompleted +=
         delegate(object sender, WorkflowCompletedEventArgs e) { waitHandle.Set(); };
      workflowRuntime.WorkflowTerminated +=
         delegate(object sender, WorkflowTerminatedEventArgs e)
         {
            Console.WriteLine(e.Exception.Message);
            waitHandle.Set();
         };

      Dictionary<string, object> arguments = new Dictionary<string, object>();
      arguments.Add("DynamicActivityType", "DynamicWf.DynamicallyLoadedActivity, DynamicWf");

      WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(DynamicWf.Workflow1), arguments);
      instance.Start();

      waitHandle.WaitOne();
   }
}

In this case, I'm just referring to "DynamicWf.DynamicallyLoadedActivity, DynamicWf" in the same assembly, but you can make it much more flexible of course. You can come up with some "DynamicallyLoadedActivity" yourself; just define a custom activity of your choice. An example is (how original again :$) a custom activity wrapping a single CodeActivity, with the CodeActivity's ExecuteCode set as follows:

private void codeActivity1_ExecuteCode(object sender, EventArgs e)
{
   Console.ForegroundColor = ConsoleColor.Green;
   Console.WriteLine("Greetings from a dynamic friend!");
   Console.ResetColor();
}

The result

Executing the workflow constructed above (ignore the disabled activities for a while) yields the following result:

This is exactly the result we expected: the MyActivity was inserted at the right position, the "dead man activity" wasn't executed, and another activity (DynamicallyLoadedActivity) was - as the name implies - loaded dynamically using reflection.

Changes allowed?

You might be concerned about the fact that a workflow seems to allow dynamic updates at all times, especially when we'll be looking (in a next post) at modifications from the outside. The good news is that there is a way for a workflow to decide on whether it allows updates to be made or not. This is done by setting the DynamicUpdateCondition property of the defined workflow. You can use a "Declarative Rule Condition" or a "Code Condition". Let's choose the latter option for now:

The corresponding code looks as follows:

private void CanBeUpdated(object sender, ConditionalEventArgs e)
{
   e.Result = allowUpdates;
}

Making the decision whether updates are allowed or not can be a complex piece of code of course, but let's stick to simplicity again and just use a boolean property for this purpose:

private bool allowUpdates;

public bool AllowUpdates
{
   get { return allowUpdates; }
   set { allowUpdates = value; }
}

The decision is then communicated back using the ConditionalEventArgs's Result property. To illustrate the result in case of dynamic update denial, consider the following host code that sets the AllowUpdates property at startup of the workflow instance:

static void Main(string[] args)
{
   using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
   {
      AutoResetEvent waitHandle = new AutoResetEvent(false);
      workflowRuntime.WorkflowCompleted +=
         delegate(object sender, WorkflowCompletedEventArgs e) { waitHandle.Set(); };
      workflowRuntime.WorkflowTerminated +=
         delegate(object sender, WorkflowTerminatedEventArgs e)
         {
            Console.WriteLine(e.Exception.Message);
            waitHandle.Set();
         };

      Dictionary<string, object> arguments = new Dictionary<string, object>();
      arguments.Add("AllowUpdates", false);
      arguments.Add("DynamicActivityType", "DynamicWf.DynamicallyLoadedActivity, DynamicWf");

      WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(DynamicWf.Workflow1), arguments);
      instance.Start();

      waitHandle.WaitOne();
   }
}

When executing the code now, you'll see the following:

As you can see, every time the ApplyWorkflowChanges method is called, the dynamic update condition is evaluated. In case it evaluates to false, an InvalidOperationException is thrown, which gets caught by our update code that anticipates this possibility:

   try
   {
      this.ApplyWorkflowChanges(wc);
   }
   catch (InvalidOperationException ex)
   {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine("No update allowed - " + ex.Message);
      Console.ResetColor();
   }

Happy WF-ing!


The story

In the Beta 2 days of Windows Vista, I decided to give BitLocker Drive Encryption a try. It turned out to be pretty straightforward to turn this feature on (using a USB key for key storage, as my laptop lacks a TPM) by just going to Control Panel, Security, and the BitLocker Drive Encryption "snap-in":

A few weeks later, however, I found myself cleaning my whole hard disk: kicking out the Windows XP installation that was still there on another partition (and which was barely booted after my Vista Beta 2 installation) and installing build 5472 (which I'm still running while posting this blog entry). Switching on BitLocker wasn't so easy this time, however; Vista kept complaining about my hard disk partitioning.

So what's the problem? On my beta 2 installation I had a separate (unencrypted) partition with Windows XP and another one with Windows Vista. During installation, the (new) boot loader ended up on the XP partition. When turning on BitLocker, the entire Vista partition is encrypted and the bootloader is able to detect that booting Vista requires the BitLocker key to be loaded (in my case from USB as there is no TPM in the machine to get the key from).

However, on my 5472 installation, I didn't create such a partition and allocated the entire disk for Vista. So, there was no (unencrypted) place left on the harddisk to put the boot loader in and BitLocker refused to work.

Installing Vista with BitLocker in mind

Check out the following page for more information: http://www.microsoft.com/technet/windowsvista/library/c61f2a12-8ae6-4957-b031-97b4d762cf31.mspx#BKMK_S1. It guides you through the diskpart work you have to do prior to setup in order to get BitLocker to work properly. Notice that the Windows Vista setup is fully Windows-based (thanks to Windows PE), and things such as recovery are now fully GUI-based. Vista brings clarity, even to the setup :-). In short, this is what you should do:

  • Make one primary partition for the Vista installation and assign it drive letter C
  • Shrink that partition by 1.5 GB (I wonder why it needs to be that much)
  • Make a second primary partition on the 1.5 GB of free space and assign it drive letter S
  • Format both partitions as NTFS
  • Install Vista on C
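The bullet points above correspond roughly to a diskpart session like the following (a sketch based on the linked TechNet procedure, run from the Vista setup command prompt; adjust the disk number and sizes to your situation):

```
diskpart
select disk 0
clean
create partition primary
assign letter=C
shrink minimum=1500
create partition primary
assign letter=S
active
exit
format C: /y /q /fs:NTFS
format S: /y /q /fs:NTFS
```

After that, run setup and point the installation at the C: partition; the small S: partition stays unencrypted and will hold the boot loader.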

Turning on BitLocker should now be as easy as clicking through a few dialogs and waiting for disk encryption to complete (in the meantime you can just continue to work).

Check out the BitLocker team blog on http://blogs.technet.com/bitlocker/ too. There is some very good news in there on the field of this partitioning need. It appears the team is working on a (re-)partitioning tool to make the system BitLocker ready after installation. Fingers crossed to see the result in a later build...

You might wonder what goes on the S: partition. The answer is the boot loader, which is completely revamped compared to Windows NT <= 5.2. No boot.ini anymore. This is what my S drive looks like:

S:\>dir /a /S
 Volume in drive S has no label.
 Volume Serial Number is 78B8-4F3A

 Directory of S:\

26/07/2006  01:17    <DIR>          Boot
14/07/2006  08:40           432.696 bootmgr
26/07/2006  01:17             8.192 BOOTSECT.BAK
               2 File(s)        440.888 bytes

 Directory of S:\Boot

26/07/2006  01:17    <DIR>          .
26/07/2006  01:17    <DIR>          ..
27/08/2006  23:36            24.576 BCD
27/08/2006  23:36            21.504 BCD.LOG
26/07/2006  01:17                 0 BCD.LOG1
26/07/2006  01:17                 0 BCD.LOG2
14/07/2006  15:25             1.024 bootfix.bin
26/07/2006  01:17            65.536 bootstat.dat
26/07/2006  01:17    <DIR>          en-US
14/07/2006  08:22           219.648 fixfat.exe
14/07/2006  08:22           231.936 fixntfs.exe
26/07/2006  01:17    <DIR>          Fonts
14/07/2006  08:37           381.512 memtest.exe
               9 File(s)        945.736 bytes

 Directory of S:\Boot\en-US

26/07/2006  01:17    <DIR>          .
26/07/2006  01:17    <DIR>          ..
14/07/2006  15:25            61.440 bootmgr.exe.mui
14/07/2006  15:26            35.840 memtest.exe.mui
               2 File(s)         97.280 bytes

 Directory of S:\Boot\Fonts

26/07/2006  01:17    <DIR>          .
26/07/2006  01:17    <DIR>          ..
06/07/2006  17:16         3.694.184 chs_boot.ttf
06/07/2006  17:16         3.876.932 cht_boot.ttf
06/07/2006  17:16         1.984.144 jpn_boot.ttf
06/07/2006  17:16         2.371.272 kor_boot.ttf
06/07/2006  17:16            47.556 wgl4_boot.ttf
               5 File(s)     11.974.088 bytes

     Total Files Listed:
              19 File(s)     13.458.233 bytes
              18 Dir(s)   1.522.487.296 bytes free

A few interesting things are memtest.exe, which can test your RAM for problems (it used to be a Microsoft Online Crash Analysis tool; see http://oca.microsoft.com/en/windiag.asp for a free download), the fixntfs.exe program (what's in a name?) and the directory structure as a whole. This whole thing listens to the name "Boot Configuration Data Store" or BCD store. More information on the BCD and the bcdedit tool that comes with Vista (as a replacement for the boot.ini-related recovery console tools of the past) can be found on http://www.microsoft.com/technet/windowsvista/library/85cd5efe-c349-427c-b035-c2719d4af778.mspx.
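As a quick taste of bcdedit, run an elevated command prompt and try the following (a sketch; the entries and the description text will of course differ per machine):

```
rem List all entries in the BCD store
bcdedit /enum all

rem Change the friendly name of the currently running OS entry
bcdedit /set {current} description "Windows Vista build 5472"
```

This replaces hand-editing boot.ini, which no longer exists on Vista.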

On to Windows Vista RC1. Last week I downloaded build 5536, which is still pre-RC1 and which I intend to install on my second machine. Once the final RC1 build hits the road, it will become my day-to-day OS on this machine.

Have fun!


Windows Workflow Foundation (WF) is one of the key pillars of the .NET Framework 3.0 (formerly known as WinFX), next to Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF) and Windows CardSpace (WCS). If you haven't done so, download the July CTP now.

A personal opportunity

Now, why am I telling you this? Historically, I've been avoiding BizTalk-related stuff just to be able to concentrate on the basic stuff I'm usually spending time with, such as CLR, C# language features, the BCL, SQL Server, web services, etc. Learning to work with another server product like BizTalk implies yet another learning curve and time was rather limited. But enough boring excuses.

Workflow is more widely accepted than a few years ago and is growing up to become (if it isn't already) a mainstream paradigm in software development. Combine this with the opportunity to write a university thesis on workflow (at UGent) and guess what Bart said to himself a couple of months ago? Indeed, time to enter the world of Windows Workflow Foundation. The research subject is entitled "Dynamic and Generic Workflows with WF", quite a broad definition with a lot of freedom. One of the things that will be researched is how to adapt workflows dynamically under various (stress) conditions. So, stay tuned for additional information on WF-related experiments.

Reading resources

In the meantime I recommend the following book, which was handed out at the PDC 05 last year. It's already quite outdated due to "CTP madness" (well, I should say "evolution", we're talking about technologies which are still under development after all, although the end is near, very near) but it's the only book I know of so far that's publicly available:

Other books that will appear in the next couple of months include:

Hello, Workflow!

Because I want to stress the "getting started" nature of this blog post, let's do a trivial thing. To all WF newbies: welcome to your first WF exposure.

Before you read on, make sure to have downloaded and installed the .NET Framework 3.0, the Windows SDK and the Visual Studio 2005 Extensions for WF. More information can be found on http://msdn.microsoft.com/windowsvista/downloads/products/getthebeta/default.aspx.

Step 1 - A new workflow project

Open Visual Studio 2005 and create a new project. Choose a Sequential Workflow Console Application:

Let's call it HelloWorkflow to illustrate how big our imagination really is :$. In the Solution Explorer, delete Workflow1.cs. Although this is not necessary, it's interesting to create the workflow in another way, using "code separation" with XOML (which, according to Wikipedia, stands for eXtensible Object Markup Language).

Step 2 - Add a workflow with code separation

Add a new item to the project, choose Sequential Workflow (with code separation) and call it HelloWorkflow. As you can see, it has the extension .xoml:

Step 3 - Defining the workflow

Workflows consist of activities that are executed in a well-defined order by the workflow runtime engine. These activities can be found in the Visual Studio 2005 toolbox, but it's also possible to create your own activities e.g. by composition (just as you can create a new Windows Forms control by defining a User Control consisting of other "orchestrated" controls). The toolbox looks as follows:

For the sake of the demo, drag and drop a Code(Activity) to the "Drop Activities to create a Sequential Workflow" part in the designer. Using the properties grid, change the name of the new activity from codeActivity1 to sayHello. The result should look like this:

The red exclamation mark tells you there's still something wrong with the activity's configuration. More specifically, the designer tells you "Property 'ExecuteCode' is not set.". ExecuteCode is an event handler, which can be created by double-clicking the code activity. (Note: for other activities too, the red exclamation mark will guide you to the properties that still need to be set.) The code for our activity consists of a simple Console.WriteLine call:

namespace HelloWorkflow
{
   public partial class HelloWorkflow : SequentialWorkflowActivity
   {
      private void sayHello_ExecuteCode(object sender, EventArgs e)
      {
         Console.WriteLine("Hello, Workflow!");
      }
   }
}

Notice the use of a partial class. The other part of the HelloWorkflow definition lives in the .xoml file. Basically XOML (and XAML too) is just some chunk of XML describing a series of object instantiations, property setting stuff, nesting, etc. Tip: right-click the HelloWorkflow.xoml file in the Solution Explorer, choose Open With... and select the XML Editor:

<SequentialWorkflowActivity x:Class="HelloWorkflow.HelloWorkflow" x:Name="HelloWorkflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow">
   <CodeActivity x:Name="sayHello" ExecuteCode="sayHello_ExecuteCode" />
</SequentialWorkflowActivity>

Basically this code comes down to:

namespace HelloWorkflow
{
   public class HelloWorkflow : SequentialWorkflowActivity
   {
      public HelloWorkflow()
      {
         CodeActivity sayHello = new CodeActivity();
         sayHello.ExecuteCode += new EventHandler(this.sayHello_ExecuteCode);

         this.Activities.Add(sayHello);
      }
   }
}

Step 4 - Hosting the workflow engine

In order to execute a workflow, one needs to host the workflow engine. No worries, though: Visual Studio 2005 has generated the hosting code for us in Program.cs. It needs a little modification, however, because we removed Workflow1.cs and replaced it with HelloWorkflow: the type passed to CreateWorkflow becomes HelloWorkflow. Here's the modified Main code:

static void Main(string[] args)
{
   using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
   {
      AutoResetEvent waitHandle = new AutoResetEvent(false);
      workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
      {
         waitHandle.Set();
      };
      workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
      {
         Console.WriteLine(e.Exception.Message);
         waitHandle.Set();
      };

      WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(HelloWorkflow));
      instance.Start();

      waitHandle.WaitOne();
   }
}

This code initializes the WorkflowRuntime, creates a workflow instance through the CreateWorkflow method and starts it with Start. The anonymous method plumbing keeps the console application alive until the workflow has completed and also reports the exception that terminated the workflow, if any.

Step 5 - Run run run

Time to hit CTRL-F5 and to see Windows Workflow Foundation come alive:

What's coming up next?

In the next WF episode I'll tell you more about making dynamic changes to a running workflow. Other topics that will be covered include workflow persistence (dehydration and rehydration of a workflow), combining workflow with web services and combining WCF with WF.

Stay tuned!


The problem statement

An application is running as a service with the (fictional) identity BACH\svcuser and hosts a .NET Remoting type which is published in SingleCall mode. Users of the client application are logged on to the same domain and are authenticated using their own account (e.g. BACH\bart). Calls on the .NET Remoting service should execute under the client user's identity, not the service's identity.

A solution layout

This post is the ideal opportunity to look at a good solution layout for a .NET Remoting based solution using a "service interface" shared by the server and the client. To establish this layout, create three projects:

  • A class library project, called ServiceType, which contains an interface (IDemoService) describing the service (see below).
  • A console application, called Server, acting as the .NET Remoting host for the service that implements the service interface (see below). Add a reference to the ServiceType project and to System.Runtime.Remoting.dll.
  • A console application, called Client, acting as a client for the .NET Remoting service (see below). Add a reference to the ServiceType project and to System.Runtime.Remoting.dll.

The solution should now look as follows:

Service type

The demo service type interface definition is straightforward:

using System;

namespace
ServiceType
{
   public interface
IDemoService
   {
      string
GetIdentity();
   }
}

Server implementation

On to the server implementation, which consists of a console application entry point (for the sake of the demo; in reality I'm using a Windows Service) and a service type:

using System;
using ServiceType;
using System.Security.Principal;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting;
using System.Threading;

namespace Server
{
   class Program
   {
      static void Main(string[] args)
      {
         TcpChannel channel = new TcpChannel(2468);
         ChannelServices.RegisterChannel(channel, true);

         RemotingConfiguration.RegisterWellKnownServiceType(typeof(DemoService), "demoservice", WellKnownObjectMode.SingleCall);

         Console.WriteLine("Service running as {0}...", WindowsIdentity.GetCurrent().Name);
         Console.ReadLine();
      }
   }

   public class DemoService : MarshalByRefObject, IDemoService
   {
      public string GetIdentity()
      {
         WindowsIdentity identity = Thread.CurrentPrincipal.Identity as WindowsIdentity;
         if (identity != null && identity.IsAuthenticated)
            return identity.Name;
         else
            return null;
      }
   }
}

The DemoService class is the service type. It therefore derives from MarshalByRefObject and implements the IDemoService interface defined in the separate ServiceType project. The GetIdentity method implementation is pretty straightforward.

The Program class contains the entry point of the application and registers the service on tcp://localhost:2468/demoservice with the SingleCall activation mode (i.e. an object of type DemoService gets created for each method call and is destroyed afterwards, comparable to a stateless web service).
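The same registration can also be driven by a configuration file instead of code. A minimal sketch, assuming the file is called Server.exe.config (the attribute values simply mirror the programmatic setup above; secure="true" requests a secure channel):

```xml
<configuration>
   <system.runtime.remoting>
      <application>
         <service>
            <wellknown mode="SingleCall"
                       type="Server.DemoService, Server"
                       objectUri="demoservice" />
         </service>
         <channels>
            <channel ref="tcp" port="2468" secure="true" />
         </channels>
      </application>
   </system.runtime.remoting>
</configuration>
```

The file would then be loaded in Main with RemotingConfiguration.Configure("Server.exe.config", true), where the second parameter again is the ensureSecurity flag.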

Client implementation

The client's implementation is also fairly easy for .NET Remoting fans:

using System;
using ServiceType;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Remoting.Channels;
using System.Security.Principal;

namespace Client
{
   class Program
   {
      static void Main(string[] args)
      {
         TcpChannel channel = new TcpChannel();
         ChannelServices.RegisterChannel(channel, true);

         IDemoService svc = (IDemoService) Activator.GetObject(typeof(IDemoService), "tcp://localhost:2468/demoservice");

         Console.WriteLine("Client running as {0}...", WindowsIdentity.GetCurrent().Name);
         Console.WriteLine("Thread identity on the server: {0}", svc.GetIdentity());
         Console.ReadLine();
      }
   }
}

Testing it

Start the server executable (server.exe) in one console window:

Start the client executable (client.exe) in another console window:

Nothing special so far. Now run the client executable as another user using the runas command: runas /user:test client.exe:

That's exactly what we desired.

The trick?

The trick is simple but a bit underdocumented. First of all, since .NET 2.0 the TcpChannel (as well as the HttpChannel) supports SSPI as mentioned on MSDN. Furthermore there is a new RegisterChannel overload on the ChannelServices class that takes a boolean second parameter called "ensureSecurity". By turning this on (on both client and server) SSPI seems to work fine across the wire. Notice the one-parameter RegisterChannel method is marked as deprecated as of .NET 2.0. The documentation is rather simplistic:

If the ensureSecurity parameter is set to true, the remoting system determines whether the channel implements ISecurableChannel, and if so, enables encryption and digital signatures. An exception is thrown if the channel does not implement ISecurableChannel.

But as you can see, setting the flag does the trick.
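If you need finer control than the ensureSecurity flag gives you, the channel's behavior can also be tuned through its properties dictionary. A sketch, using property names documented for the .NET 2.0 TcpChannel (this is an alternative to the setup used in this post, not what the demo code does):

```csharp
using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// In the server process: secure the channel and expose the caller's
// token so the service can impersonate the client.
IDictionary serverProps = new Hashtable();
serverProps["port"] = 2468;
serverProps["secure"] = true;
serverProps["impersonate"] = true;
TcpChannel serverChannel = new TcpChannel(serverProps, null, null);
ChannelServices.RegisterChannel(serverChannel, true);

// In the client process: allow the server to impersonate (but not
// delegate) our identity.
IDictionary clientProps = new Hashtable();
clientProps["secure"] = true;
clientProps["tokenImpersonationLevel"] = "Impersonation";
TcpChannel clientChannel = new TcpChannel(clientProps, null, null);
ChannelServices.RegisterChannel(clientChannel, true);
```

With "secure" set explicitly on both sides you get the same SSPI behavior as the ensureSecurity flag, plus a knob to dial the impersonation level up or down.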

Taking it one step further

The application I'm working on requires a little more. As a matter of fact, it's a server with two faces. One face is the management face: its goal is to let users send commands to the server, which are then dispatched to multiple other machines (through the second face, aka the dispatching face). In order to be eligible to send such a command, the management face requires end-user authentication and authorizes the user. If the user is permitted to send the command, the dispatching face kicks in and dispatches the command to the target machines. This time, the end user's identity should not be forwarded; the dispatched command should run as the service user. Looks a little complex? A little example will help:

Assume that the server (SERVER) is running as BACH\svcuser and is waiting for commands to come in through the management face. Now the following happens:

  • Management client PCMGMT runs as BACH\Bart and sends SayHello(new string[] { "PC01", "PC02" }) to SERVER.
  • The server has received the SayHello message on the management face. The thread doing the work runs as BACH\Bart (not BACH\svcuser) thanks to SSPI ("impersonation").
    • BACH\Bart is authorized and is confirmed to be eligible to send the SayHello command.
    • The dispatching face of the server sends a Hello message to PC01 acting as BACH\svcuser (not BACH\Bart).
      • PC01 receives the Hello message on a thread running as BACH\svcuser (similar to PCMGMT-to-SERVER as BACH\Bart but now SERVER-to-PC01 as BACH\svcuser).
    • The dispatching face of the server sends a Hello message to PC02 acting as BACH\svcuser (not BACH\Bart).
      • PC02 receives the Hello message on a thread running as BACH\svcuser (similar to PCMGMT-to-SERVER as BACH\Bart but now SERVER-to-PC02 as BACH\svcuser).

To do this, we can use the class WindowsImpersonationContext as shown below:

using (WindowsImpersonationContext ctx = svcUser.Impersonate())
{
   // Do work acting as the service user
}

In this piece of code the svcUser object is of type WindowsIdentity and refers to the original identity the service was started as. Let's show a more complete example.

Service type

using System;

namespace ServiceType
{
   public interface IDemoService
   {
      string GetIdentity();
      void SomeOperation();
   }
}

Server implementation

using System;
using ServiceType;
using System.Security.Principal;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting;
using System.Threading;

namespace Server
{
   class Program
   {
      public static WindowsIdentity Identity;

      static void Main(string[] args)
      {
         TcpChannel channel = new TcpChannel(2468);
         ChannelServices.RegisterChannel(channel, true);

         RemotingConfiguration.RegisterWellKnownServiceType(typeof(DemoService), "demoservice", WellKnownObjectMode.SingleCall);

         Identity = WindowsIdentity.GetCurrent();

         Console.WriteLine("Service running as {0}...", Identity.Name);
         Console.ReadLine();
      }
   }

   public class DemoService : MarshalByRefObject, IDemoService
   {
      public string GetIdentity()
      {
         WindowsIdentity identity = Thread.CurrentPrincipal.Identity as WindowsIdentity;
         if (identity != null && identity.IsAuthenticated)
            return identity.Name;
         else
            return null;
      }

      public void SomeOperation()
      {
         WindowsIdentity identity = (WindowsIdentity)Thread.CurrentPrincipal.Identity;

         using (WindowsImpersonationContext ctx = identity.Impersonate())
         {
            // Here we are impersonating the management client user
            Console.WriteLine(WindowsIdentity.GetCurrent().Name);
         }

         using (WindowsImpersonationContext ctx = Program.Identity.Impersonate())
         {
            // Do work acting as the service user
            Console.WriteLine(WindowsIdentity.GetCurrent().Name);
         }

         using (WindowsImpersonationContext ctx = identity.Impersonate())
         {
            // Here we are impersonating the management client user again
            Console.WriteLine(WindowsIdentity.GetCurrent().Name);
         }
      }
   }
}

Client implementation

The client gets one extra call, to SomeOperation:

using System;
using ServiceType;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Remoting.Channels;
using System.Security.Principal;

namespace Client
{
   class Program
   {
      static void Main(string[] args)
      {
         TcpChannel channel = new TcpChannel();
         ChannelServices.RegisterChannel(channel, true);

         IDemoService svc = (IDemoService) Activator.GetObject(typeof(IDemoService), "tcp://localhost:2468/demoservice");

         Console.WriteLine("Client running as {0}...", WindowsIdentity.GetCurrent().Name);
         Console.WriteLine("Thread identity on the server: {0}", svc.GetIdentity());
         svc.SomeOperation();
         Console.ReadLine();
      }
   }
}

The result on the server when running the server.exe as VISTA-9400\Bart and the client.exe as VISTA-9400\test:

Happy coding!


Introduction

Last week I had the requirement for a small-footprint database on computers to store information in. An XML file was one of the possibilities I was considering, but the need for rich querying capabilities and some relational characteristics made me look a little further. Some time ago I blogged about SQL Server 2005 Everywhere Edition, and last week I finally started to use it. A report...

What's in a name?

SQL Server 2005 Everywhere Edition is what it says it is: a version of SQL Server that can run virtually everywhere. Basically, the technology is the evolution of SQL Server Mobile Edition, which you could run on mobile devices, also known as SQL CE. SQL Server 2005 Everywhere Edition goes a little further and can be loaded in-process inside an application as a kind of "invisible relational store" with a very low memory footprint of about 5MB.

A list of interesting features includes:

  • High performance query processor and storage engine
  • Transactional integrity (ACID)
  • 128-bit file-level encryption
  • Low on-disk footprint of 2MB and easy deployment using MSI, ClickOnce or xcopy
  • Remote Data Access (RDA) to synchronize with SQL Server 2005
  • Maximum database size of 4 GB
  • Management of databases via SQL Server 2005 Management Studio
  • Runs on all recent Windows desktop platforms and Windows Mobile devices
  • Easy to use managed API similar to System.Data.SqlClient

You can download the CTP build as well as the Books Online on http://www.microsoft.com/sql/ctp_sqlserver2005everywhereedition.mspx.

Getting started

Time to do some coding in Visual Studio 2005. The first thing to do when working with SQL Server 2005 Everywhere Edition is to add a reference to the managed assembly that comes with the product. It's called System.Data.SqlServerCe.dll and lives in the installation folder (typically %programfiles%\Microsoft SQL Server Everywhere\v3.1). The current CTP build number is 3.0.5235.0. You are free to redistribute the .dll files of Everywhere Edition with your application (see the EULA file).

Step 1 - Creating a database file

Before we can work with a database, we need to create one of course. This is done programmatically as follows:

using System.Data.SqlServerCe;

//...

string connStr = "Data Source={0};Password={1};";
connStr = String.Format(connStr, file, pwd);

using (SqlCeEngine engine = new SqlCeEngine(connStr))
{
   engine.CreateDatabase();
}

This assumes you have defined two variables: file (e.g. c:\temp\bla.sdf) and pwd (e.g. <whatever you choose>).
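Note that CreateDatabase fails if the database file already exists, so in a real application you'll typically guard the call. A minimal sketch (the path and password are example values):

```csharp
using System;
using System.Data.SqlServerCe;
using System.IO;

string file = @"c:\temp\bla.sdf";   // example path
string pwd = "secret";              // in reality: the generated random password
string connStr = String.Format("Data Source={0};Password={1};", file, pwd);

// Only create the database on the first run.
if (!File.Exists(file))
{
   using (SqlCeEngine engine = new SqlCeEngine(connStr))
   {
      engine.CreateDatabase();
   }
}
```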

Step 2 - Defining the database schema

An easy way to define the database schema (tables etc) for a new (simple?) database is to use the SqlCeCommand class as shown below.

using (SqlCeConnection conn = new SqlCeConnection(connStr))
{
   string sql = "CREATE TABLE Computers (Computer nvarchar (255) NOT NULL PRIMARY KEY, MacAddress nchar (17), DateAdded datetime NOT NULL DEFAULT (getdate()), Approved bit NOT NULL DEFAULT 0)"
;

   SqlCeCommand cmd = new SqlCeCommand
(sql, conn);

   conn.Open();
   cmd.ExecuteNonQuery();
}

Step 3 - Using the database

The database has been defined; time to use it. Nothing special here: usage of a SQL Server 2005 Everywhere Edition database in .NET is very similar to using its big brother, System.Data.SqlClient. Commonly used types include SqlCeConnection, SqlCeCommand, SqlCeTransaction, SqlCeDataAdapter, SqlCeDataReader, SqlCeParameter and SqlCeResultSet. All should look more or less familiar if you've been working with SqlClient in the past.
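To make that concrete, here's a sketch that inserts a row into the Computers table created earlier and reads it back; the sample values (PC01 and the MAC address) are made up for the example:

```csharp
using System;
using System.Data;
using System.Data.SqlServerCe;

using (SqlCeConnection conn = new SqlCeConnection(connStr))
{
   conn.Open();

   // Parameterized insert, just like with SqlClient.
   SqlCeCommand insert = new SqlCeCommand("INSERT INTO Computers (Computer, MacAddress) VALUES (@name, @mac)", conn);
   insert.Parameters.Add("@name", SqlDbType.NVarChar).Value = "PC01";
   insert.Parameters.Add("@mac", SqlDbType.NChar).Value = "00-11-22-33-44-55";
   insert.ExecuteNonQuery();

   // Read it back with a data reader.
   SqlCeCommand query = new SqlCeCommand("SELECT Computer, DateAdded FROM Computers WHERE Approved = 0", conn);
   using (SqlCeDataReader reader = query.ExecuteReader())
   {
      while (reader.Read())
         Console.WriteLine("{0} (added {1})", reader.GetString(0), reader.GetDateTime(1));
   }
}
```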

How to store the password?

One additional difficulty might be the storage of the password. The best solution would be not to store it at all and prompt the user for it. If you can, choose this solution. However, in the context of a Windows Service that's not possible (and even if it were, requiring a password at every service start would be a showstopper). An empty password or a hardcoded password are no options either; data theft would be made very easy if that were the case. So, a random password seems to be the best option. Upon creation of the database, a random password is generated as follows:

private string GeneratePassword(int n)
{
   StringBuilder sb = new StringBuilder();
   RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
   byte[] b = new byte[4];
   for (int j = 0; j < n; j++)
   {
      rng.GetBytes(b);
      // ToUInt32 avoids the OverflowException Math.Abs would throw for Int32.MinValue
      uint i = BitConverter.ToUInt32(b, 0);
      sb.Append(pwdChars[i % (uint)pwdChars.Length]);
   }
   return sb.ToString();
}

In this code, pwdChars refers to an array containing valid password characters. You should certainly include all alphanumeric characters, but be a little careful with some others, such as ';', so the password remains usable inside the connection string. Notice the usage of RNGCryptoServiceProvider (instead of System.Random) and StringBuilder (instead of string concatenation, to avoid having multiple partial copies of the password in memory). SecureString would be an even better option, but SQL Server 2005 Everywhere Edition can't deal with it.
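For completeness, here's one possible definition of pwdChars together with a call site; the exact character set is up to you, as long as connection-string metacharacters stay out:

```csharp
// Example character set: alphanumerics plus a few symbols that are
// safe to use inside a connection string (no ';', '=', '"' or '{').
private static readonly char[] pwdChars =
   "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$%&*+-".ToCharArray();

// Usage upon database creation:
string pwd = GeneratePassword(32);
```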

One last question remains: where to store the password? The answer can be the registry, but additional encryption using DPAPI is highly recommended. This is done using the ProtectedData class. To deal with strings, we work with base64 encoding as shown below:

private string applicationEntropy = "Supply some entropy here. A famous quote perhaps?";

private string Encrypt(string plain)
{
   byte[] bPlain = Encoding.UTF8.GetBytes(plain);
   byte[] bEntropy = Encoding.UTF8.GetBytes(applicationEntropy);

   byte[] bCipher = ProtectedData.Protect(bPlain, bEntropy, DataProtectionScope.LocalMachine);

   return Convert.ToBase64String(bCipher);
}

private string Decrypt(string cipher)
{
   byte[] bCipher = Convert.FromBase64String(cipher);
   byte[] bEntropy = Encoding.UTF8.GetBytes(applicationEntropy);

   byte[] bPlain = ProtectedData.Unprotect(bCipher, bEntropy, DataProtectionScope.LocalMachine);

   return Encoding.UTF8.GetString(bPlain);
}

I guess that storing something in the registry shouldn't be much of a problem (but: use a low-privileged service account and store the encrypted password in the HKCU hive of the registry).
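A possible shape for that registry storage, reusing the Encrypt/Decrypt helpers above (the key path and value name are made up for the example):

```csharp
using Microsoft.Win32;

private const string KeyPath = @"Software\MyService";   // hypothetical key

private void StorePassword(string password)
{
   using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
   {
      key.SetValue("DbPassword", Encrypt(password));
   }
}

private string LoadPassword()
{
   using (RegistryKey key = Registry.CurrentUser.OpenSubKey(KeyPath))
   {
      return key == null ? null : Decrypt((string)key.GetValue("DbPassword"));
   }
}
```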

Manage the database

Managing the database can be done using the SQL Server 2005 Management Studio by connecting to a SQL Mobile type of database as shown below:

General management actions such as setting the password, shrinking and compacting the database and performing a repair operation can be done using the database's properties dialog:

Furthermore you can execute queries against the database, add or modify tables and other objects, etc just as you can do with regular SQL Server 2005 databases.

Happy coding!
