September 2004 - Posts

As of a few days ago, Monad preview 3.0 is available for download for beta members on http://beta.microsoft.com. I haven't had the time to check it out yet (see my "laptop reinstall adventures"), but it seems to have received a series of extensions and improvements. I'll try to post about it as soon as I find the time.

Note: this build isn't going to make it onto my production installation :-). I'm not running any betas on my primary installation anymore (maybe I will again with beta 2 of Visual Studio 2005?); all this stuff runs in Virtual PCs and Virtual Servers.


For a few weeks now I've been keeping an eye on Dell's accessories site for a 7200 RPM ATA-6 disk for my Latitude machine. Dell seems to have one in the US (http://accessories.us.dell.com/sna/ProductDetail.aspx?sku=341-0354&c=us&l=en&cs=04&category_id=5704&first=true), but locally it seems to be unavailable :-(. The major problem is that generic laptop hard disks don't fit in a Dell laptop (the connector type is different). I hope it becomes available in Europe very soon as well...

For a year now I've been the proud owner of a Dell Latitude D800 laptop, and I've just completed my first complete reinstall of the software. Quite a lot of work, in fact (it took me 3 days to get it up and running again with all the software I need to do my work): 40 GB of disk space, 15 GB used for all my applications (no documents yet), running Windows Server 2003 Standard Edition. I won't give a complete list of what's installed on the system, but I do want to give one recommendation: check the Dell downloads regularly for BIOS updates. I upgraded the machine 10 minutes ago to BIOS revision A11, and now the system seems to run faster than ever (in fact, I did a bunch of driver updates as well). System Properties used to display 599 MHz; now it correctly displays the full 1.60 GHz (it's a Centrino Pentium M processor). This has to do with the SpeedStep technology, which wasn't working on my former Windows Server 2003 installation due to outdated drivers (and BIOS).

For people wondering why I don't just run Windows XP on my machine (with Windows Server 2003 in a Virtual PC or so): I have quite a few reasons (and yes, there are disadvantages, such as the lack of support for Bluetooth and for WPA on wireless LAN). First of all, I'm quite busy with Active Directory development. Secondly, I'm an IIS 6 fan. Third, even on Windows Server 2003 I can have Windows XP themes :-). To be honest, since I got MSDN (4 years ago now) I've never run any client OS on my primary machines (Windows 2000 Server, Windows Server 2003), except for Longhorn on a second partition. Windows XP Professional runs in a VPC over here.

Another nice point is that I'm finally no longer running as an administrator on my machine (on "jefken" I was a member of the administrators of my AD domain bartdesmet.local). During installation I used a dedicated "install" account (a power user) for the installation/configuration tasks. Now I run as a normal user with a few elevated privileges on the system (e.g. to shut down/reboot the system, which is not possible for normal users on a W2K3 server). Thanks to Don (see comments) for pointing me to this (at the time of writing I was still a power user in order to finish my configuration/installation tasks). Another tip: if you need more privileges/rights on the system, use the runas command to run a program as a power user (or, if it's really needed - which should rarely be the case - as an administrator). I'm finally doing what I've been telling people to do for quite some time (for more info, check out "running with least privileges" in Writing Secure Code, 2nd Edition).
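For example (the machine and account names are from my own setup, obviously):

runas /user:sarastro\install cmd.exe

This prompts for the install account's password and gives you a command prompt running with that account's privileges, while your interactive session stays a normal user.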

Remark: "Sarastro" is the new computername of my laptop (formely called "jefken") and you might see it appear in some screenshots in articles in the future. I like strange names for my machines (in order to avoid naming collisions, which I had with "dotnetdev1" in the past). Sarastro is one of the protagonists of one of my favorite plays of Mozart. You can find out the rest yourself if you'd like to more about it.

Still one difficult step to take: taking an image of my hard disk (which I've been telling myself I'd do for 5 years now :-)). I hope I get it done (I don't have a floppy drive to boot Ghost from), but I have some ideas that might work...


This summer I got involved in yet another beta program: "Microsoft Shell (MSH), codenamed Monad". In this series of blog posts I'll share my first impressions of this new technology and try to give an introduction to get you up to speed on this promising stuff. You can find a general overview on the .NET Show (Episode 43) on MSDN (http://msdn.microsoft.com/theshow/Episode043/default.asp) with the architect of the technology, Jeffrey Snover, and one of the PMs, Jim Truher.

So, what is it?

In my own words: Monad is a new shell environment for the Windows platform that helps simplify the automation and management of the platform. From a technology perspective, it's built entirely on the .NET Framework and is fully extensible by developers. Monad is planned to ship with the Longhorn operating system, according to the .NET Show.

What's in a name?

Monad looks like a weird name, even for a codename, doesn't it? However, the name is well-chosen, as Jeffrey explains. It refers to Leibniz's theory about the world, namely a world that is built on composition. The Monad shell follows the same idea: the components are so-called "commandlets" and the world is the (management/automation) environment in which the shell is running.

How to get it and how to run it?

Access to this technology is on a limited basis only, through the beta program. More information about this program and how to participate can be found in the .NET Show episode I linked above. The MSH shell runs on the Longhorn preview build 4074 (WinHEC), which can be obtained through MSDN Subscriptions. I haven't tried it on Windows XP/2003 yet, and I don't know whether this is possible with today's preview or whether it will be supported later on. On my machine, I'm running Windows Server 2003 as the host operating system with Virtual PC 2004 (not supported on the Windows 2003 platform, but it works nevertheless) and Longhorn 4074 inside (with 512 MB of RAM and the WinFS.exe process killed to reduce memory usage).

Basic terminology and architecture

I mentioned the name "commandlet" a few lines above as the atomic unit of the Monad system. A commandlet is the smallest unit inside the system that is responsible for doing a certain job. Such a commandlet is not an .exe file, nor is it a script or a batch file. In fact, it's just a class written on top of the .NET Framework (so you can choose whatever language you like). Physically, it is an assembly somewhere on the system that is recognized as a commandlet. The class that implements the commandlet is a subclass of the "System.Management.Automation.Cmdlet" base class and uses an attribute to "announce" itself as a commandlet (with extra information if needed). So, a very simple commandlet code structure looks as follows:

using System.Management.Automation;

namespace Demo
{
   [Cmdlet("do", "something")]
  
public class MyFirstCommandLet : Cmdlet { ... }
}

The Cmdlet attribute specifies a verb and a noun. This might look linguistic (and indeed it is), but it's the general approach for creating a commandlet (it's a kind of philosophy: every commandlet does something - e.g. get - in a certain context - e.g. the files on the file system).

A first example:

Of course this commandlet won't do anything yet, since it's missing an implementation. What's better than the classic "Hello galaxy" sample ("Hello world" is so pre-.NET :-)) to illustrate how to add one?

using System.Management.Automation;

namespace Demo
{
   [Cmdlet("do", "something")]
  
public class MyFirstCommandLet : Cmdlet
   {
      public override void ProcessRecord()
      {
         WriteObject("Hello galaxy!");
      }
  
}
}

The trick is to override the ProcessRecord method of the Cmdlet class. In fact, there are 3 overridable methods: StartProcessing, ProcessRecord and EndProcessing. When this file is built using csc.exe (version 2.0.31113 required) and registered with Monad using the "registercommand.exe" tool (in the same folder as msh.exe, the shell executable), the assembly is registered in the MSH shell. It can be pretty helpful to provide a response file for csc.exe to add references to all of the Monad assemblies (a .rsp file - see an earlier post on my blog).

When Monad is started, you can invoke the "do-something" commandlet by typing "do-something" and pressing Enter (big surprise for a shell environment, isn't it?). The result looks even more promising: the shell displays "Hello galaxy!". Behind the scenes, Monad uses reflection to invoke the commandlet and get information about it (this will become clearer in later samples that take parameters). Two bells should immediately start ringing in a developer's mind: "reflection is slow" as well as "reflection is very powerful and flexible". The basic advantage is that reflection is a runtime-level technology that makes the system very extensible. The slowness isn't really an issue since we're talking about a shell environment, and when you take a closer look at the architecture, the overhead is reduced in quite a few places in the Monad implementation (still investigating this myself).
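As a teaser for those parameter-taking samples: below is a rough sketch of how I picture a commandlet with a parameter, based on the reflection story above. Be warned that the parameter attribute and the exact binding behavior are my assumptions, not something I've verified against the preview bits:

using System.Management.Automation;

namespace Demo
{
   [Cmdlet("greet", "user")]
   public class GreetUserCommandLet : Cmdlet
   {
      // Assumption: the shell discovers this public field via reflection
      // and binds the command-line argument to it.
      [Parameter]
      public string Name;

      public override void ProcessRecord()
      {
         WriteObject("Hello, " + Name + "!");
      }
   }
}

If that holds, invoking "greet-user -Name Bart" would print "Hello, Bart!".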

Why is it better than other shells?

Our sample simply isn't better than other shells (yet!). It's in fact what classic shells do today: you type in a command and it produces string output on the screen. However, the key is that you're now able to have a strongly-typed shell environment. E.g. when you ask for a list of processes, you end up with a collection containing all the processes as System.Diagnostics.Process instances. Therefore, it's possible to call methods on these instances, loop through the collection, query the collection based on properties of its elements, etc. Let me give an example:

  • get-process returns all of the running processes (you can write a similar commandlet to get all of the running services controlled by the SCM) in the format of an array with System.Diagnostics.Process elements; however, calling get-process will just result in string output written to the console window
  • you can also use variables to put results in, e.g. $a = get-process
  • then you can use write-console $a[0] to display the first process in the array (you can also create loops)
  • or you can do this: $b = $a[0] which will store the first process in the $b variable (strongly typed as a Process instance)
  • thus, to make it really cool, you can do this: $b.Kill() to kill that particular process (Kill is a method on the Process class of the .NET FCL)

Putting it together, the sample looks like this:

$a = get-process
$b = $a[0]
$b.Kill()

Let's go even further now and start using pipes:

  • get-process returns an array of Process objects
  • you can pipe this array to another commandlet to make a selection, i.e. "where"; the result is: get-process | where "processname -eq wordpad", where the "processname" is used to compare that property of the objects with the given value (using the -eq operator)
  • or you can introduce code-blocks like this: get-process | &{ foreach ($process in $input) { $process.Kill() } } to kill all processes (don't try this at home :-))

An example is thus:

$a = get-process | where "processname -eq calc" | &{ foreach ($p in $input) { $p.Kill() } }

This kills all the calc.exe instances on the system. In an analogous way, if-structures can be defined, arithmetic can be done, etc. The possibilities shown here are just the first row of water molecules on the tip of the iceberg :-)

Where are we going?

Some concluding remarks on this first post about Monad:

  • We're definitely moving towards a more "managed world" (.NET Framework based) and the idea that shells are taking this step right now is a good sign.
  • The development of manageable applications will become easier: all of the plumbing (parsing the command-line parameters, talking to the shell) is hidden from the developer, who just has to implement interfaces and extend base classes. Therefore I think the number of classic "console applications" will decrease, since I primarily use these today as management/configuration tools for my apps.
  • Manageability should be incorporated in custom development in order to make the application maintainable at runtime. Today, several parts make up the complete management pie: WMI, performance counters, event logs, etc. Tomorrow, this will be extended with automation support. The disadvantage (personal opinion) is that we'll end up with yet another task when developing high-quality applications, but I think we can expect management technologies to be simplified/merged in the future (and Monad is already a great step forward in simplifying the concept).
  • Working with the Monad shell (from the end-user's perspective) is still boring, but the team has promised to add a lot of the functionality that is available in classic shell environments today in order to increase productivity. However, it will remain console-based (I guess), since it's a shell.
  • Monad has the potential to kick out VBScript/WSH for management tasks in the longer term, since those environments suffer from the classic scripting weaknesses (no typing, weak error handling, not object-oriented). I like this idea (or dream) since I'm not a huge scripting fan (you should know that my primary focus is still ASP.NET, which kicked VBScript out of web development).
  • Monad is unique and is not just a "*n*x" shell killer application (although to some extent it wants to make the shell part of Windows stronger, of course). It's unique because the messages between commandlets are objects, not strings, and because the whole system is based on the .NET Framework and is therefore completely managed. The philosophy is just completely different from what exists today.
  • The target audience for Monad is developers and IT people. For developers, it's a piece of cake (just learn the API). For IT people it's somewhat more difficult to start using it (that is, to start using msh.exe), but it's just a matter of getting to know it (which should not take too long, since the basic concepts - such as the use of pipes - are inherited from today's shells).
  • One problem I expect on the user side: administrators (who typically have some scripting knowledge to automate things) will feel that the shell is more powerful on the one hand, but on the other hand will feel reduced in their flexibility, since they won't be able to create quick-and-dirty-but-working scripts anymore - there's now the additional step of writing the code in some .NET language and compiling it. Therefore I think scripting and the Monad technology will live side by side at first (possibly with the idea of scripting the Monad shell using the older technology).
  • The main message should be that automation is now a unified concept encapsulated in a framework, and that if you follow the rules of that framework, communication between commandlets is strongly typed and rock-solid. It's thus a matter of making shell technology more extensible and reliable in a unified fashion (and at a later stage, Microsoft can add several layers on top of this low-level technology).
  • This will certainly play a part in the DSI story.

In short: I'm super-excited about this, and if there's one word I like it's "unification + extensibility" (in fact two words :-)). Cheers!

What's next in "Adventures in Monad"?

In future posts I'll cover how to create parameterized commandlets, how to create commandlet providers (e.g. to navigate through a tree-based structure like the registry inside the shell, using the "cd" equivalents), and much more. Stay tuned!


Here it is. No further comments required, just check it out :-). It's not as easy as it looks at first sight...

http://www.asp.net/Forums/ShowPost.aspx?tabindex=1&PostID=691956


Time for some beta news updates.

MOM 2005 was released recently (together with a new edition called "Workgroup Edition" that can be used for small environments - see http://www.microsoft.com/mom/evaluation/editions/default.mspx for a comparison with the full-sized edition). I don't have much expertise with MOM myself (I've only been running/testing it inside a Virtual Server environment last summer), but I've heard from several people that it's a promising release of the product. By coincidence I got enrolled in the MOM 2005 beta program somewhere in mid-2003, after attending the pre-conference day about "Manageability" at TechEd Europe 2003 (quite exceptional for somebody who concentrates primarily on development, but I like to stay up to date on this front). Currently, Microsoft is working on integrating its management technology (SMS and MOM) into one suite called "System Center", whose roadmap was already announced at that TechEd 2003 pre-conference. Definitely something to follow up on if you're interested in system manageability and patch management (SUS, WUS et al.).

There's also the amazing new release of ISA Server 2004, which I'm already running on a mid-sized network (see http://www.microsoft.com/ISAServer/). In fact, a couple of years ago I started to teach myself ISA 2000 using Thomas Shinder's "ISA Server 2000 Bible" but never reached the chapters with the real content :-(. With ISA 2004 I was able to set up and configure the system as a proxy/firewall/server publisher in less than one working day (a network with 4 servers and about 50 clients), and Thomas' "ISA Server 2004 Bible" is already ordered :-) - hopefully I get beyond the first couple of chapters this time. A very good community site on ISA Server 2004 management is www.isaserver.org. The only problem I still have to solve is publishing my Exchange 2003 OWA (a superb product as well) on the internet, which did not work very well at the first attempt (a certificates mess and problems publishing the web server directly). Luckily I found a workaround by defining another protocol (custom HTTP) for TCP port 80 and adding the publishing rule manually to the policy, without having to use the built-in support for publishing a web server (but I definitely want to sort this out so I can start using the rich built-in functionality).

The Virtual Server 2005 product is likely to ship in the upcoming days/weeks, since several launch events are announced on the product's homepage. It's one of my favorite (non-directly-developer-related) product releases of 2004 (besides XP SP2, which went through a long and hard beta phase, but the result was well worth the time spent on this massive update). I've been running it on my primary machine since the early betas as the "big brother of Virtual PC". I've posted about my experiences with it on my blog and have only had a few difficulties, which were solved pretty fast. Meanwhile, I've sent a bunch of product feedback and suggestions to the VS team, and apparently they liked it (special thanks to guntherb for bringing me in contact with the guys).

Since a couple of hours I'm on the beta program of VSMT (Virtual Server Migration Toolkit), which is a (series of) tool(s) to help IT admins migrate a physical server machine to a virtual machine that can run inside Virtual Server. Pretty cool stuff. If you're interested in the beta (and in participating in it), check out http://www.microsoft.com/windowsserversystem/virtualserver/evaluation/vsmtbeta.mspx. I've actually been involved in Virtual Server testing since the early bits and was active at the ATE (Ask The Experts) booth for Virtual Server at TechEd 2004 Europe. I'll soon publish some information for developers on how to approach Virtual Server programmatically in C# (which is pretty simple to do thanks to the comprehensive API). I'm super-curious to take a closer look at VSMT in the upcoming days/weeks, since so far I've only seen the big overview in an ADS/VSMT session at TechEd 2004 (which I attended in order to be prepared for the ATE questions on VSMT - and yes, the questions came up already a couple of minutes after the presentation :-)).


Some time ago I received this question via e-mail from somebody who started the self-learning process of developing in .NET (and came across the System.Reflection namespace). Via my blog I'd like to share the question and answer with you.

Question (paraphrased and translated): I started to learn the basics of OO development and the reasons to keep attributes private: to hide sensitive data, to establish encapsulation, etc. This is why .NET has properties, right? However, when I took a look at the System.Reflection namespace, I saw that it's possible to retrieve the private attributes of an object that way. Why is this? Is this a bug?

My summary: Why does reflection break the privacy of private members (not only attributes)?

The answer:

First of all, you have to define the word "secret" in more detail. What they mean by "protecting secret data" is hiding fields that should not be changed directly by the users (i.e. other classes) of that particular class. A class should only expose its basic functionality to the outside world (compare it with a car: you don't need to know the internal kitchen of a car in order to drive it). Properties are indeed a way to do this, and often the get and set parts are just one line ("return _privatemember;" and "_privatemember = value;" respectively). However, this is not the only reason properties were introduced: properties simplify the syntax (compared to the getX()/setX() approach in Java) and you can of course change the get/set at a later point in time to incorporate data validation etc. (important from a security perspective: "all input is evil").

To come back to the discussion of "secret": the word "secret" over here thus means "hidden data which is not directly accessible", "data that keeps the internal state of the object", and has little to do with storing secrets (such as passwords, which should never stay in memory for a long time). That second meaning of secret is the one linked to encryption and data protection, and it has little to do with OO.
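A minimal sketch in C# to make that concrete (the class and member names are made up for the illustration): the setter starts out as a one-liner, but validation can be added later without breaking any callers:

using System;

public class Account
{
   private string _owner;

   public string Owner
   {
      get { return _owner; }
      set
      {
         // Validation added later doesn't change the public contract.
         if (value == null || value.Length == 0)
            throw new ArgumentException("Owner cannot be empty.");
         _owner = value;
      }
   }
}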

Now, why is private data not completely private? There are several reasons; I'll mention only the few that are most applicable in my opinion:

  • The .NET Framework uses this itself in the base class for structs (System.ValueType): the default implementation of "Equals" compares the fields of two structs pairwise via reflection, and the same holds for the "GetHashCode" implementation there. You can see this yourself if you don't believe me; I'll just say one word: "mscorlib.dll". .NET folks know what I mean :-)
  • Tools can use this information at runtime as a debugging aid by introspecting the internal state of an object (e.g. when you write a "test"-class for unit testing that should test another (already compiled) class without affecting the original class).

It's true that people can also misuse this to mess around in the internal kitchen of an instance, but that's just breaking all the rules of normal class usage. As I said before, private attributes should never contain sensitive data; with reflection, nothing is safe from introspection (all defined structures can be made visible and manipulated at runtime). The need to hold secret information in the attributes of a class instance should raise the warning "possibly something is wrong with my class design and/or architecture". E.g. password-checking code should never retrieve a password and keep it in memory in order to compare it. Instead, hashed values should be compared, or the call should be escalated to another level (e.g. by calling the Win32 API to do impersonation, password checks, etc.; or by using System.DirectoryServices to authenticate against AD). These methods are proven to be secure, so don't reinvent the wheel in the field of security (or any other field: if a checked and approved working solution for the problem already exists, try to reuse it).
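To show how little effort such introspection takes (the types and names are made up for the demo), this is all you need to read a private field:

using System;
using System.Reflection;

public class Safe
{
   private string _combination = "12-34-56"; // "private", but not secret!
}

public class Demo
{
   public static void Main()
   {
      Safe s = new Safe();
      // NonPublic | Instance makes private instance members visible.
      FieldInfo f = typeof(Safe).GetField("_combination",
         BindingFlags.NonPublic | BindingFlags.Instance);
      Console.WriteLine(f.GetValue(s)); // prints 12-34-56
   }
}

(Note that this requires the code to run with sufficient trust; in a default, fully trusted environment it just works.)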


This seems to be a problem for quite some people out there in the ASP.NET development galaxy. I posted a rather large reply on the forums that covers this issue in deep detail, and hopefully it can help others sort this out: http://www.asp.net/Forums/ShowPost.aspx?tabindex=1&PostID=692588#692612. In short: ASP.NET does not know about the .doc extension (it is not mapped to the ASP.NET ISAPI inside IIS), so you can't protect it directly in there without altering the configuration of the web server (IIS metabase related) or without using tricks (which I explain in the post).
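To give away the gist of it: after mapping the .doc extension to aspnet_isapi.dll in the IIS console, a web.config entry along these lines (a sketch of one of the options, not the complete story from the post) lets ASP.NET refuse direct requests for the documents:

<configuration>
  <system.web>
    <httpHandlers>
      <!-- Only works once .doc is mapped to the ASP.NET ISAPI in IIS -->
      <add verb="*" path="*.doc" type="System.Web.HttpForbiddenHandler" />
    </httpHandlers>
  </system.web>
</configuration>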

The title looks promising, but let me raise a warning first: what I'm going to tell you only covers a little piece of the large MSBUILD pie. So, what's up? Let me kick off by telling the novices a bit about MSBUILD. MSBUILD is a technology that is going to ship with future Visual Studio releases to describe the build process (and later on possibly more than just the build process) in a declarative way (i.e. by using XML). The compilation/build process is then no longer hosted inside Visual Studio (which today calls the different tools such as csc.exe and al.exe to do the work - or rather, calls the functionality that these tools expose through different namespaces and assemblies). The current system has a couple of disadvantages: when you have a project (or solution) in VS.NET, you can't build it in a smooth way without using the VS.NET tools. Furthermore, the build process today is a closed operation with little (or no) support for customization.

So, don't look at MSBUILD as a new batch processing system that just describes the steps needed to do a build. It's more than that: it's a whole vision of how to build software (automatically or manually) without needing Visual Studio (though VS will of course integrate tightly with the tools). Other tools that create msbuild files can ship in the future. And, very importantly, building apps can be done using the SDK alone, without having to rely on the Visual Studio tools.
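To give you an idea of the declarative style, here's a minimal sketch of an msbuild project file. I'm basing this on the preview bits floating around, so treat the element and task names as subject to change:

<Project DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Compile Include="*.cs" />
  </ItemGroup>
  <Target Name="Build">
    <!-- The Csc task wraps the C# compiler -->
    <Csc Sources="@(Compile)" OutputAssembly="Demo.dll" TargetType="library" />
  </Target>
</Project>

The key point: this describes what to build, and the engine figures out how and when to invoke the tools.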

Today, however, we can only simulate parts of this process with our own solutions. I've seen quite some people create batch files (really, the good old DOS-inherited files with the .bat extension ;-)) to automate builds. I've seen others use VBScript in WSH, and I've seen an approach (which I've tried myself) using System.CodeDom.Compiler (in combination with, for example, Microsoft.CSharp), which is a bit cleaner.

Let me explain this last approach (using CodeDom) by illustrating it with a small piece of code. We're going to build a C# compiler (actually a compiler wrapper) using C# itself (you still have to fill in a few blanks, but it gives you the basic idea):

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

namespace Demo
{
  class CscCompiler
  {
    [STAThread]
    static void Main(string[] args)
    {
       CSharpCodeProvider csc = new CSharpCodeProvider();
       ICodeCompiler compiler = csc.CreateCompiler();
    
       CompilerParameters p = new CompilerParameters();
       p.OutputAssembly = ...; //replace ... with the name for the output assembly
       p.ReferencedAssemblies.Add("system.dll"); //add other references (see the /r flag of csc.exe)

       p.GenerateExecutable = false; //self-explaining I think

       compiler.CompileAssemblyFromFile(p, ...); //... should point to the input file
    }
  }
}

Pretty simple, isn't it? This way it's possible to create a build system today (the same can be done for VB.NET) and talk to the compiler from within code.
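One thing the skeleton above omits: CompileAssemblyFromFile returns a CompilerResults object that you'll want to inspect in a real build tool. A minimal sketch (the file name is just an example):

CompilerResults results = compiler.CompileAssemblyFromFile(p, "demo.cs");
if (results.Errors.HasErrors)
{
   // Report the compilation errors, much like csc.exe does on the console.
   foreach (CompilerError error in results.Errors)
      Console.WriteLine(error.ToString());
}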

There's yet another approach to automating build processes from the command line: using .rsp files (response files) for the C# compiler (not available, afaik, for VB.NET). The syntax looks as follows:

csc.exe @answers.rsp ...

where the file specified after the @ symbol is the response file. Response files can be used to wrap up all of the command-line parameters the csc.exe compiler needs for your build process. The system has a default one, which eliminates the need to use /r for the core .NET Framework libraries. If you've ever wondered why csc.exe-ing a file that uses System.Data (just to pick a namespace) doesn't require the /r:System.Data.dll flag, this is the answer: the csc.rsp file is read from the framework installation folder (%windir%\Microsoft.NET\Framework\<version number>) and contains a series of default /r references for the compiler. How the referenced assemblies are located is a different question (it has to do with the location of the assemblies and the reference resolution mechanism of the compiler itself). So, go ahead and build your own .rsp files. It can save you some work when automating build processes outside Visual Studio today (but keep your eyes open for the big release of MSBUILD later on).
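As an example, a build.rsp for the commandlet sample earlier on this page could look like the following (the Monad assembly name is my assumption, derived from the namespace; lines starting with # are comments):

# build.rsp - all csc.exe options for this build in one place
/target:library
/out:MyFirstCommandLet.dll
/r:System.Management.Automation.dll
MyFirstCommandLet.cs

which you'd then invoke with csc.exe @build.rsp.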


Ever wondered how I get the number of my ASP.NET Forums posts onto my homepage? The answer: screen scraping and regular expressions. Here's the code:

<%@ OutputCache Duration="30" VaryByParam="none" %>
<%@ Control Language="C#" %>
<%@ Import Namespace="System.Text.RegularExpressions" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Net" %>

<script runat="server">
 private string URL = "http://www.asp.net/Forums/User/UserProfile.aspx?tabindex=1&UserName=bdesmet";

 public void Page_Load(object sender, System.EventArgs e)
 {
  try
  {
   WebClient clnt = new WebClient();
   Stream s = clnt.OpenRead(URL);
   StreamReader r = new StreamReader(s);
   string res = r.ReadToEnd();
 
   Regex regex = new Regex("contributed to ((.|\n)*?) out of", RegexOptions.IgnoreCase);
   Match oM = regex.Match(res);
 
   lblPosts.Text = oM.Groups[1].ToString().Replace(",","");
  }
  catch
  {
   lblPosts.Text = "unable to retrieve";
  }
 }
</script>

<asp:Label id="lblPosts" runat="server" />

Pretty simple, isn't it? However, don't forget to cache the whole thing (this is the code of an .ascx, so it gives you "partial page caching" on the homepage). The try...catch block is in the code to handle the possible cases of "scraped site down" or "scraped site redesigned".

