August 2004 - Posts

Some people told me that there is a problem creating the mappings of extensions to the ASP.NET ISAPI in IIS 5.1 on Windows XP (the OK button remains grayed out). This KB article describes this confirmed problem on Windows XP: http://support.microsoft.com/?id=317948.

If you haven't read the article yet, take a look at http://www.microsoft.com/belux/nl/msdn/community/columns/desmet/httphandler.mspx. Enjoy :-)


Do you have the bad habit of putting a lot of files on your desktop as well? If not, it's safe to exit right now :-) For the others (I hope I'm not alone), this "Personal Stupid Tool" can be a welcome relief (or annoyance). I really like to create tools for the most stupid scenarios you can think of, and dumping files on the desktop is one of these... To address this bad habit I created a simple (I can't repeat it enough: in one word, stupid) tool called the "BdsSoft Watch My Desktop" tool. It just sits in the system tray, is loaded automatically at user logon (Start, Programs, Startup) and displays a warning whenever you attempt to store something on the desktop. I've been running it for a couple of days now and it makes me think whenever I'm about to press the magic key combination CTRL-S. Hopefully it helps in the long run to reduce my dirty, silly, etc. habits in the field of organizing my system :-)

You can download it from my website at http://www.bartdesmet.net/download/watchmydesktop.zip. (And yes, it's removable in case you find it really annoying.)

Technical details: written in C# using Visual Studio .NET 2003.
Magic component: FileSystemWatcher (see the sketch below)
Lines of code (self-written, not generated): 11
Development time: less than 5 minutes
Rate of stupidity: 10
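
For the curious, the core of such a tool looks roughly like this. It's a minimal sketch of my own (class name and message text are made up), not the tool's actual source:

using System;
using System.IO;
using System.Windows.Forms;

// Minimal sketch: watch the desktop folder and complain whenever a new file lands there.
class WatchMyDesktopSketch
{
 [STAThread]
 static void Main()
 {
  string desktop = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);

  FileSystemWatcher watcher = new FileSystemWatcher(desktop);
  watcher.Created += new FileSystemEventHandler(OnCreated);
  watcher.EnableRaisingEvents = true;

  // The real tool sits in the system tray; for the sketch a plain message loop will do.
  Application.Run();
 }

 static void OnCreated(object sender, FileSystemEventArgs e)
 {
  MessageBox.Show("You just dropped \"" + e.Name + "\" on your desktop...", "BdsSoft Watch My Desktop");
 }
}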


The machine where my websites are hosted also hosts a bunch of other sites (now finally on Windows Server 2003 :-)). In fact, every user has his/her own user account to log in to the server using FPSE or FTP to modify the site. However, when using the default website with virtual directories that match the users' names, things seem to work correctly (each user ends up in the right folder after logging in), but the users are still able to cd .. out of their directory and then cd into another user's directory (fortunately only with "list folder contents" rights, since the security on the folder was tightened during the FPSE configuration for the user's site). Nevertheless, this is still not secure enough for our server, so I started to read pages 797 to 809 of the IIS 6.0 Resource Kit, which had been lying on my shelf for quite some time now :-). The good news is that IIS 6.0 supports a concept called "user isolation", which allows you to set up the FTP site so that users can't see each other's folders (as a result, each user thinks he/she is alone on the box after logging in, since cd .. can't go higher in the hierarchy than the user's root folder). When creating a new FTP site, you can specify the isolation mode:

The last option was my choice (since we have AD on the server's network and, of course, since AD is my big love at the moment). The basics of this isolation mode are rather simple: the user's root folder is retrieved from Active Directory when the user logs on to the system. Let's take a look at the process:

  1. The user logs in to the server (using ftp.exe, for example).
  2. The system connects to the Active Directory domain on the network (specified during the creation of the FTP site, either through inetmgr.exe or by script) to retrieve attributes of the specified user (and to check the credentials, of course).
  3. The msIIS-FTPRoot and msIIS-FTPDir properties of the user are retrieved and concatenated to form the path to the folder that needs to be opened (this can be a UNC path as well) - see below for an example.
  4. The user now lives in a sandbox and can't leave this folder.

By using this technique, it's possible to give a user his folders on a separate machine through a UNC path to another server, just by changing the user's properties in the directory. Furthermore, when you install a new member server with IIS as a front-end web server, you just need to set up the Active Directory FTP link (by creating a new FTP site) and off you go.

You can set the user's FTP properties in AD using the iisftp.vbs script (from %windir%\System32):

iisftp.vbs /SetADProp bart FTPRoot D:\Web
iisftp.vbs /SetADProp bart FTPDir \bartdesmet.net

This causes the user's FTP directory to resolve to D:\Web\bartdesmet.net. If I decide to put the files on another machine, I just need to change FTPRoot. Another advantage of this approach is that you can point to whatever path you want for a particular user, so you don't need a strict hierarchy of folders (such as ftproot\thedomain\theuser being the FTP folder for the user "thedomain\theuser"), as is the case with the "Isolate users" (without AD) option.

The VBS script is nice, but System.DirectoryServices is much cuter, and writing code for AD manipulation has really become routine for me by now. First of all, I have a query tool:

using System;
using System.DirectoryServices;

class FtpAdQuery
{
 public static void Main(string[] args)
 {
  if (args.Length == 0)
   return;

  // LDAP path to the container holding the web users, e.g. "LDAP://OU=Web Users,DC=yourdomain,DC=local"
  string ldap = args[0];

  DirectoryEntry e = new DirectoryEntry(ldap);

  // Find all user objects below the given container.
  DirectorySearcher src = new DirectorySearcher(e, "(objectClass=user)");
  SearchResultCollection res = src.FindAll();

  // Print each user's name together with the concatenated FTP root and FTP dir.
  foreach (SearchResult r in res)
  {
   DirectoryEntry f = r.GetDirectoryEntry();
   Console.WriteLine(f.Name + "\t" + f.Properties["msIIS-FTPRoot"].Value + f.Properties["msIIS-FTPDir"].Value);
  }
 }
}

Secondly, I have a manipulation tool as well, to set the root for every entry in the directory (in fact, I just need to specify the LDAP path to the location in AD where I'm storing the web users; in my case this is a root-level OU called "Web Users"):

using System;
using System.DirectoryServices;

class FtpAd
{
 public static void Main(string[] args)
 {
  if (args.Length == 0)
   return;

  // First argument: LDAP path to the container holding the web users.
  // Second (optional) argument: the new msIIS-FTPRoot value to apply to every user found.
  string ldap = args[0];
  string root = (args.Length >= 2 ? args[1] : null);

  DirectoryEntry e = new DirectoryEntry(ldap);

  DirectorySearcher src = new DirectorySearcher(e, "(objectClass=user)");
  SearchResultCollection res = src.FindAll();

  foreach (SearchResult r in res)
  {
   DirectoryEntry f = r.GetDirectoryEntry();
   Console.WriteLine(f.Name);

   // Only update the FTP root when a new value was passed on the command line.
   if (root != null)
   {
    f.Properties["msIIS-FTPRoot"].Value = root;
    f.CommitChanges();
   }
  }
 }
}

All this code was editor-free, built with notepad.exe and csc.exe (without compilation errors :-)). That was it for today (in fact, two days seem to have merged again due to overnight work) in the field of System.DirectoryServices. More to follow later. My bed is waiting now...


While working on design specs, architectural documents, threat modeling, etc. for the future development of SchoolServer, I'm investigating quite a lot of technologies and their usability for plug-ins in our solution. Examples include ISA 2004 (yesterday I installed it in a production environment for about 50 users as a firewall, proxy and server publisher), Exchange 2003, SFU 3.5, the SharePoint object model, IIS 6 management, etc. The good news is that quite a lot of these technologies are (rather) easy to manage from within managed code (i.e. C# in our case, though everything will work with VB.NET as well). My latest big "managed love" (please don't misunderstand :-)) is called System.DirectoryServices, which I've been working with regularly for over a year now. The documentation of these classes is a bit dark and mentions that System.DirectoryServices is a management library for Active Directory. However, it's more than that alone: you can use it to talk to ADSI objects in general, including AD/AM and Exchange (which is Active Directory based, of course), but also things such as the IIS configuration.
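
To give an idea of what that means in practice, here's a minimal sketch of my own (not part of the samples mentioned here) that uses the very same DirectoryEntry class to enumerate the web sites in the IIS metabase through the IIS ADSI provider:

using System;
using System.DirectoryServices;

// Sketch: DirectoryEntry is not limited to LDAP; the IIS ADSI provider works too.
class IisAdsiQuery
{
 public static void Main()
 {
  DirectoryEntry w3svc = new DirectoryEntry("IIS://localhost/W3SVC");

  // List the web sites on the local machine together with their friendly names.
  foreach (DirectoryEntry site in w3svc.Children)
  {
   if (site.SchemaClassName == "IIsWebServer")
    Console.WriteLine(site.Name + "\t" + site.Properties["ServerComment"].Value);
  }
 }
}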

I'll now provide a link to a first sample of Exchange programming on my website: http://www.bartdesmet.net/download/exchangedemo.zip. Enjoy it :-)

You can expect to see more of these samples/stuff later on, including IIS management samples.


The download stats on my web server for my "Cassini as a service" sample are hitting big numbers (over 1000 already). If you're interested, you can find more info at http://www.asp.net/Forums/ShowPost.aspx?tabindex=1&PostID=594985. Cassini is a lightweight web server for ASP.NET that is used by the ASP.NET Web Matrix and now lives on in Visual Studio 2005 (aka Whidbey). By running it as a service you can contact the server even when no user is logged on (just as is the case with IIS). Besides that, the sample includes a Windows Forms (system tray) client and an installer.
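
To sketch the idea (this is my own illustration, not the downloadable sample's source; it assumes the Server class from the Cassini sample distribution with its (port, virtual path, physical path) constructor, and the port and paths below are placeholders):

using System.ServiceProcess;

// Sketch: host Cassini inside a Windows service so the site is reachable without an interactive logon.
class CassiniService : ServiceBase
{
 private Cassini.Server server;

 protected override void OnStart(string[] args)
 {
  server = new Cassini.Server(8080, "/", @"C:\Inetpub\MySite");
  server.Start();
 }

 protected override void OnStop()
 {
  server.Stop();
 }

 public static void Main()
 {
  ServiceBase.Run(new CassiniService());
 }
}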

This can be a useful link for people who're testing RIS and PXE boots inside a Virtual PC/Virtual Server environment: http://roudybob.net/articles/756.aspx

During the last weekend, the server was migrated (finally!) to Windows Server 2003. It took a while until we decided to do the migration (which would affect quite a lot of sites running on the system), but now we're done with it and the machine is working better than ever... In fact, the previous installation was pretty stable as well: 250 days of uptime, as you can see in the screenshot taken a few minutes before the machine was turned off for the migration. Are there still folks out there who believe that Microsoft server products are not stable? Time for a revision of that statement then :-)

 


Since we're upgrading and maintaining our servers this weekend (both software and hardware), my sites will be down from Friday morning till Monday evening. See you back then on the blogoscope :-)

The picture tells it all (only for Dutch-speaking people). Guess what the original word in the circle was?


Part 3 - Improving your forms authentication code and database access

I mentioned the use of forms authentication a few times in the previous parts of this FAQ series. In this post I'll cover some best practices to improve your forms authentication even further. Besides that, I'll cover database best practices as well.

Tip 1: Encrypt but make sure it's encrypted the right way

As I mentioned earlier, passwords stored in the database you're using for forms authentication on your site should be encrypted. The best way is to apply irreversible encryption by means of a hash (so there is no danger that an attacker can steal the key used for, say, symmetric encryption and retrieve all passwords). However, hashed values can be well known (for example, a hash of the password "password" will always be the same when MD5 is applied). To improve security further, you can take advantage of a technique called "salting". The idea is pretty simple: instead of hashing the password itself, hash a derived string by creating some random "salt" and concatenating it with the password string. There are several possible solutions:

  • The first one is pretty easy: just use a well-known "secret" in your application that defines the salt. For example, always concatenate "SomeJunkData" before the password and then hash it (and store that value in the database). When doing a login, put the salt on the password string again and compare the hashes. The main problem, however, is where to store this secret (if an attacker can change the salt string that lives on the web server, new passwords will be stored incorrectly and nobody will be able to log in again, since the hashed values will be different). Thus, this approach is not recommended!
  • Every user has his own salt string, which is stored in the database. This is the best approach and requires an additional field in the database for each user that contains the salt. To create a salt value, use a random generator to create a salt string. The rest of the process is the same as described above: concatenate the salt and the password and hash the result. However, there are some things to keep in mind (see the sketch after this list):
    • System.Random is a random generator, but it's not sufficient for cryptographic randomness. The better random generator is the System.Security.Cryptography.RNGCryptoServiceProvider class, which takes advantage of system entropy to create a really random value. Once you have the random bytes, use an encoding to turn the byte array into a string value (e.g. base-64 using Convert.ToBase64String(byte[])).
    • Authentication now requires retrieving an additional value from the database containing the salt. Some further tips on database access are covered below.
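
To make that concrete, here's a minimal sketch of the salting approach (the helper names and the choice of SHA-1 are mine, not from this post):

using System;
using System.Security.Cryptography;
using System.Text;

class PasswordHelper
{
 // Create a random salt using the crypto-grade RNG (not System.Random).
 public static string CreateSalt(int size)
 {
  RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
  byte[] buffer = new byte[size];
  rng.GetBytes(buffer);
  return Convert.ToBase64String(buffer);
 }

 // Hash the concatenation of salt and password; store both the salt and the resulting hash in the database.
 public static string HashPassword(string salt, string password)
 {
  byte[] data = Encoding.UTF8.GetBytes(salt + password);
  byte[] hash = SHA1.Create().ComputeHash(data);
  return Convert.ToBase64String(hash);
 }
}

At login time you fetch the user's salt from the database, recompute HashPassword(salt, enteredPassword) and compare the result with the stored hash.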

Tip 2: Active Directory authentication using forms

If you've ever connected to an Exchange 2003 OWA website that has forms-based authentication enabled, this is what's going on behind the scenes. You're presented with a web page that allows you to log in to your mailbox using your AD credentials (which is a nice way to log in if you're outside the corporate network, provided SSL is used, of course!). As far as I know, there are two ways to do this:
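
One common approach (a sketch of my own, not necessarily one of the two ways meant above) is to validate the submitted credentials by simply binding to Active Directory with them through System.DirectoryServices:

using System;
using System.DirectoryServices;
using System.Runtime.InteropServices;

// Sketch: check a user's AD credentials by attempting an LDAP bind with them.
class AdCredentialCheck
{
 public static bool IsValidLogin(string ldapPath, string domainAndUser, string password)
 {
  try
  {
   DirectoryEntry entry = new DirectoryEntry(ldapPath, domainAndUser, password);
   object native = entry.NativeObject; // forces the bind; throws if the credentials are wrong
   return true;
  }
  catch (COMException)
  {
   return false;
  }
 }
}

On a successful check you'd call FormsAuthentication.RedirectFromLoginPage as usual to issue the authentication ticket.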

Tip 3: Secure database handling

A few tips:

  • The first thing to do to secure your database is, of course, to put the connection string in a secure place (as explained earlier in the context of the aspnet_setreg.exe tool).
  • Avoid SQL injection attacks by avoiding string concatenation to build your SQL statements. For example, "SELECT * FROM Users WHERE ID=" + id is a problem: when the id string contains something like "1;DROP ..." (replace ... with a keyword and a table name, for example), the malicious code will be executed. Furthermore, by taking advantage of the "--" comment characters in SQL Server, parts of a SQL string can be ignored. For example, with "SELECT * FROM Users WHERE ID=" + id + " AND ..." (replace ... with some other condition), an id of "1 --" causes the rest of the condition to be ignored. To avoid these kinds of problems (remember: "all input is suspect"), use a SqlCommand with parameters (see the sketch after this list).
  • Use stored procedures wherever you can. They're worth the work, both to avoid security risks and to improve performance. Again, use a SqlCommand with parameters.
  • SQL injection attacks should not occur, but if they do, make sure the risks are low. What I mean is this: if 90% of the website only reads data from the database, there's no need to have write rights enabled for the account used. Never use sa as the account to log in to the database server! Don't use a write-enabled account if it's not needed. It's better to have two different connection strings, one for admins and one for generic data readers, than one string carrying a higher risk. However, keep in mind to store the connection strings securely.
  • Make sure the validateRequest attribute on the @Page directive is enabled to check for other kinds of suspicious input from users (script injection).
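
As an illustration of the parameterized approach (the table and column names are made up):

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch: a parameterized query instead of string concatenation.
class UserLookup
{
 public static string GetUserName(string dsn, int id)
 {
  using (SqlConnection conn = new SqlConnection(dsn))
  {
   SqlCommand cmd = new SqlCommand("SELECT Name FROM Users WHERE ID = @id", conn);
   cmd.Parameters.Add("@id", SqlDbType.Int).Value = id; // the value is never pasted into the SQL text

   conn.Open();
   return (string)cmd.ExecuteScalar();
  }
 }
}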

Tip 4: Use SSL whenever possible

When using forms authentication (or basic authentication), do everything you can to use SSL to encrypt all traffic between the server and the client during the login. You can do this by specifying requireSSL="true" on the <forms> tag in your web.config file. Keep in mind this only requires users to use HTTPS to log in (not for the rest of the site). If you want the whole site to require SSL, you should change the configuration at the IIS level. The installation of the SSL/TLS certificate is an IIS matter as well.
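
For reference, the relevant web.config fragment looks roughly like this (loginUrl is just a placeholder):

<configuration>
  <system.web>
    <authentication mode="Forms">
      <!-- requireSSL keeps the forms authentication cookie from being issued over plain HTTP -->
      <forms loginUrl="login.aspx" requireSSL="true" />
    </authentication>
  </system.web>
</configuration>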

Tip 5: How to work with connections in ADO.NET?

Keep it as simple as possible and don't reinvent the wheel. Microsoft is clear on this point: open a connection when you need one and close it as soon as you don't need it anymore. Don't keep connections alive (combined with SqlCommand this can have unexpected behavior since, depending on the properties used, the connection can be closed when the command completes). Of course, put the "connection close" code in the finally block of a try...catch structure to avoid connections staying alive in case of an exception. Typical code looks as follows:

SqlConnection conn = new SqlConnection(dsn);
try
{
   conn.Open();
   //use the db
}
catch (Exception ex)
{
   //catch exceptions
}
finally
{
   conn.Close();
}

or you can use C#'s "using" statement, since SqlConnection implements IDisposable, to make sure the connection is always closed:

using(SqlConnection conn = new SqlConnection(dsn))
{
   //use it
}

Tip 6: What about connection pooling?

Connection pooling is a handy ADO.NET feature that you should start using to control the number of connections to a SQL Server. The trick is pretty easy: a pool is identified by the DSN (data source name) string. The only thing you should take care of is that the DSN string is exactly the same throughout the whole application, to ensure that the same pool is used (this comes naturally when you retrieve the DSN string from the web.config file; see earlier for encrypting it). An example:

string dsn = "server=localhost;uid=someUser;pwd=S0mePwd!;database=Northwind;Min Pool Size=5;Max Pool Size=10";

This creates a pool with a minimum of 5 and a maximum of 10 concurrent connections to the database server. Using the following DSN as well will create a second pool (!!!):

string dsn = "server=localhost; uid=someUser;pwd=S0mePwd!;database=Northwind;Min Pool Size=5;Max Pool Size=10"; //one additional space somewhere, can you find it?

A handy trick is to use the performance monitor on the database server to watch the number of connections made when the first request to the app occurs (there will be at least 5 connections).

Tip 7: Impersonation on certain code levels in an application only

Sometimes the whole web application doesn't need to run in an impersonated context to do its work. That is the most common usage of impersonation and is as easy as putting an <identity impersonate="true" /> element in the right place of the web.config file (see an earlier post). However, you can do impersonation in code as well. First of all, retrieve the WindowsIdentity of the current user:

WindowsIdentity id = (User.Identity as WindowsIdentity); //retrieved from the current HttpContext User object

Then, create an impersonation context for the user by calling the Impersonate method on the id object:

WindowsImpersonationContext cntx = id.Impersonate();

All the following code will now run under the context of the logged-on user. When you're done, call the Undo method on the context:

cntx.Undo();

Typically, this code is placed in a finally block again, to ensure that impersonation always stops, even when an exception occurs.
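
Putting the pieces together, the pattern looks like this (a sketch inside a page or handler, assuming Windows authentication so that User.Identity really is a WindowsIdentity, and with System.Security.Principal imported):

WindowsIdentity id = (WindowsIdentity)User.Identity;
WindowsImpersonationContext cntx = id.Impersonate();
try
{
   // code that must run under the caller's identity goes here
}
finally
{
   cntx.Undo(); // always revert, even when an exception occurs
}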

