Friday, September 12, 2008

How to keep a file read in Windows from becoming a write

A little-known, and even less used, feature of all Windows versions from XP and forward is that they maintain a 'Last Access' property on every file. On the surface this seems neat, if not terribly useful: using this property, you can see when a file was last accessed.

But think about it. What does this mean? It means that every time you open a file for reading, Windows needs to write something somewhere on the disk! If you're in the process of enumerating, let's say, 500,000 files, that adds up to one thing: slow! Does anyone ever use that property? Not that I know of.
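If you want to see the effect yourself, a minimal C# sketch along these lines will do (the file path is just a placeholder; note also that NTFS only guarantees the last access timestamp to about an hour's accuracy and may defer flushing it to disk, so the change may not show for a file that was accessed very recently):

using System;
using System.IO;

class LastAccessDemo
{
    static void Main()
    {
        // Placeholder path - point this at any existing file.
        string path = @"C:\temp\test.txt";

        Console.WriteLine("Before: " + File.GetLastAccessTime(path));

        // A plain read - yet it dirties the file's metadata...
        File.ReadAllText(path);

        // ...so the 'Last Access' timestamp has moved.
        Console.WriteLine("After:  " + File.GetLastAccessTime(path));
    }
}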

I'm working with file-based persistent storage in my solutions, not with a database, so file access is pretty important to me. By disabling this 'feature', I sped up enumerating the file system by about a factor of 10! Generally speaking, you'll speed up any system with many file accesses by turning this feature off.

It's really simple, too. At a command prompt, type:

fsutil behavior set disablelastaccess 1 
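If you want to check the current setting before (or after) changing it, fsutil can query it as well:

fsutil behavior query disablelastaccess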

While you're at it, you might also want to do:

fsutil behavior set disable8dot3 1 

This last command disables generation of 8-dot-3 legacy file names, effectively halving the size of directories in NTFS, which must be a good thing. Beware that there might be 16-bit software out there that actually needs those 8-dot-3 names to find your files...
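Under the hood, both commands set registry values under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem. If you'd rather verify the settings from code, a small C# sketch like this should do (I'm assuming the value names NtfsDisableLastAccessUpdate and NtfsDisable8dot3NameCreation, which is what fsutil manipulates):

using System;
using Microsoft.Win32;

class FsTweakCheck
{
    const string Key = @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem";

    static void Main()
    {
        // A value of 1 means the tweak is active; 0 or missing means default behavior.
        Console.WriteLine("Last access updates disabled: " +
            Registry.GetValue(Key, "NtfsDisableLastAccessUpdate", 0));
        Console.WriteLine("8-dot-3 name creation disabled: " +
            Registry.GetValue(Key, "NtfsDisable8dot3NameCreation", 0));
    }
}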

Thursday, February 7, 2008

Book Review: Microsoft Windows Internals, Fourth Edition

Microsoft Windows Internals, Fourth Edition, by Mark E. Russinovich and David A. Solomon, Microsoft Press, LOCCN 2004115221

Many years ago, before the release of NT 3.1, I read a book entitled "Inside Windows NT" by Helen Custer. It was a great book, basically a text-book on operating system theory - as exemplified by Windows NT. It covered the theory of how to implement an operating system kernel, showing how it was done in Windows NT. It did not talk about APIs so much as about the data structures and logic behind the scenes, and the theory of the basic functions of an operating system, such as memory management and the I/O system.

As I'm now getting back into some heavy-duty C++ coding for the Windows environment, I thought this might be a good refresher for me to (re-)learn about internal structures and enable me to find the right places to implement the functionality I need.

With these expectations I was a bit disappointed by "Windows Internals, Fourth Edition". It's a very different kind of book compared to the original first edition - in fact it's not the fourth edition of "Inside Windows NT" - it's really the second or third edition of "Windows Internals". So, what kind of book is it then?

"Windows Internals" is a cross between a troubleshooting manual for very advanced system managers, a hackers memoirs, an applied users guide to sysinternals utilities and the documentation Microsoft didn't produce for Windows.

It's almost like an independent black-box investigator's report of findings after many years of peering into the internals of Windows - from the outside. Instead of describing how Windows is designed from the designer's point of view, it describes a process of external discovery based on reverse-engineering and observation. Instead of just describing how it works, the book focuses on "experiments" whereby, with the help of a bunch of very nifty utilities from sysinternals, you can "see" how it works.

I find the approach a little strange; I was expecting a more authoritative text, not an experimental guide to 'discovery'. I don't think one should have to use experimental approaches to learn about a piece of commercial software. Software is an engineering practice - and it should be described, not discovered. Finding out how Windows works should not be a research project - it should be a matter of reading documentation and backgrounders, which was what I was hoping for when purchasing the book.

Having read all 870 pages, what did I learn? I learnt that sysinternals (http://technet.microsoft.com/en-us/sysinternals/default.aspx) has some very cool utilities (which I already knew), and I learnt a bit about how they do what they do, and how to use them to inspect the state of a Windows system for troubleshooting purposes. As such, the book should really be labelled "The essential sysinternals companion", because that's what it really is. It shows you a zillion ways to use the utilities for troubleshooting - which is all well and good, and very useful in itself.

To summarize, this is not really the book to read if you want an authoritative reference on the Windows operating system, although you will learn quite a bit along the way - after all, there is quite a bit of information here. If you're a system manager and/or facing extremely complicated troubleshooting scenarios, then this book is indeed for you. Also, if you're a more practical-minded person and just want to discover the 'secrets' of Windows, you'll find all the tools here. I would have preferred that Microsoft had documented things, instead of leaving them for 'discovery' (and then hiring the people doing the discovering if they're too good at it, and then having them write a book about it - which is what happened here).

Tuesday, January 8, 2008

Lock object sharing with hashes

In web applications we frequently need to serialize access to resources in concurrent applications, since ASP.NET is inherently concurrent. A typical scenario is that we have several loosely connected objects that all apply to the same user, and we need to ensure single-threaded access during a read-read or read-modify-write cycle to get a consistent view or update.
A user is usually represented by some sort of string identifier, perhaps an e-mail address. In C#, what we want to do is something like:
lock (user)
{
    // Do something that needs single-threaded access
}
The problem is: what do we use as the lock object? C# can lock on any object, but which one do we pick? We must ensure that multiple threads will always get the same object instance for the same user, regardless of when in the application's lifetime the need arises, so in effect these objects must live for the life of the application. This can lead to massive memory consumption: assume a system with one million users - after a while we'll have to keep one million lock objects around, probably in a hash table indexed by the e-mail string. That can mean some serious memory problems.

One approach would be to clean up the table when no one is using a specific lock object, but this is complicated and fraught with its own threading problems.

After a few false starts, I came up with the following scheme which has now been tested in the wild and been found quite effective as a trade-off between memory and lock contention.

In actual fact, there is usually a rather limited number of possible concurrent actions, limited basically by the number of active threads. This number is typically 100 per processor in ASP.NET, and in most applications, even with many users, the number of actual concurrent requests at any given time is even lower. So, assuming 100 concurrent threads, and assuming that each will only acquire one user lock (our example here) at a time, we really only need at most 100 lock objects - not a million. But how do we do this?

The algorithm I've come up with is probably not new, but I've not seen it in the literature, nor when actively searching the web, so it's at least a bit unusual. Here's how it works:

1. Allocate an array with a fixed number of elements, perhaps twice the number of estimated concurrent accesses.
2. Fill the array with objects to be used as locks.
3. To acquire a lock for a given user, generate an index into the lock object array by taking a hash of the user identifier, typically with the GetHashCode() method, and then taking that modulo the number of lock objects. This is the index into the lock table; lock on the object at that index.

At best, the lock is free and you acquire it immediately.

At second best, another thread is actually holding the lock for the same user, and your thread is put on hold as it should be.

At worst, another thread is actually holding the lock, but for a different user that happens to map to the same lock when calculating the index via the hash modulo the lock array size. With a good hash algorithm and an appropriate number of locks in relation to the number of concurrent accesses, this should be a very infrequent occurrence. But even if it happens, nothing bad happens - your thread just has to wait a little longer than was absolutely necessary.

This simple algorithm requires a number of locks that is fixed in relation to the level of concurrency, instead of growing with the number of potential objects that require locking, and all at a very low cost. Sample code follows:
public class LockObjects
{
    private readonly object[] _lockObjects;

    public LockObjects(int numberOfLockObjects)
    {
        _lockObjects = GetInitialLockObjects(numberOfLockObjects);
    }

    public LockObjects()
        : this(20)
    {
    }

    private static object[] GetInitialLockObjects(int numberOfLockObjects)
    {
        // Pre-allocate the fixed pool of lock objects; they live for
        // the lifetime of this instance.
        object[] lockObjects = new object[numberOfLockObjects];
        for (int i = 0; i < lockObjects.Length; ++i)
        {
            lockObjects[i] = new object();
        }
        return lockObjects;
    }

    public virtual object GetLockObject(params string[] keys)
    {
        // Combine the hashes of all keys; unchecked so a sum overflow
        // simply wraps around instead of throwing in checked builds.
        int lockHash = 0;
        unchecked
        {
            foreach (string key in keys)
            {
                lockHash += key.ToLowerInvariant().GetHashCode();
            }
        }
        // Mask off the sign bit rather than calling Math.Abs, which
        // throws an OverflowException for int.MinValue.
        lockHash = (lockHash & 0x7FFFFFFF) % _lockObjects.Length;
        return _lockObjects[lockHash];
    }
}
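To round it off, here's a small usage sketch (the UserStore class and its method are just illustrative; the point is that a single LockObjects instance must be shared by all threads, for instance via a static field):

public static class UserStore
{
    // One shared instance for the lifetime of the application.
    private static readonly LockObjects _locks = new LockObjects(200);

    public static void UpdateUser(string email)
    {
        // All threads hashing the same e-mail get the same lock object,
        // so the read-modify-write below is serialized per user.
        lock (_locks.GetLockObject(email))
        {
            // Read, modify and write this user's files here.
        }
    }
}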