Tuesday, August 23, 2011

Why you should install programs to the default location

Since version 1.7, AxCrypt does not offer the user an option to select the installation directory during the installation process.

Some like to change the installation directory, typically to D: or E:, instead of the standard location on C:, which on English versions of Windows is C:\Program Files\ or C:\Program Files (x86)\. This is no longer directly possible from AxCrypt's installation graphical user interface, and sometimes I get asked why.

The main reason is to avoid trouble, and to minimize user options where I as a developer believe I can make a better informed choice. AxCrypt is built around many such decisions based on that premise: for example, we choose the algorithms to use instead of presenting you with a bewildering array of options. This is simply because I as an encryption expert believe that I can make this choice better in at least 99.9% of the cases, and thus spare all those users a strange question they don't really know how to, or even want to, answer.

With several million installations of AxCrypt, just about anything that can possibly go wrong has gone wrong at least once or twice. More than twice I've had to help users with trouble caused by not understanding the interaction between Windows, the registry, fixed, removable and network drives, and the AxCrypt installation. AxCrypt has been installed to network drives, to remote VPN-mounted drives, to USB drives, to CDs and to just about anything you can imagine. Often it works, but sometimes it does not.

With AxCrypt 1.7 and the upgrade to Windows Installer technology, a major motivation was increased robustness. Part of this is to minimize the risk of a user mistakenly making a bad choice, and the safest and easiest way to do this is to make the choice automatically. Thus the option to select an installation directory was removed from the installer graphical user interface. It's still there - but you need to know a bit about Windows Installer in order to force it to do your bidding. The idea here is that any user skilled and knowledgeable enough to do this is also skilled enough to make that decision with little or no risk of mistake. It's also very clear that if something does go wrong, it's something that needs to be fixed by the user, and it does not wind up as an error report about AxCrypt.
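For the record, this is how Windows Installer generally exposes that choice: public properties can be set on the msiexec command line at install time. The property name and file name below are assumptions for illustration - the actual directory property in the AxCrypt package may well differ - so treat this as a hedged sketch rather than a supported option:

msiexec /i AxCrypt-Setup.msi INSTALLDIR="D:\Tools\AxCrypt"

If the package uses a different directory property, setting INSTALLDIR simply has no effect, which in itself illustrates why this is left to users who know their way around Windows Installer.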

The thing that cinched the decision to remove the installation directory option was that I could not, try as I might, think of a single valid functional reason for changing it from the system default! Aesthetic reasons, arguably yes. Functional, no. AxCrypt is tiny, has no performance impact on the drive it is installed to, and does not produce any growing data there. When we do change the installation directory, we also break some assumptions that are made by other software. We also become responsible for ensuring the right file system permissions; by installing to a directory that non-administrative users can write to, we might for example open a vector for malware infection. Please note that AxCrypt, due to Windows design limitations, requires administrator elevation to install anyway. Other assumptions we break are the locations of 32-bit vs. 64-bit software in the various virtualized environments offered.

So, we wind up with a situation where I can find no case where it's bad to install to the system default location, but several where it's bad to install to a different location. By removing one decision we make the installation easier for the user, and we also make it safer and more robust. It's an easy call, I think.

Finally, if you can provide me with a valid functional reason for not installing AxCrypt to the system default location, please do so and I will try to accommodate that reason in the best way I can think of.

Friday, August 19, 2011

Concerning false positive reports about AxCrypt from antivirus software

From time to time I get user reports about warnings from antivirus software concerning either the installer or one or more of the components of AxCrypt.

This causes great trouble both for me and for the user. The user often winds up with inoperable software, and I get a lot of extra work defending myself against unfounded allegations by software companies that take no responsibility whatsoever. They will not guarantee anything about the 'security' they provide, and they will not in any way assume responsibility for harm caused by falsely flagging clean software as malicious. In a normal legal context this would be called slander, and be cause for legal action.

Some facts about AxCrypt and AxCrypt distributions: AxCrypt is always built completely from source, and we do not statically or dynamically link to any third-party code except those libraries that are part of the Visual Studio development environment and come directly from Microsoft.

Distributions are not built on a developer PC; they are built on a special-purpose build server that does nothing else. No software is installed there other than what is required to build our various programs. This server sits behind double firewalls, and is never used for any general purpose.

As part of the automated build process, each executable is digitally signed with an Authenticode certificate issued to 'Axantum Software AB'. The issuer of this certificate does certify that such an entity exists and that it is in good standing; I have provided them with proof of the company's registration, etc. This signing process then ensures that any bits distributed with that signature are traceable back to me and my company, and we would thus potentially be legally accountable for any malware intentionally placed there.

To sum it up: there is no infection in a distribution from me which is digitally signed with my Authenticode certificate in the name 'Axantum Software AB'.

It is a continuing effort to defend oneself as an independent developer against the so-called anti-virus companies' unfounded allegations.

It is beyond belief that a serious anti-virus vendor still in 2011 will flag a properly digitally signed executable as malicious.

If I had the financial resources I would take strong legal action, since this causes harm to my good standing, and that of my programs, that is sometimes hard or impossible to repair.

If you are in doubt, please check that you have the properly digitally signed versions of both the installer and the executable components; instructions on how to do this are found here.
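In short - and the file name below is just a placeholder for whatever you actually downloaded - you can right-click the file in Explorer, select Properties and look at the Digital Signatures tab, or, if you happen to have the Windows SDK's signtool.exe available, verify it from a command prompt:

signtool verify /pa AxCrypt-Setup.exe

Either way, the signature should verify as valid and be issued to 'Axantum Software AB'.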

Please help the community by reporting your findings as a false positive to your anti-virus vendor. Although the vendors emphatically deny this, they do share signatures (or 'borrow' from each other). This is clearly evidenced by the fact that these false-positive situations usually come in swarms, where I first get a few reports concerning one vendor, and then most of the other vendors follow suit. That can't be a coincidence...

Monday, September 20, 2010

About the ASP.NET Padding Oracle Attack

About the Padding Oracle Attack


You may have read about the Padding Oracle Attack, risking exposure of sensitive information in millions of ASP.NET sites.

This site is not one of them in any real sense, and never was.

The ASP.NET Padding Oracle Attack exploits a vulnerability published as early as 2002 by Serge Vaudenay in a paper entitled "Security Flaws Induced by CBC Padding - Applications to SSL, IPSEC, WTLS...". As usual, it's amazing how long it takes for these things to come to the attention of the large vendors, such as Microsoft.

This attack is in no way specific to ASP.NET - just about every major web platform is likely to be potentially vulnerable. For the technical details, please read the paper by Vaudenay as well as the more recent paper entitled "Practical Padding Oracle Attacks" by Juliano Rizzo and Thai Duong. Here I'll just try to explain the factors that cause the vulnerability, what the consequences may be, and why this site never was vulnerable in any real sense.

Padding is used with a block cipher to make the clear text about to be encrypted an even multiple of the block length. In other words, if the encryption algorithm is designed such that it encrypts 16 bytes at a time, and your clear text is not a multiple of 16 bytes long, we need to add a few dummy bytes at the end to make it an even multiple of 16 in this example. These 'dummy bytes' are called padding.
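As a concrete illustration, here is a small sketch in VB of the common PKCS#7-style padding scheme (a generic example, not AxCrypt's or ASP.NET's exact code): a 13-byte clear text padded to a 16-byte block gets three bytes appended, each holding the value 3.

Dim clearText() As Byte = System.Text.Encoding.ASCII.GetBytes("Attack at ten")   ' 13 bytes
Dim blockSize As Integer = 16
Dim padLength As Integer = blockSize - (clearText.Length Mod blockSize)          ' 3 in this case
Dim padded(clearText.Length + padLength - 1) As Byte
Array.Copy(clearText, padded, clearText.Length)
For i As Integer = clearText.Length To padded.Length - 1
    padded(i) = CByte(padLength)   ' each padding byte holds the pad length, so it is self-describing
Next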

Most encryption schemes use padding that follows a pattern, so that the decryption logic can recognize and remove it. Since such a padding scheme is self-verifying, the decryption program can determine whether the padding is correct or not - and also give a specific error if the padding is wrong.

An attack requires access to an application that uses a block cipher, that actually knows the decryption key, and that an attacker can 'ask' whether a given encrypted text contains a padding error or not.

The idea is to send encrypted text to the application, and then determine whether it specifically has a padding error after decryption or not. Obviously, if an attacker sends in bad encrypted text, an error is likely to occur, but the attack requires that the attacker can distinguish the very specific 'padding error' from the other errors reported.

What's a Padding Oracle?


There are basically two ways an attacker can determine whether a padding error has occurred as the result of the manipulated encrypted text. The easy way is if the application actually says exactly this; with ASP.NET you can for example get the quite clear message "CryptographicException: Padding is invalid and cannot be removed". It does not get any clearer. The harder way is if the application shows different timing characteristics between reporting this error and other possible errors. This is a much harder attack, and likely to take significantly longer, since the timing is also determined by many other factors that are likely to be unknown and uncontrollable by the attacker.

The way to defend against the attack is then to A) ensure that no specific message or error code is returned when a padding error occurs, and B) ensure that timing cannot be used by an attacker as an indirect distinguisher.

A Padding Oracle is something we can ask about a given encrypted text and receive an answer stating either 'Yes, the padding is correct' or 'No, the padding is incorrect'. The trick is to ensure our application is not a Padding Oracle!
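To make this concrete, here is a hypothetical and deliberately naive helper, sketched in VB against the .NET AesManaged class; the name and structure are made up for illustration. If an attacker can observe its answer - directly, through a distinct error message, or through timing - the application has become a padding oracle:

Imports System.Security.Cryptography

Function IsPaddingValid(cipherText() As Byte, key() As Byte, iv() As Byte) As Boolean
    Try
        Using aes As New AesManaged()
            aes.Key = key
            aes.IV = iv
            aes.Padding = PaddingMode.PKCS7
            ' Decryption throws if the recovered padding bytes do not follow the pattern.
            aes.CreateDecryptor().TransformFinalBlock(cipherText, 0, cipherText.Length)
        End Using
        Return True
    Catch ex As CryptographicException
        ' Answering differently here is exactly the distinguisher the attack needs.
        Return False
    End Try
End Function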

The consequences of an attack and why it's so serious for ASP.NET

What's the worst that can happen? Well, anything that is protected by the encryption key used to encrypt the data is potentially vulnerable to both inspection and undetected modification by the attacker.

In the case of ASP.NET, this usually means that the 'machine key' is vulnerable. This is the ASP.NET machine key used to encrypt ViewState, cookies etc.; it's not the Windows machine key. In the case of this site, we generate a new key every time the site is started, so even a successful attack has a very short time of validity.

Gaining access to the ASP.NET machine key typically means being able to impersonate a logged-on user, and possibly gain access to files and other information available to that user. In the case of ASP.NET 3.5 SP1 and later, it means being able to access all files accessible to the web application via a virtual path. In practice, the attack is feasible with only a few thousand tries on a typical web site.

The problem with ASP.NET is that a security researcher found a pretty much universal 'Padding Oracle' that is almost entirely independent of the application in question. It uses the 'WebResource.axd' handler as an attack vector. This handler seems to have the bad taste to respond with 404 Not Found when the encoded resource has correct padding but is otherwise wrong, and with 500 Server Error when the encoded resource has incorrect padding. There's your padding oracle.

This is pretty bad, so we certainly should take this seriously.

The status for www.axantum.com


The Xecrets on-line password storage has never been vulnerable to this attack for the simple reason that we don't know the encryption key users use, so there's no possibility that our application can be used as a padding oracle for the purpose of breaching the Xecrets password encryption.

However, the Xecrets site as such does use ASP.NET and could theoretically be used as a padding oracle, in the sense that if it should fall to such an attack it would be possible to act as an administrator of the application (not the system). This would still not enable anyone to access stored Xecrets, because the system does not know the encryption key for those files. There is no sensitive information available that is protected by the ASP.NET machine key. It could in theory enable someone to get free access to the Xecrets service though!

Also, because we create a new machine key every time we restart or recycle the application, even a successful attack would only be valid for a rather short time. Then again, there are rumours that a follow-up to the attack could lead to code injection.

The Xecrets site uses custom handling of both server errors and not-found errors, but it's still probable that it was vulnerable to the WebResource.axd attack. The Xecrets site has from the start employed a number of strategies to give away as little information as possible and reasonable in the face of errors, and has thus always conformed to the first criterion for avoiding the vulnerability - it returns the same message and page regardless of what kind of error manipulated encrypted text sent to the site causes.

The problem here is that Microsoft has once again failed to follow that maxim, and also failed to follow general good cryptology practice by confusing encryption with authentication. Encrypted data should always be verified for authenticity before use, for example by employing a Message Authentication Code or a digital signature. All encryption from Axantum uses the well-known 'Encrypt-then-HMAC' construction or other mechanisms to ensure the authenticity of encrypted data. If ASP.NET had done the same, this would never have happened.
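As a sketch of the principle - illustrative VB code with an assumed message layout (32-byte HMAC-SHA256 tag, then 16-byte IV, then cipher text), not the actual Axantum implementation - the HMAC is verified before any decryption is attempted, so a manipulated message never even reaches the padding check:

Imports System.Linq
Imports System.Security.Cryptography

Function OpenAuthenticated(sealed() As Byte, encKey() As Byte, macKey() As Byte) As Byte()
    Dim tag() As Byte = sealed.Take(32).ToArray()
    Dim ivAndCipher() As Byte = sealed.Skip(32).ToArray()
    Using hmac As New HMACSHA256(macKey)
        ' A production implementation should compare the tags in constant time.
        If Not hmac.ComputeHash(ivAndCipher).SequenceEqual(tag) Then
            Throw New CryptographicException("Message is not authentic - refusing to decrypt.")
        End If
    End Using
    Using aes As New AesManaged()
        aes.Key = encKey
        aes.IV = ivAndCipher.Take(16).ToArray()
        aes.Padding = PaddingMode.PKCS7
        Dim cipher() As Byte = ivAndCipher.Skip(16).ToArray()
        Return aes.CreateDecryptor().TransformFinalBlock(cipher, 0, cipher.Length)
    End Using
End Function

Since the authenticity check fails for any manipulated message, an attacker never gets to learn whether the padding would have been valid or not, and the oracle disappears.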

Once again it is shown that following established security and encryption practices will mitigate the situation even in the face of future attacks that were impossible to know of at the original time of construction. It is also shown that even today, it can take up to 8 years(!) for billion-dollar companies to react to a published threat affecting some of the world's most widely deployed platforms.

As of today, the Xecrets site is also updated to avoid even the ASP.NET Padding Oracle attack via WebResource.axd - or any other similar vector for that matter.

Saturday, December 5, 2009

Password Expiration is a Meaningless Ritual

There are many examples throughout history where a once meaningful rule over time outlives its original usefulness and becomes meaningless ritual. Password changing policies in a modern network of independent computers, like a typical corporate network, are such an example today.

A password changing policy is that annoyance you are faced with, typically every 3, 6 or 12 months, when you get a notification stating that your password is about to expire and you have to change it.

Now, why is that a meaningless ritual? Because the original justification no longer applies. This practice originated in a world of time-shared central computers - your IBM mainframe or VAX/VMS mini(!) computer. You connected to this central beast using a fairly dumb synchronous or asynchronous terminal. The distinctive feature of these terminals was that they did not load and execute arbitrary code. They just displayed information as it was sent to them. To gain access to a system or an application, you started this application on the central computer, and it then asked you for your credentials, i.e. user name and password. If they were OK, it let you access the system. It was all fairly similar to the DOS box we have in today's Windows computers.

In those days of central IT departments and a limited number of terminals, it was common practice that if you went on vacation, or had to take sick leave, you'd let your colleague use your login information to help complete the tasks that needed completing. This of course led to a situation over time where you essentially lost control over who actually had access to your logon credentials and could use your account. So, to minimize the effect of this, IT departments invented password aging and expiration, forcing you to change the password every now and then. This actually had an effect, because if someone with bad intentions had gained access to your password, it now became worthless (unless of course you did as most people did then and still do - use a consistent theme for your passwords, since you can't be bothered to invent and remember a really new one every time).

So, back to why this practice now is a meaningless ritual. Because the password is no longer limited to giving access to a central system via a non-programmable terminal. Today, the password typically gives you the right to install and execute arbitrary code on the actual computer used to access the systems in question! Anyone with any kind of security training knows that if someone once has had access to a computer with enough privileges to run and install software, that computer is forever potentially compromised until it is reinstalled from original operating system media.

Does changing the password actually enable you to regain control over your system? Is that the recommended practice if you've had a virus or other malware on your computer - change the password? Of course not! It's a meaningless gesture that changes nothing. Your system will remain potentially compromised until you reinstall the original software from scratch.

So, if you're an IT department manager, why would you want to implement a password expiration policy? The only reason I can think of is that it feels good, and that it's the way we've always done it. It doesn't actually improve the security of your network one single bit. Not at all. It does annoy the users, and gives you a certain sense of power of course! That's always something.

What should you do instead, provided you're constrained to passwords?


  • Set up a password complexity policy that is tough enough that a dictionary attack is unlikely to succeed. Go for length rather than requiring special characters etc.; 15 characters or more is probably a good idea.
  • Set up a password change policy to the effect that the password never expires and cannot be changed by the user - yes, the opposite of what is probably the most common policy today!
  • Best of all is to generate passwords for your users - yes, you select them! Use a password generator that produces passwords that are not just random collections of characters, but rather combinations of characters that are possible to remember (see the sketch after this list). Give the new user the password on a piece of paper, and keep no copy for yourself.
  • Explain to the new user that this is the password, and that it's OK to keep the paper in the wallet for a few weeks until it sticks in memory. In return for this rather tough password complexity, the user will never need to remember another password while employed by this company. That's a fair tradeoff!
  • Also explain that this password may not be re-used at any other location, and that it's a breach of company IT security policy to do so. The password is in effect company confidential and privileged information that may not be disclosed to any third party.
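Here is a hypothetical sketch in VB of a generator along those lines: alternating consonants and vowels to make the result pronounceable, a couple of digits at the end, and a cryptographically strong random source. The exact scheme and names are made up for illustration only.

Imports System.Security.Cryptography

Module PasswordGenerator
    Private ReadOnly Consonants As String = "bdfghjklmnprstv"
    Private ReadOnly Vowels As String = "aeiou"

    Function NextIndex(rng As RandomNumberGenerator, count As Integer) As Integer
        ' Rejection sampling avoids the modulo bias discussed in the shuffle post below.
        Dim buffer(3) As Byte
        Dim limit As UInteger = UInteger.MaxValue - (UInteger.MaxValue Mod CUInt(count))
        Do
            rng.GetBytes(buffer)
            Dim value As UInteger = BitConverter.ToUInt32(buffer, 0)
            If value < limit Then Return CInt(value Mod CUInt(count))
        Loop
    End Function

    Function GeneratePassword(syllables As Integer) As String
        Using rng As RandomNumberGenerator = RandomNumberGenerator.Create()
            Dim sb As New System.Text.StringBuilder()
            For i As Integer = 1 To syllables
                sb.Append(Consonants(NextIndex(rng, Consonants.Length)))
                sb.Append(Vowels(NextIndex(rng, Vowels.Length)))
            Next
            ' Two digits at the end add a little extra entropy.
            sb.Append(NextIndex(rng, 10)).Append(NextIndex(rng, 10))
            Return sb.ToString()
        End Using
    End Function
End Module

With seven syllables, GeneratePassword(7) gives a 16-character password of the general shape 'bakodifumelusa42' - long enough to resist dictionary attacks, but pronounceable enough to memorize in a couple of weeks.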

Now, if you get into the situation that the password is considered compromised - which will most likely be because of a malware infestation in your corporate network - it's fairly obvious that you have to both clean all the systems and change all the possibly compromised passwords. But only then! And the reverse applies too: if you suspect that a password is compromised, you should consider all systems where this user has logged on as compromised and candidates for reinstallation.

So, let's start to modernize our policies and actually make them mean something instead of going through old and meaningless rituals!

Update your password policies today!

Sunday, March 1, 2009

How not to shuffle a deck of cards with LINQ

I’m an avid reader of MSDN Magazine, and seldom find any errors. However, in Ken Getz's article “The LINQ Enumerable Class, Part 1” in the July 2008 issue, I found a rather glaring error that needs correction. I sent the following text to Ken, but unfortunately never got a response. Hopefully someone will see this blog post, and we'll not be seeing the error illustrated here in production code.

The following piece of code intended to solve the classic shuffle problem is very wrong:

Dim rnd As New System.Random()
Dim numbers = Enumerable.Range(1, 100).OrderBy(Function() rnd.Next)


The error will manifest by making some shuffles more or less likely than others. It is not an unbiased shuffle.

The problem lies in the fact that a list of 100 random numbers, independently chosen, is used to produce a random order of the numbers 1 to 100.

If this code is used as a template for a simulation, the results will be skewed, because not all outcomes of the shuffle are equally likely. If the code is used (with appropriate substitution to a strong pseudo random number generator) for gaming software, either the players or the casino will get better odds than expected.

This is rather serious, as code snippets from MSDN Magazine are likely to be used in many applications.

Why is the code wrong?

Because, when shuffling N numbers into random order, there are N! possible shuffles. But when picking N random numbers independently from a set of M numbers, there are M**N possible outcomes, due to the possibility of the same number being drawn more than once.

For there to be any possibility of this resulting in all shuffles being equally likely, M**N must be evenly divisible by N!. But this is not possible, because in this particular case M is prime! System.Random.Next() will return a value >= 0 and < Int32.MaxValue, so there are Int32.MaxValue possible outcomes, which is our M in this case - and Int32.MaxValue, 2**31-1 or 2,147,483,647, is a prime number. (For a smaller example of the same effect, try distributing the 4**3 = 64 equally likely outcomes of three draws from four values over the 3! = 6 possible orders of three items; 64 is not divisible by 6, so some orders must come up more often than others.)

This is a variation of a classic implementation error of the shuffle algorithm, and I'm afraid that we'll have to stick with the Fisher-Yates shuffle a while longer (a sketch follows below). Changing the code to use for example Random.NextDouble() does not remove the problem, it just makes it a bit harder to see. As long as the number of possible outcomes of the random number sequence is larger than the number of possible shuffles, the problem is very likely to be there, although the proof will differ from case to case.
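For reference, here is a minimal Fisher-Yates sketch in VB; the helper name is mine, and for gaming use System.Random should of course be replaced with a cryptographically strong generator and an unbiased index draw:

Sub Shuffle(Of T)(items As IList(Of T), rnd As Random)
    ' Walk backwards through the list, swapping each position with a
    ' randomly chosen position at or below it.
    For i As Integer = items.Count - 1 To 1 Step -1
        Dim j As Integer = rnd.Next(i + 1)   ' 0 <= j <= i; only as good as the generator, see below
        Dim temp As T = items(i)
        items(i) = items(j)
        items(j) = temp
    Next
End Sub

' Usage, shuffling the numbers 1 to 100 in place:
Dim rnd As New System.Random()
Dim numbers As List(Of Integer) = Enumerable.Range(1, 100).ToList()
Shuffle(numbers, rnd)

With unbiased draws there are exactly N! equally likely execution paths, one per possible shuffle, so the divisibility problem above never arises.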

There are many more subtle pitfalls in doing a proper shuffle, using the modulo function to reduce integer valued random number generator outputs or using multiplication and rounding to scale a floating point valued RNG just being two of the more well-known.

By the way, the actual implementation of System.Random in the .NET Framework is quite questionable in this regard as well. It will not return an unbiased set of random numbers in some of the overloads, and the Random.NextDouble() implementation will in fact only return the same number of possible outcomes as the System.Next(), because it just scales System.Next() with 1.0/Int32.MaxValue.

Friday, September 12, 2008

How to make a file read in Windows not become a write

A little known, and even less used, feature of all Windows versions from XP and forward is that they support a property called 'Last Access' on all files. On the surface, this seems neat, if not so useful. You can see whenever a file was last accessed using this property.

But think about it. What does this mean? It means that every time you open a file for reading, Windows needs to write something somewhere on the disk! If you're in the process of enumerating, let's say, 500,000 files, this equals slow! Does anyone ever use that property? Not that I know of.

I'm working with file-based persistent storage in my solutions, not with a database, so file access is pretty important to me. By disabling this 'feature', I sped up enumeration of the file system by about a factor of 10! Generally speaking, you'll speed up any system with many file accesses by turning this feature off.

It's really simple too. At a DOS prompt (it needs to be run with administrator privileges), write:

fsutil behavior set disablelastaccess 1 
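If you want to see the current setting first, fsutil also has a query form - at least on the Windows versions I have tried:

fsutil behavior query disablelastaccess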

While you're at it, you might also want to do:

fsutil behavior set disable8dot3 1 

This last command disables generation of 8-dot-3 legacy file names, effectively halving the size of directories in NTFS, which must be a good thing. Beware that there might be 16-bit software out there that actually needs those 8-dot-3 names to find your files...

Thursday, February 7, 2008

Book Review: Microsoft Windows Internals, Fourth Edition

Microsoft Windows Internals, Fourth Edition, by Mark E. Russinovich and David A. Solomon, Microsoft Press, LOCCN 2004115221

Many years ago, before the release of NT 3.1, I read a book entitled "Inside Windows NT" by Helen Custer. It was a great book, basically a textbook on operating system theory - as exemplified by Windows NT. It covered the theory of how to implement an operating system kernel, showing how it was done in Windows NT. It did not talk about APIs so much as about the data structures and logic behind the scenes, and the theory of the basic functions of an operating system such as memory management and the I/O system.

As I'm now getting back into some heavy-duty C++ coding for the Windows environment, I thought this might be a good refresher for me to (re-)learn about internal structures and enable me to find the right places to implement the functionality I need.

With these expectations I was a bit disappointed by "Windows Internals, Fourth Edition". It's a very different kind of book compared to the original first edition - in fact it's not the fourth edition of "Inside Windows NT" - it's really the second or third edition of "Windows Internals". So, what kind of book is it then?

"Windows Internals" is a cross between a troubleshooting manual for very advanced system managers, a hackers memoirs, an applied users guide to sysinternals utilities and the documentation Microsoft didn't produce for Windows.

It's almost like an independent black-box investigator's report of findings after many years of peering into the internals of Windows - from the outside. Instead of describing how Windows is designed from the designers' point of view, it describes a process of external discovery based on reverse engineering and observation. Instead of just describing how it works, the book focuses on "experiments" whereby, with the help of a bunch of very nifty utilities from sysinternals, you can "see" how it works.

I find the approach a little strange; I was expecting a more authoritative text, not an experimental guide to 'discovery'. I don't think one should use experimental approaches to learning about a piece of commercial software. Software is an engineering practice - it should be described, not discovered. It should not be a research project to find out how Windows works - it should be a matter of reading documentation and backgrounders, which was what I was hoping for when purchasing the book.

Having read all 870 pages, what did I learn? I learnt that sysinternals (http://technet.microsoft.com/en-us/sysinternals/default.aspx) has some very cool utilities (which I already knew), and I learnt a bit about how they do what they do, and how to use them to inspect the state of a Windows system for troubleshooting purposes. As such, it should really be labelled "The essential sysinternals companion", because that's what it really is. It shows you a zillion ways to use the utilities for troubleshooting, which is all well and good as far as it goes, and very useful in itself.

To summarize, this is not really the book to read if you want an authoritative reference about the Windows operating system, although you will learn quite a bit along the way - after all, there is quite a bit of information here. If you're a system manager and/or facing extremely complicated troubleshooting scenarios, then this book is indeed for you. Also, if you're a more practically minded person and just want to discover the 'secrets' of Windows, you'll find all the tools here. I would have preferred that Microsoft documented things, instead of leaving it for 'discovery' (and then hiring the people doing the discovering if they're too good at it, and then making them write a book about it - which is what happened here).