I’ve seen it again and again… a developer wants to access some restricted data over the internet from a client application, but is unwilling to require a per-user login to access it. As a result, they embed a password into the application. Then they discover that the password is readable, by one mechanism or another, once the application is on the client. The developer scratches their head and tries to figure out how to secure the password, and gets frustrated when people say ‘that doesn’t work’.
Fundamentally, you are trying to hide a secret in a client application. There is a long history of attempts to do this; it forms the basis for pretty much all forms of application protection – and it is fundamentally impossible. If everything needed to run an application is present on a system, determining the secret is only a matter of effort. The amount of effort varies, but in general it is a continual fight between the developer and the person trying to extract the secret.
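To make that concrete, consider a minimal (entirely hypothetical) example: a client application with an API password embedded as a string constant. The compiler stores the literal verbatim in the binary, and the standard strings utility will print it without any disassembly at all.

```c
#include <stdio.h>

/* Hypothetical embedded credential; the compiler stores this
 * verbatim in the binary's data section. */
static const char *API_PASSWORD = "hunter2-not-so-secret";

int main(void) {
    printf("authenticating with %s\n", API_PASSWORD);
    return 0;
}
```

Running `strings client | grep hunter2` recovers the ‘secret’ instantly; encoding or encrypting the literal only raises the effort from trivial to modest, because the decoding routine necessarily ships in the same binary.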
Mechanisms have been developed to try to make the secret ‘something you have’. One of the earlier disk-based methods was to place ‘malformed’ sectors on a floppy disk that the application needed to read. These sectors were only ‘malformed’ in that they were laid out on the disk in a way that made them difficult to read normally. The data read from them became part of the code used to execute the application.
The fix for this form of protection was to read the protected content from an original disk, put that data into a new copy of the program in place of the invalid content, and then skip or remove the code that performed the read from the drive into that location.
An extension to this protection was to encrypt the code loaded from disk and decrypt it at execution time – the encryption varied from simple byte-level XOR to fancier XOR with rotate. Typically the decryption code butted up against the decrypted code (sometimes even overlaying it), preventing you from setting a breakpoint at the first instruction following the decryption routine. Defeating this involved manually decrypting a few bytes (at the time, a pen-and-paper operation) and then letting the decryption run from the subsequent instructions. Sometimes easy, sometimes more difficult. This would typically be used in conjunction with the ‘special’ media to give a dual layer of protection.
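As a sketch of what those schemes looked like (the key and rotation here are made up for illustration; real loaders varied the details), an XOR-with-rotate routine is its own inverse, so one small loop both encrypts and decrypts:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Rotate a byte left by one bit. */
static uint8_t rol8(uint8_t v) {
    return (uint8_t)((v << 1) | (v >> 7));
}

/* XOR each byte with a rolling key, rotating the key after every
 * byte. XOR is its own inverse, so the same routine encrypts and
 * decrypts. Scheme and key are hypothetical. */
static void xor_rotate(uint8_t *buf, size_t len, uint8_t key) {
    for (size_t i = 0; i < len; i++) {
        buf[i] ^= key;
        key = rol8(key);
    }
}

int main(void) {
    uint8_t code[] = { 0x90, 0x90, 0xC3 };   /* stand-in for machine code */
    xor_rotate(code, sizeof code, 0xA5);     /* 'encrypt' */
    xor_rotate(code, sizeof code, 0xA5);     /* decrypt restores it */
    printf("%02X %02X %02X\n", code[0], code[1], code[2]);
    return 0;
}
```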
Another mechanism was the hardware dongle. An oft-loved feature of expensive software, it typically embedded some data that was necessary for the use of the application. Without the dongle, the application was useless. Some went so far as to corrupt the data the application produced if the dongle was not present – e.g. straight lines would no longer be quite straight after a save-load cycle, so the files deteriorated with each pass (I believe AutoCAD used this method).
The issue with hardware-based mechanisms is their high per-unit cost. A quick search revealed a per-unit cost of €25 for low order quantities, which would need to be added to the cost of the application. In other words, this is often not appropriate for software with a low target price.
For any of these mechanisms, if someone obtained only one part of the solution (the application without the special disk or dongle), then a well-written protection would leave the application unusable without the second part. Poorly written protections would perform a simple test for the special item’s presence and not actually make use of any of the underlying data from it – as the sketch below illustrates. In general, once someone has every item needed to run the application, all that matters after that is skill and time.
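The difference is easy to see in code. This is a hypothetical sketch (the dongle API is stubbed, not a real library): the weak version is a single branch a cracker can patch to always succeed, while the stronger version feeds the dongle’s bytes in as key material, so there is no one test to remove.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical dongle API, stubbed for illustration. */
static bool dongle_present(void) { return true; }
static size_t dongle_read(uint8_t *out, size_t len) {
    static const uint8_t blob[] = { 0xDE, 0xAD, 0xBE, 0xEF };
    size_t n = len < sizeof blob ? len : sizeof blob;
    memcpy(out, blob, n);
    return n;
}

/* Weak: one branch that an attacker can patch to always succeed. */
static bool protection_check(void) {
    return dongle_present();
}

/* Stronger: the dongle's data *is* the key that decrypts code or
 * data. There is no single branch to patch; without the real
 * bytes, decryption yields garbage. */
static size_t load_key(uint8_t *key, size_t len) {
    return dongle_read(key, len);
}

int main(void) {
    uint8_t key[4];
    if (!protection_check() || load_key(key, sizeof key) != sizeof key)
        return 1;
    return 0;
}
```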
Special media, encryption, dongles, packers, obfuscation and anti-debugging tricks are among the tools that have been used to secure applications.
What has this got to do with the opening paragraph? Well, quite a bit, actually. The developer needs to store some kind of secret in the application. This secret can be anything, but in general it is some form of key for gaining access to some resource. Nowadays, the application is not going to be shipped with any physical media – after all, this is the 21st century, and physical media is archaic. That tends to rule out special media and dongles from the mix.
This leaves encryption, packers, obfuscation and other anti-debugging tricks. There are some very good solutions out there for packing, encryption and anti-debugging. A quick Google for ‘executable packer anti-debugging’ yielded some interesting results. It’s a fun and interesting area, where the developer tries to outwit the villainous cracker. Some of the solutions are commercial – adding to the cost of application development, and reducing the ability to debug the application once deployed in the field. These are trade-offs the developer has to weigh when deciding how to protect their application.
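One of the oldest tricks in that family, sketched here for Linux: a process can only be traced by one debugger at a time, so if ptrace(PTRACE_TRACEME) fails, something is already attached. It is, of course, just another branch for the cracker to patch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>

/* Classic Linux anti-debugging check: PTRACE_TRACEME fails if a
 * debugger is already attached to this process. Trivially defeated
 * by patching the branch, which is exactly the arms race described
 * above. */
int main(void) {
    if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
        fprintf(stderr, "debugger detected\n");
        return EXIT_FAILURE;
    }
    puts("running normally");
    return EXIT_SUCCESS;
}
```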
You have to do the math on it. If the cost of developing and implementing the protection exceeds the value you have placed on the application, then you simply cannot afford it – there is no point spending time, effort and money on a solution that will cost more than you will ever make.
The big takeaway from this is that if you have a secret you want to keep safe, don’t put it in the application – all that accomplishes is keeping it temporarily out of view. A sufficiently wily hacker will be able to get it, and if the secret is a password to one of your important online accounts, then you should think of some other way to mediate access to the resource.
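One way to do that mediation, sketched below with libcurl (the URL and token handling are hypothetical, a sketch rather than a prescription): keep the secret on a server you control and have the client ask that server to perform the privileged operation, so the credential never ships to the client at all.

```c
#include <stdio.h>
#include <curl/curl.h>

/* Instead of embedding the data-store password, the client calls a
 * server we control; the server holds the secret and performs the
 * privileged request. The endpoint and header are hypothetical. */
int fetch_via_broker(const char *user_token) {
    CURL *curl = curl_easy_init();
    if (!curl) return -1;

    struct curl_slist *hdrs = NULL;
    char auth[256];
    snprintf(auth, sizeof auth, "Authorization: Bearer %s", user_token);
    hdrs = curl_slist_append(hdrs, auth);

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/data");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    CURLcode rc = curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : -1;
}

int main(void) {
    return fetch_via_broker("example-user-token");
}
```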