Back to F.E.A.R. 1.03 problems

Finished Splinter Cell. Way too much fun to be legal. I decided to return to F.E.A.R. and the 1.03 update. I reinstalled the game, applied the correct patch, and still could not run the game. It took about an hour to find a solution to the problem. A post on the VU Games Community Forums has, down at the bottom, a link to SecuROM containing a replacement binary for F.E.A.R., and with that installed the game works correctly again. For those of you who care, this is just to get the game working; you still need a legal copy.

and then the new update breaks the application

I download the European version of the patch. Bleugh, another 200MB download. I apply the patch, and now the game no longer works: it claims the game DVD is incorrect. Piece of s**t. There’s no way to unpatch the program, so I uninstall it in order to reinstall it. I forgot to have the game manual handy, so cue the hunt for the install key.
Could it be that having Splinter Cell and F.E.A.R. installed on the computer at the same time is causing problems? Or was it the wrong patch? The game worked before I applied it.
Annoying. I’ll just have to stick to playing Splinter Cell until I’ve finished it.

220MB download later – incompatible auto-update

I bought F.E.A.R. last week. It informed me that there was an update to the game to bring it up to snuff. It took about 25 minutes to download, and then it started to install. It then displayed the following:

This Update is only compatible with the English (United States) version of F.E.A.R. Please use the correct Update for the installed version of the game.

This was the auto-update tool that came with the application. What a waste of my bandwidth!
Aaaaaaargh!

I had a nightmare last night

It started out quite simply. I was with a few friends in an internet café, just shooting the breeze, when I noticed a perceptible shiver run through everyone there. When I asked what was going on, nobody was talking. Finally I convinced one of my friends to tell me, and he informed me that one of the folks from the data retention section of the Gardaí was there to install the recording software for the shop.
This was on foot of the new legislation that had been introduced requiring the storage of all internet communications for an arbitrary period. Every bit was being recorded, just in case it needed to be checked at a later time for terrorist activity.
The nightmare took a strange turn when I examined the data-gathering software. It was performing a simple dump of everything that passed through. Because of the vast quantity of data, nothing was being done to ensure that it could not be tampered with by anyone who had access to it. Later, one of my friends found himself in court facing a criminal charge of conspiracy to commit murder, based on the content of one of the logs that had been recorded.
It’s scary, but it could happen. The question becomes: how do we ensure the integrity of the recorded data? If you only wanted to prevent accidental tampering, some form of checksum on individual blocks of data would suffice; a malicious tamperer, however, could simply recompute the checksums of the blocks they altered to avoid detection. Given the quantity of information being gathered, you could chain the checksums instead: initialize the first checksum with some random piece of information, checksum the first block, then seed each subsequent checksum from the content of the previous one. The same principle is used in various encryption systems (Cipher Block Chaining). Anyone who wants to tamper with the data in-stream then has to recompute every checksum from the point of alteration to the end of the recording.
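A minimal sketch of the chaining idea in C (the checksum function here is a toy stand-in for whatever real digest such a system would use):

    #include <stddef.h>
    #include <stdint.h>

    /* Toy checksum, standing in for a real digest function.
       The seed parameter is what makes chaining possible. */
    static uint32_t checksum(uint32_t seed, const uint8_t *data, size_t len)
    {
        uint32_t sum = seed;
        for (size_t i = 0; i < len; i++)
            sum = (sum << 5) + sum + data[i];
        return sum;
    }

    /* Seed the first checksum with a random value, then seed each
       subsequent checksum from the previous one. Altering block k
       forces recomputing checksums k through nblocks-1. */
    void chain_checksums(const uint8_t *blocks, size_t nblocks,
                         size_t blocksize, uint32_t random_init,
                         uint32_t *out)
    {
        uint32_t prev = random_init;
        for (size_t k = 0; k < nblocks; k++) {
            prev = checksum(prev, blocks + k * blocksize, blocksize);
            out[k] = prev;
        }
    }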
Since you simultaneously have a program continually writing new blocks of information to the storage device, you would need either to (a) insinuate a program that alters the checksums as they are written to the device, or (b) interfere with the recording program so that it picks up your new checksum just prior to the next write to the device, thereby having it perform the updating for you.
Neither technique is impossible; in fact, the first is downright trivial. The only way to defeat this sort of tampering is to ensure that the recording device is isolated in some way from the data it is recording.
For this purpose, it would need to be a specially assembled recording device with two fail-hot network interfaces as its only means of communication with the outside world. A fail-hot network interface pair is one that, when power is removed, simply keeps the network traffic passing through without interruption.
Secondly, it would just record the data; it would have no interpretation capabilities. This removes any chance of it being subverted through maliciously formed network packets.
The box should be tamper-evident. That way, any effort to extract the data through physical manipulation of the recording device would be easily noticed, rendering the recorded data inadmissible in court. Tampering with the device would be a criminal offence.
The devices would need to be regularly inspected, with new ones hot-swapped for old so that the recording could carry on uninterrupted.

You don’t need Administrator access for that

I encountered a real dunderhead of a program. It claims to be completely NT, 2K and XP happy, yet it doesn’t tell you that it needs administrator access because it creates its temp files in C:\, yes, the root of the C drive. There is a perfectly good API available for making good, clean temp files: it’s called GetTempFileName. As a bonus there’s GetTempPath, which gets you a directory for creating temp files, and this directory stands a really good chance of being user-isolated (being %USERPROFILE%\Local Settings\Temp on most NT-based OSes). But no, you go and ruin my perfectly working ordinary-user program by insisting that you run as administrator. Bloody not-written-by-me sub-programs. You deserve great pain for what you have done.
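For the record, a minimal sketch of doing it properly with those two Win32 calls (error handling trimmed to the essentials):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        char dir[MAX_PATH + 1];
        char path[MAX_PATH];

        /* Ask the OS where temp files belong; on NT-based systems this
           is usually inside the user's profile, not the root of C:. */
        if (GetTempPathA(sizeof(dir), dir) == 0)
            return 1;

        /* Create (not just name) a uniquely named temp file there.
           The prefix is limited to three characters. */
        if (GetTempFileNameA(dir, "tmp", 0, path) == 0)
            return 1;

        printf("temp file created at: %s\n", path);
        DeleteFileA(path);  /* clean up after ourselves */
        return 0;
    }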

Hitting a nerve with Web 2.0

In a recent article, Andrew Keen rants about Web 2.0 being a bit too reflective of ourselves, and I find myself agreeing somewhat with the problem of reflection. Given that this is my blog, the whole thing is of course reflexive from the get-go, but bear with me. The web is a dangerously filtering medium, and Web 2.0 makes it even more so: communities of like-minded individuals keeping to their own interests. I’m tempted to make a roulette-style “take me somewhere new” website that is as random as possible, though any mechanism I would consider using (search-engine based) would of course not be completely random.
I’m getting annoyed with all the virtual community building. We need to build real communities.

Assembly language, functions and misoriented parameters

I had a minor complaint some time back about the lack of consistency amongst the various Windows APIs; they seemed to be written by people who chose one mechanism one day and another the next. The reality is that in a company as large as Microsoft, the different groups were each consistent within themselves; the problem was that they were not consistent with each other.
This brings me to the rant du jour. When one finds oneself reading or writing assembly language, the code is platform-consistent. For example, in native Intel x86 syntax the format is: operator destination,source. So mov eax, 0xffffffff means put the value 0xffffffff into the register eax. In Sun’s assembler the operands are the other way around, so mov $0xffffffff, %eax means the same thing, source first and destination last, which matches SPARC, where mov 0x110011, %l0 puts the value 0x110011 into the register l0. It’s quite easy to tell one from the other because you are aware of what platform you are on. Sun, for reasons best known to themselves, reversed the order of the operands, probably to make them more like the Solaris ‘native’ SPARC format and easier for their developers to follow.
All very fine and well; Sun are entitled to confuse native x86 developers all they want, and besides, because of the consistency, it is a really easy switch.
My mini bugbear is, of course, the bcopy function. It performs a block copy from a source to a destination. It’s part of libc, it’s simple, it’s easy to use; the only problem is that its arguments come in the reverse order from all the string routines (dest, source) and from memmove (dest, src): bcopy takes (source, dest). If you look up the definition of bcopy it generally asserts that it is implemented in terms of memmove, and if you look at most implementations you find that bcopy just swaps the in and out parameters and then invokes memmove (I believe the exception is SPARC, where it is the other way around). This is why bcopy is officially off my list of functions to use. It’s simply the one lone voice of dissent amongst all the consistency that libc affords.
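To illustrate the swap (a sketch of what most implementations do; my_bcopy is a made-up name so as not to collide with the real one):

    #include <string.h>

    /* memmove: (dest, src, n) -- consistent with strcpy and friends. */
    /* bcopy:   (src, dest, n) -- the odd one out.                    */
    void my_bcopy(const void *src, void *dst, size_t n)
    {
        memmove(dst, src, n);  /* swap the operands and delegate */
    }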
Who made bcopy then?

journalspace premium services

Dermot has a mini-rant about this on his blog. It turns out that in order to do almost anything useful on journalspace you need to pay for it. For example, exporting RSS 2.0 feeds is a premium service, which means that allowing RSS readers access to your site (last-updated pings, for example) costs you money. Being able to post from external applications (via XML-RPC) is another premium service, making cross-blog synchronization difficult.
I suppose that is the price of using a free service.

The angst – the multi-step ADO operation tale continues

I thought I had it, but I didn’t. I still can’t find the reason. Currently, after each post I re-read the table by closing and reopening it. The table is a small, local, temporary table for recording information before posting it to the real database, so I don’t really care that the exception is triggered.
Now I’m getting DbgBreakPoint exceptions. This was in a simple ShowModal call, so I have no idea why it’s there. Apparently it might have something to do with opening the table concerned.