So this afternoon I was at a friend’s house trying to get his Ubuntu netbook working with a broadband dongle. It simply refused to connect, and on failure displayed a notification dialog that basically read ‘did not work’. Once this dialog appears, the only way to reattempt a connection is to unplug and replug the dongle, as NetworkManager disables the connection when it fails.
There were logs – the fine /var/log/syslog file, which is almost completely useless for diagnosing the connection failure – it seemed to be telling me that the connection succeeded, but was then immediately disconnected. About as useful as a slap in the face with a wet haddock.
Armed with my iPhone, I first attempted to ensure that the connection details were correct. The management tool had added the settings, so I immediately did not trust them. Google turned up some options, but every failed connection meant another 30+ second delay of unplugging, replugging and re-entering the PIN (it ignored the PIN option in the NetworkManager configuration).
I fired up my laptop running Windows. It installed the management tool, I looked at the settings, and shouted at both the Internet and the Ubuntu configuration, both of which were telling complete lies about the settings. Here’s a hint for all you mobile broadband providers – make the settings easily findable using Google; there is a lot of outdated and completely invalid information out there, which makes this a real issue.
So, ultimately, a problem that I struggled with for quite a while under Ubuntu was solved in less than 30 seconds under Windows – yet another reason why I think NetworkManager is a thing of satanic horror that makes using computers under Linux a complete pain in the arse. This ‘solution’ is probably the single worst example of dumbing down configuration to the point where, when something goes wrong, it is practically impossible to diagnose or fix the problem.
In this case, I will have to say… progressive disclosure is a good potential solution to complicated user interfaces. The complete excision of all forms of configuration in favour of the magical tool of automagic only works if it works all the time, and as a friend is fond of saying: “If you design a system such that it cannot fail, then the first thing that happens is that it will.”
Securely loading libraries (Linux)
Now that I’ve said loading libraries in Linux is insecure, let’s take a cursory look at why that is…
Suppose I require a digitally signed .so. Being a decent sort of chap, I’ve decided to allow the signature to live in a foo.so.signature file, alongside the library foo.so. That means I don’t need to embed it in another section of the .so itself. Embedding generally complicates signature checking – you need to verify the signature of the binary while excluding the section containing the signature, and that section could itself be a mechanism for getting code into the system. This can be ameliorated by enforcing a size restriction on the signature section, but have you seen some of the code these days? It’s really fricking small.
The standard mechanism for loading foo.so is the dlopen() call. But by the time that call completes, any .init section of the library has already been executed. You are pwned.
Instead, you need to open() the file and open() the signature, then compare the signature against the contents of the file (you can use mmap(MAP_PRIVATE) to ensure that changes to the underlying file do not affect the contents of your memory). Then you reimplement dlopen(), allowing it to take either a file descriptor or a raw pointer to memory plus a size… it’s your call.
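To make the shape of that concrete, here is a minimal sketch. Note that verify_signature() and dlopen_mem() are hypothetical – the latter is precisely the API that doesn’t exist, which is the point:

/* Verify-before-load, as described above. verify_signature() and
 * dlopen_mem() are hypothetical primitives -- glibc offers neither. */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int verify_signature(const void *image, size_t len, const char *sigfile);
void *dlopen_mem(const void *image, size_t len);

void *load_signed_library(const char *lib, const char *sig)
{
    int fd = open(lib, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return NULL;
    }

    /* MAP_PRIVATE snapshots the file: what we verify is what we
     * load, not whatever the file on disk mutates into afterwards. */
    void *image = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (image == MAP_FAILED)
        return NULL;

    if (!verify_signature(image, st.st_size, sig)) {
        munmap(image, st.st_size);
        return NULL;
    }

    return dlopen_mem(image, st.st_size); /* the missing primitive */
}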
Feckers, not making Linux secure by default… oh, wait, this has existed since before Linux…
Security is an ever-evolving process. The APIs need to evolve with the threats.
Is that a DLL in your pocket…
Shock! Horror! Bug found where Windows applications will open DLLs that are in the current working directory of a process!
Except it’s not a bug. It’s by design, and it’s existed since NT.
Microsoft is being smacked in the head by a long-standing feature of Windows: the weakness of the original LoadLibrary call. If you don’t specify a path to the file to load, it uses the standard library search path – which includes the current working directory.
Dear God, you would think this was news. It is not news, nor has it been since the goddamned operating system shipped. Granted, the issue is severe, but the fact of the matter is that if an application is executed with a working directory that isn’t under your control, what can you do? If that directory contains libraries that happen to share the names of system libraries, you’re hosed.
Hey, guess what, asshole: if you link a Linux binary with a search path containing ‘.’, you get the same problem. It’s just as well that nobody links their binaries with -R ‘.’… eh?
The documentation is blatant in this regard. I’ve known it was a security issue since I first learned of the LoadLibrary call, as any even half-decent developer should have known when they started using the damned function.
The rule is simple. Resolve the full path to a library before you load it. Validate that it ‘looks right’ at that point. Then load it.
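A minimal sketch of that rule for a system DLL – load_system_dll is my own name, not an API, and on XP SP1 and later SetDllDirectory("") can also remove the current directory from the search path:

/* Resolve a system DLL to an absolute path before loading, instead
 * of letting LoadLibrary walk the search path (which includes the
 * current working directory). */
#include <windows.h>
#include <string.h>

HMODULE load_system_dll(const char *name)
{
    char path[MAX_PATH];
    UINT len = GetSystemDirectoryA(path, sizeof(path));

    if (len == 0 || len + strlen(name) + 2 > sizeof(path))
        return NULL;

    path[len] = '\\';
    strcpy(path + len + 1, name);

    /* An absolute path bypasses the search order entirely;
     * validate that the file 'looks right' before this call. */
    return LoadLibraryA(path);
}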
BTW, the .init section in .so files – so totally a security hole. You can’t dlopen() a file to determine whether it’s good without executing the .init code. Game over man, game f**king over!
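In case that sounds theoretical, here’s the whole demonstration – merely dlopen()ing this library runs the constructor before you get a chance to inspect anything:

/* evil.c -- build with: gcc -shared -fPIC evil.c -o evil.so
 * dlopen("./evil.so", RTLD_NOW) executes init() before it returns;
 * no symbol lookup or explicit call required. */
#include <stdio.h>

__attribute__((constructor))
static void init(void)
{
    /* anything can happen here -- it runs at load time */
    printf("pwned: constructor executed at load time\n");
}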
My .init code does the moral equivalent of setenv(“LD_LIBRARY_PATH”, “.” + getenv(“LD_LIBRARY_PATH”)) … now piss off and write secure code for once…
Controller or Mouse?
Following the purchase of a shiny new Xbox 360, I transferred all the licences and state from my old one to this one. This was accomplished using a 16GB USB stick (a feature added in the last Xbox update, which is great IMHO). Everything seemed to transfer without a hitch and I was up and using the new system almost immediately. I’ve been enjoying the blessed quiet of the new system, and the larger capacity hard drive lets me install an almost unlimited number of game images, which speeds up loading significantly.
Then I fired up Halo 3 – before you laugh, I still haven’t finished it. My single-player save game was in some crazy state where, upon loading, I was immediately booted back to the startup screen. I had to restart the game from scratch, which is not pleasant, to say the least.
Replaying the game has been a chore – mind you, a lot of things that were difficult the first time round are significantly simpler this time, simply by virtue of the fact that I’m replaying them.
The issue I have is the messy inaccuracy of the controller as a targeting device. I must simply not be used to it, as I find it cumbersome and generally far less accurate than the mouse and keyboard I use on the PC.
Does anyone have any advice on the topic? Should I just be trying harder, or is there an option where I can use a mouse and keyboard with the 360? Or will I have to simply suck it up and practice until my hands bleed?
Is it just me….
Or did Apple make a colossal blunder releasing the iPhone 4 before it had been completely tested? The growing list of reported issues leads me to this conclusion. Of course I have absolutely no evidence that this is the case, merely a hunch. There is a good chance that the lost iPhone accelerated the schedule for its release, preventing a lot of the ‘fit and finish’ work that usually comes with Apple products.
New toys..
Woohoo! This new toy is awesome 😉
– Posted using BlogPress from my iPad
Virtual PC connection to the host while on the road
This advice is for Microsoft Virtual PC. Software like VMware automatically allows the host to connect directly to the client using the virtual interfaces that it creates.
Most of the recommendations regarding connections to/from the Virtual PC client suggest sharing or bridging one of the host’s network connections.
All well and good when you’re on a network. I regularly use the system when I have no network available – i.e. completely disconnected. Most connection-sensing code for network adaptors prevents you from using an adaptor while it’s not active, plus I don’t like having to configure the connection manually and then reconfigure it when I’ve got a real network again.
The simple solution is to add a Microsoft Loopback Adaptor to the host machine, then create a second network interface on the Virtual PC that uses this interface. Manually configure the IP addresses to be on the same private network, making sure that you don’t accidentally configure it to use an IP address range that you may end up using for a VPN connection.
- Add the Network Adaptor: XP, Vista, Windows 7
- Configure the IP address manually. Use a Private Address Range. I chose an IP address of 10.125.1.1 with a netmask of 255.255.255.0 for the host, then chose 10.125.1.2 for the Virtual machine. XP, Vista, Windows 7 – Use the instructions for Vista.
- Shut down the virtual machine. Don’t hibernate, as you can’t add the second network interface to a hibernated machine.
- Edit the properties of the virtual machine (in the Virtual Machines folder): either right-click the virtual machine icon, or use the Settings option in the menu bar.
- Configure the network to have 2 interfaces, one of which is linked to the ‘Microsoft Loopback Adaptor’
- Boot up the virtual machine, and follow the instructions for manually configuring the IP address of this new network interface.
Direct connections to the IP address of the client virtual machine now work, and you can use it for anything you want.
Following the instructions here (confusing as they are), once you add a DWORD value called ‘*NdisDeviceType’ set to 1, the connection no longer shows up as an unidentified network, which enables sharing and other features in Vista and Windows 7.
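If you’d rather make that tweak programmatically, the value lives under the network class key in the registry. A sketch – the ‘0007’ instance subkey is an assumption; find the loopback adaptor’s actual subkey on your machine first:

/* Set *NdisDeviceType = 1 on the loopback adaptor's class key.
 * Link with -ladvapi32 under MinGW. The "0007" instance subkey is
 * machine-specific -- locate the loopback adaptor's subkey first. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *key =
        "SYSTEM\\CurrentControlSet\\Control\\Class\\"
        "{4D36E972-E325-11CE-BFC1-08002BE10318}\\0007"; /* assumption */
    HKEY h;
    DWORD value = 1;

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, key, 0, KEY_SET_VALUE, &h)
            != ERROR_SUCCESS) {
        fprintf(stderr, "could not open adapter key\n");
        return 1;
    }
    RegSetValueExA(h, "*NdisDeviceType", 0, REG_DWORD,
                   (const BYTE *)&value, sizeof(value));
    RegCloseKey(h);
    return 0;
}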
Programmatically changing environment variables in Windows
It’s not difficult to set environment variables in Windows. System-level variables are stored in HKLM\System\CurrentControlSet\Control\Session Manager\Environment; user-level variables are stored in HKCU\Environment. They are either REG_SZ or REG_EXPAND_SZ values. REG_EXPAND_SZ values can reference other environment variables to produce their ultimate value, while REG_SZ values are ‘final destination’ values.
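For instance, creating or updating a user-level variable is just a couple of registry calls. A minimal sketch – the variable name TOOLDIR and its value are made up for illustration:

/* Create/update a user-level variable. REG_EXPAND_SZ so the value
 * can reference other variables. Link with -ladvapi32 under MinGW. */
#include <windows.h>
#include <string.h>

int main(void)
{
    const char *value = "%USERPROFILE%\\tools"; /* made-up example */
    HKEY h;

    if (RegOpenKeyExA(HKEY_CURRENT_USER, "Environment", 0,
                      KEY_SET_VALUE, &h) != ERROR_SUCCESS)
        return 1;

    RegSetValueExA(h, "TOOLDIR", 0, REG_EXPAND_SZ,
                   (const BYTE *)value, (DWORD)strlen(value) + 1);
    RegCloseKey(h);
    return 0;
}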
The issue arises when you change a value programmatically and want it reflected in newly launched programs. You make your changes in the registry, but none of the newly launched applications notice the change. You need to inform the running applications that the settings have changed. To do this you broadcast a WM_SETTINGCHANGE message.
The obvious logic is to issue SendMessage(HWND_BROADCAST, WM_SETTINGCHANGE, 0, (LPARAM)"Environment"). As the meerkat in the advertisement says, ‘Seemples’. Unfortunately, I have a couple of applications with badly written message loops that don’t defer to DefWindowProc for messages they don’t handle, which causes this call to hang.
The more sensible logic is to use SendMessageTimeout, which takes two extra parameters: a set of flags and a timeout in milliseconds. The timeout is per window, so if there are 10 windows timing out and you use a 1000-millisecond (1 second) timeout, you will be stalled for 10 seconds. You have been warned. Most applications respond in under 100 milliseconds, and typically only a few are badly behaved.
This brings us to the code. It’s short, it’s C, and it doesn’t do anything fancy at all. Compile it using MinGW as gcc -mwindows settings.c -o settings.exe
#include <windows.h>

int APIENTRY WinMain(HINSTANCE hInstance,
                     HINSTANCE hPrevInstance,
                     LPSTR lpCmdLine,
                     int nCmdShow)
{
    DWORD_PTR output; /* SendMessageTimeout wants a DWORD_PTR, not a DWORD */

    /* Tell every top-level window that the environment has changed.
     * SMTO_BLOCK with a 100ms per-window timeout stops one badly
     * behaved message loop from hanging us forever. */
    SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0,
                       (LPARAM)"Environment", SMTO_BLOCK, 100, &output);
    return 0;
}
Set a variable in the registry, pop up a cmd window, issue a set command, and the change is not reflected. Close the window, run the settings program compiled above, launch another cmd window, and it will now reflect the change you made in the registry.
The message causes Explorer to re-read the environment, which is why newly launched programs see the changes – for the most part you launch your applications from Explorer (the Start menu, desktop icons, the Run dialog), so they inherit its environment.
When will the iPhone support multiple apps?
It looks like all the built-in iPhone apps support ‘instant resume’ – when you switch back to an application, you return to exactly where you were previously.
This is the Palm ethos – kill the application when switching away. It’s pretty efficient: memory use is reduced because you reclaim the memory of terminated applications.
On the original Palm platform, there was no memory protection – the processor didn’t support it. When the system migrated to the ARM processor, it emulated the original m68k processor for existing applications, adding features like improved speed and optional hardware-specific acceleration.
The development guides for the Palm platform exhorted you to ensure that when a user returned to your application, it was in exactly the same state as when they left it.
The problem seems to be that an awful lot of applications on the Apple platform do not implement this. As a result, when you return to an application you get kicked back to the start of your workflow, which is really annoying.
Until applications actually implement the Palm ethos, people will continue to cry for multitasking.
Honestly, I think there is a place for push/pull-based background tasks operating on a schedule – that way you could run them all in a single burst, consuming only a small amount of power for the entire set of jobs. This is something that is implemented in Windows 7 (see Extending battery life with energy efficient applications). Keeping overall CPU utilisation down keeps energy consumption down.
Scheduled tasks anyone?
Password recovery from open applications
Well, I had a minor hiccup today when I decided it was ‘password change day’. I duly went around changing the password on all my systems. Then I got back to work. Ten minutes later I turned to my other system and typed in the password.
… It didn’t work …
I smacked my head and said to myself, “D’oh – I need to use the new password.” But I couldn’t remember all of it. All I had were a few characters I could remember, and the fact that my mail program was checking the mail every few minutes and still working.
First I got the PID of Thunderbird…
~% ps -fe | grep thunder
1000 17509     1  0 13:19 ?      00:00:00 /bin/sh /usr/bin/thunderbird
1000 17521 17509  0 13:19 ?      00:00:00 /bin/sh /usr/lib/thunderbird/run-mozilla.sh /usr/lib/thunderbird/thunderbird-bin
1000 17526 17521  0 13:19 ?      00:00:24 /usr/lib/thunderbird/thunderbird-bin
1000 19101 19006  0 14:09 pts/10 00:00:00 grep thunder
Then I got the address range of the heap from the process’s maps:
~% grep 'heap' /proc/17526/maps
08d02000-0a9ad000 rw-p 08d02000 00:00 0 [heap]
I compiled up memory_dumper, and ran it against the process and heap addresses listed.
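memory_dumper is a trivial bit of code; a minimal sketch of the approach (ptrace-attach, then read /proc/<pid>/mem) looks something like this, though the actual tool may differ:

/* Attach to a process and dump [start, end) of its address space
 * to a file. Error handling kept minimal. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 5) {
        fprintf(stderr, "usage: %s <start> <end> <pid> <outfile>\n", argv[0]);
        return 1;
    }

    unsigned long start = strtoul(argv[1], NULL, 16);
    unsigned long end   = strtoul(argv[2], NULL, 16);
    pid_t pid = (pid_t)atoi(argv[3]);

    /* Stop the target so its memory is stable while we read it. */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) < 0) {
        perror("ptrace");
        return 1;
    }
    waitpid(pid, NULL, 0);

    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/mem", (int)pid);
    int mem = open(path, O_RDONLY);
    int out = open(argv[4], O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (mem < 0 || out < 0) {
        perror("open");
        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 1;
    }

    char buf[4096];
    for (unsigned long addr = start; addr < end; ) {
        size_t want = end - addr < sizeof(buf) ? end - addr : sizeof(buf);
        ssize_t n = pread(mem, buf, want, (off_t)addr);
        if (n <= 0)
            break;
        write(out, buf, (size_t)n);
        addr += (unsigned long)n;
    }

    close(mem);
    close(out);
    ptrace(PTRACE_DETACH, pid, NULL, NULL); /* let the process run again */
    return 0;
}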
% ./memory_dumper 08d02000 0a46a000 17526 heap
Then I ran strings on the resulting file, looking for the fragment of the password I remembered:
% strings heap | grep t%7
cheat%7Ladel
cheat%7Ladel
cheat%7Ladel
cheat%7Ladel
Four copies of the password in memory in the program. That is just in-freaking-sane. It should be present in the program only once, and should probably be concealed using some form of obfuscation. Mind you, the exercise has burned the new password into my mind, so I should be grateful.
And just in case you feel like trying the password listed, don’t. It’s not the real password 😉