sshd 9.8 PerSourcePenalties

A new feature dropped in sshd which applies penalties for certain behaviours against the ssh server. The documentation is… questionable, as it isn't really explicit about how it works, and the manual page is a bit unclear.

To turn off the feature entirely, you set it to the value no. This is not documented in the manual page at the moment, but it is the usual way of disabling a feature, so … I should have figured this out already.

The manual mentions a keyword off for each of the options, which does not work (the options only accept durations). To accomplish off for any individual option, you set it to 0s, or at least that is what I surmise.
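For reference, a rough sketch of the two approaches in sshd_config (assuming the keyword:duration sub-option form – check sshd_config(5) on your system):

# Turn the penalty feature off entirely
PerSourcePenalties no

# Or neutralise individual penalties by setting them to 0s
PerSourcePenalties authfail:0s crash:0s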

How it works:

Every time you hit one of the events, your IP is penalised by the number of seconds configured for the option in question. For example, there is an option authfail: every time you fail to authenticate, that number of seconds is added to your IP's penalty.

When that accumulated penalty time exceeds min, the penalty is applied to your IP address.

Penalties applied to IP addresses run down by one second every second, so if you had a penalty of 5 seconds applied at second 0, then at second 1 your penalty would be 4 seconds, at second 2 it would be 3 seconds, and so on.

The default value for authfail is 5 seconds, and the default value for min is 15 seconds. This means that if you have 3 authentication failures within the same second, you will be unable to connect to the remote system for 15 seconds.

Some of the penalties are very high – e.g. crash, which has a 90s penalty. This exceeds the default min of 15 seconds immediately, and will prevent you from connecting for a minimum of 90 seconds after that, which in general is a good thing for well-behaved systems (that don't crash).

If you're performing a keyboard-interactive login on a system and failing to log in, there is typically a 3 second wait before you can type the password in a second time, which would make it incredibly difficult to actually trigger this penalty interactively, as it would require far more login attempts at the prompt to fail the login.

I am, however, using these systems with some automation for integration testing, and it's proving to be a real pain, as I'm encountering the effects of legacy workaround behaviour on top of this new functionality. So for now, it's off, which is a shame!

Voice Transcription

I've been noodling around with transcription software, and encountered whisper, which does a great job of transcribing. It's the usual Python-based solution, which is pretty good at knocking out a transcription, but it's a tad on the slow side.

Then, while listening to the Changelog podcast, I heard them talking to Georgi Gerganov, who's created whisper.cpp, which is an implementation in C++. I am bowled over with a complete lack of surprise that it's faster than the original Python implementation.

… but… it's faster by an order of magnitude. Where the original Python implementation takes hours, whisper.cpp takes minutes.

I think I'm going to stick to C++ and Rust when I need the performance, thanks!

Dropping files on apps (Mac)

When you write an application, sometimes you want the application to open with one or more files. There's the ubiquitous double-click to open a file with its default application, and there's right-clicking and using Open With to pick a specific application.

On the Mac, to support dragging-and-dropping onto the application icon, you need to declare that the app supports having that file type dropped upon it. So you go to the application's file types and add support for your file type. This will add something like the following to your application's Info.plist:

<key>CFBundleDocumentTypes</key>
<array>
	<dict>
		<key>CFBundleTypeName</key>
		<string>All Items</string>
		<key>CFBundleTypeRole</key>
		<string>Viewer</string>
		<key>LSHandlerRank</key>
		<string>Alternate</string>
		<key>LSItemContentTypes</key>
		<array>
			<string>public.content</string>
		</array>
		<key>NSDocumentClass</key>
		<string>NSDocument</string>
	</dict>
</array>

With this, when you drag-and-drop a file onto your application, the application will receive a message saying that the file has been dropped on it. We limit this to files by specifying the content type as public.content. If you wanted to accept any file-system based item, you would specify public.item. There is an inventory of the publicly declared types on the Apple website.

To handle this in Objective-C, you would add an application:openFile: method to your AppDelegate, which receives the name of the file and allows you to process it – for example:

- (BOOL)application:(NSApplication *)theApplication openFile:(NSString *)filename
{
    NSLog(@"%@", filename);
    return YES;
}

In this case, all the code does is display the name of the file in the log, and state that the event has been handled.

This code can be seen on GitHub – this is tag v1.0 of the program. I’ll be adding more functionality in later posts.

On names and collation

Computers are wonderfully efficient at simple and mundane tasks, such as sorting lists of things. However, the most basic sorting is a byte-by-byte comparison of the characters. If we go back to the old days of ASCII, this means you end up with case-sensitive sorting, where capitals come ahead of lower case.

You then go to Linux, and you start doing ls, and it shows the list of filenames in a case-insensitive manner and you go … that's nice. You then go to OSX and do an ls, and it shows the list of files in a case-sensitive manner and you go – dude, not nice. It turns out that OSX's libc uses la_LN.US-ASCII for collation on all en_ locales – i.e. plain old ASCII.

Finder does not use this sorting. Its sorting is done using the routine UCCompareTextDefault, with options that allow you to specify case insensitivity, treat numbers as numbers, and a few others. It's pretty fancy.

However I’m talking about names. There are a wide variety of rules related to names, and I’ve had to do some odd stuff in my past.

The first thing I was asked to do was to omit the 'The' from collation – i.e. titles starting with The are instead sorted by the second word – so, for example, The Last Supper would be listed under L instead of T. Very much how you'd find it in the library, under Last Supper, The.
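As a rough illustration (this is not the original code, just a sketch in C#), you can do this by building a sort key that moves the leading article to the end:

// Illustrative helper, not from the original code: turns "The Last Supper"
// into "Last Supper, The" for collation purposes.
static string TitleSortKey(string title)
{
    if (title.StartsWith("The ", StringComparison.OrdinalIgnoreCase))
        return title.Substring(4) + ", The";
    return title;
}

// usage: titles.OrderBy(TitleSortKey, StringComparer.CurrentCultureIgnoreCase)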

Then there are Irish names. Please sort in library order was the request.

I had no idea how much of a rathole this was.

First – accented vowels, or fadas as we call them, come after the unaccented letter of the same character, so a, á, i, í, etc.

Then stem the surname, for the most part, so Ó Loingsigh is sorted under L. There are a lot of surname prefixes in Irish – De, Fitz, Ó, Uí, Ní, Nic, Mac, Mc, Mag, Mhig, Nig, Mac Giolla, Ua – oh my! In general, you're not supposed to collate under the prefix; except when it's a Mac or Mc – those are generally sorted as if Mc and Mac were the same, so McCarthy, MacLysaght…
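Just to show the flavour of it, here's a toy C# sketch of the Mc/Mac rule alone (the Ó/Ní/Uí prefix-stemming is a separate, messier step, and this is nothing like the full set of rules):

// Illustrative helper: fold Mc into Mac when building the key,
// so McCarthy and MacLysaght end up interleaved.
static string SurnameSortKey(string surname)
{
    if (surname.StartsWith("Mc", StringComparison.OrdinalIgnoreCase))
        return "Mac" + surname.Substring(2);
    return surname;
}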

I've been trying to track down the article that I used as a basis for this, but it appears to be based on an article by O'Deirg – 'Her infinite variety': on the ordering of Irish surnames with prefixes, especially those of women (An Leabharlann 10 (1), 14–16). The best source reference to this I've been able to track down is an article by Róisín Nic Cóil, titled Irish prefixes and the alphabetization of personal names, which contains the common practices. They're a lot lazier than the ordering I was asked to use, which is more in line with O'Deirg's work.

Which brings me back to the original reason for writing this article – someone commented that all the books in their collection that started with the word The were sorted into the pool of books starting with T, rather than by their second word. Rather than 'um, actually'-ing the conversation, I decided to share my experience in the area with others.

Sorting/collation, like date and time handling, is complicated if you want to do it 'right', and 'right' depends on where you're asking from.

Why does a simple piece of C generate a SIGSEGV on Linux as opposed to a SIGBUS on OSX?

One of the lads in the office has the following piece of code:

main;

When compiled and run on Linux, it generates a SIGSEGV, but when run on OSX, it generates a SIGBUS.

Why?

The definition of SIGBUS is `Access to an undefined portion of a memory object.`, while the definition of `SIGSEGV` is `Invalid memory reference.`.

The issue is that OSX translates a native Mach exception into a POSIX signal. On OSX, the route is through a translation layer, the code of which reads:


case EXC_BAD_ACCESS:
    if (code == KERN_INVALID_ADDRESS)
        *ux_signal = SIGSEGV;
    else
        *ux_signal = SIGBUS;
    break;

This indicates that if you crash accessing an address that is mapped into the process (i.e. not an invalid address), you'll get a SIGBUS; however, if you crash accessing an address that isn't mapped into the process, you'll get a SIGSEGV. In other words, all addresses that are unmapped generate SIGSEGV, while all addresses that are mapped, but aren't usable in the way you ask, generate a SIGBUS.

There is some other code:

if (machine_exception(exception, code, subcode, ux_signal, ux_code))
    return;

which deals with general protection faults e.g. pages that are simply not mapped.

When you compile the .c file, because you declared the variable main, it gets default storage/allocation, which means it goes into the DATA segment of the application – and nothing in the DATA segment allows the execution of code.

When you try to run the program on OSX, it tries to execute the 'code' at main, which is data. The exception it gets at this point is a bad access: the memory is mapped, but it isn't mapped as executable, so you get a SIGBUS.

When you try to run this program on Linux, you get a SEGV. The reason is simply that you're trying to access memory that isn't mapped for that purpose.

When you change the code to invoke a simple null-pointer call:


int main(int argc, char **argv) {
    void (*crash)(void) = 0;
    crash();
}

You get a SEGV. This is because there is no page mapped at address 0 that is visible to the user (you can use vmmap on a process to see what's mapped and not mapped in this case).

.NET Date and Time Handling

So I started out reading the documentation for .NET date and time handling and I felt I needed a drink. If this is what .NET developers have to deal with, then no wonder so many of them have issues.

It starts with the DateTime structure. Its purpose is to represent a date and time; however, this structure holds both the timestamp and a flag indicating whether it's a UTC time (i.e. no TZ information whatsoever) or a local time.

Obtaining a value for 'now' involves using the static properties DateTime.Now or DateTime.UtcNow. Once you've instantiated a DateTime value, you can't actually change it; for example, DateTime.Now.ToUniversalTime() won't change the structure, it returns a new instance of DateTime, converted based on the system's current timezone settings. This means you can actually get an incorrect value when the time straddles a daylight savings change.
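A quick sketch of that immutability (nothing fancy, just illustrating the point):

DateTime local = DateTime.Now;
DateTime utc = local.ToUniversalTime(); // a new instance; 'local' is unchanged
System.Diagnostics.Debug.Assert(local.Kind == DateTimeKind.Local);
System.Diagnostics.Debug.Assert(utc.Kind == DateTimeKind.Utc);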

There is an entire Microsoft document on best practices for dealing with DateTime values. It's a bit tricky. Firstly, try to deal only with dates and times in UTC where possible. There is a major gotcha in the XmlSerializer code in the .NET 1.0/1.1 framework where it always treats the DateTime as local time, even if you've passed in a UTC time. This seems to have been fixed in later revisions of .NET, so hopefully you won't encounter it.

Obtaining a DateTime object from a String is a matter of using DateTime.Parse(); you just have to ensure that the parse format matches the input string value. The default Parse method uses the thread's current Culture, which means you have to pay attention to things like the pesky mm/dd/yyyy vs dd/mm/yyyy conventions when you're parsing. If you want to specify the format you're trying to parse manually, you need to use the Parse(String stringToParse, IFormatProvider provider) overload, which allows you to use another Culture's parsing format, or define your own.

If you're planning on performing operations on a DateTime, such as adding a few hours or days to the original value, bear in mind that to be safe you should perform the operation in UTC (which doesn't have any DST adjustments), and then convert it back to local time. This round trip is intended to avoid errors around 'spring forward' and 'fall back' times. Even if you don't think you're going to encounter those times, it's better safe than sorry.
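Something along these lines (a sketch, not a recipe):

DateTime local = DateTime.Now;
DateTime later = local.ToUniversalTime().AddHours(6).ToLocalTime(); // arithmetic done in UTC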

Because a DateTime only stores a time as either 'local' or UTC, when you choose to persist this information you are best off storing it in UTC, and keeping an adjacent element for the timezone should you actually need it. The simplest way to store the value is as the result of the Ticks property. This value is a 64-bit integer, in ten-millionths of a second since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the Gregorian calendar.
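For example, a round trip through Ticks might look like this (assuming you stored a UTC value):

long ticks = DateTime.UtcNow.Ticks;                        // persist this
DateTime restored = new DateTime(ticks, DateTimeKind.Utc); // restore, marking it as UTC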

The DateTime structure has the ability to extract the Year, Month, Day, Hour, Minute, Second and Milliseconds from the value. It allows the extraction of the ‘Ticks’ value, but this is an absolute value, not an extraction. All these elements are in terms of the Gregorian calendar. It doesn’t matter if your current culture does not use the Gregorian calendar, it always returns the data in terms of the Gregorian calendar. I don’t know the reason for this choice, considering that it understands the current timezone; but it’s most likely due to the culture being a property of the current thread rather than a more ‘systemmy’ property.

If you want to extract values based on a different calendar, such as the Persian calendar, you need to extract them via an instance of that calendar. The Persian calendar is easily instantiated from System.Globalization.PersianCalendar, but if you need the calendar for a specific region, you should instantiate a CultureInfo using new System.Globalization.CultureInfo(String cultureCode, bool useUserOverride), and then extract its Calendar property. In this case I can do:

var date = DateTime.UtcNow;
CultureInfo cinfo = new System.Globalization.CultureInfo("EN-ie", false); // get the English-speaking, Ireland culture
int year = cinfo.Calendar.GetYear(date);
// This year is based on the calendar from Ireland (Gregorian), but you get the idea as to how to use it for other calendars.

The Calendar class deals with the breaking of a date & time into pieces. Those pieces are defined in terms of the calendar – this can be confusing if you're only used to a single calendar, as many people are. I live in a region that uses the Gregorian calendar exclusively, so I deal with dates and times in those terms. The Gregorian calendar was formalised by Pope Gregory in 1582 (by that calendar's reckoning), and was itself a correction to the Julian calendar, which had drifted because it insisted on a leap year every 4 years, with no exceptions. You can, in fact, get dates and times in terms of the Julian calendar. Bear in mind that the Gregorian calendar has two exceptions to the every-4-years rule: no leap year on years divisible by 100, unless the year is also divisible by 400 – so, for example, 1900 was not a leap year, 2000 was a leap year, and 2100 will not be a leap year.
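The static DateTime.IsLeapYear helper follows exactly those rules, which makes for a cheap sanity check:

System.Diagnostics.Debug.Assert(!DateTime.IsLeapYear(1900)); // divisible by 100, not by 400
System.Diagnostics.Debug.Assert(DateTime.IsLeapYear(2000));  // divisible by 400
System.Diagnostics.Debug.Assert(!DateTime.IsLeapYear(2100));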

So with the .NET environment we have the DateTime, which represents a point in time, and we have the Calendar, which is a representation of the DateTime in terms of a particular calendar, i.e. its representation in terms of the year, month, day, day of week. There can be more delineations of the date & time – it's up to the calendar to define them.

There are many ways to make an instance of a DateTime, possibly the simplest of which is to use the Now static property, but you can also construct them explicitly using a variety of constructors. The simplest is DateTime(ticks), which makes one in terms of the number of ticks since zero-time. But you can also construct them from just the year, month and day; combined with an hour, minute and second; or even further combined with a millisecond value. These can be specified as 'local' or UTC, and can also be specified in terms of a calendar. Once you've constructed the DateTime instance, it is no longer represented in terms of that calendar, but in terms of the Gregorian calendar; e.g.

var cal = new System.Globalization.JapaneseCalendar();
DateTime aDateTime = new DateTime(2014, 1, 1, cal);
System.Diagnostics.Debug.Assert(aDateTime.Year == 2014);

This assertion will trigger, because it’s no longer in terms of the Japanese Calendar.

Parsing of DateTime values can be done using DateTime.Parse(), which takes a string and turns it into a DateTime. It uses the current Culture, unless the string can be parsed as ISO 8601 – so when I'm here in little old Ireland, if I put in a string of 1/2/2015, it tells me February 1, 2015, but if I'm in the US, it represents January 2, 2015. So you have to be careful when parsing DateTime values. This is where the IFormatProvider parameter comes in – this interface is generally not created manually by the user, but is instead extracted from an instance of a culture, e.g.

IFormatProvider if_us = new System.Globalization.CultureInfo("EN-us", false);
IFormatProvider if_ie = new System.Globalization.CultureInfo("EN-ie", false);
DateTime ieDate = DateTime.Parse("1/2/2015", if_ie);
DateTime usDate = DateTime.Parse("1/2/2015", if_us);
System.Diagnostics.Debug.Assert(ieDate.Year == usDate.Year);
System.Diagnostics.Debug.Assert(ieDate.Month != usDate.Month);
System.Diagnostics.Debug.Assert(ieDate.Day != usDate.Day);
System.Diagnostics.Debug.Assert(ieDate.Month == usDate.Day);
System.Diagnostics.Debug.Assert(ieDate.Day == usDate.Month);

None of these assertions trip, because we know/understand how the string is parsed by these cultures.

The next thing you generally need to do is deal with displaying times in different time zones. You already have a DateTime instance, which is either local or UTC, and you want to display it in a particular zone. What you need is a TimeZoneInfo, and then to convert back to a DateTime that displays in that zone:

TimeZoneInfo irelandtz = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
TimeZoneInfo newyorktz = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
var now = DateTime.UtcNow;
DateTime irelandnow = TimeZoneInfo.ConvertTimeFromUtc(now, irelandtz);
DateTime newyorknow = TimeZoneInfo.ConvertTimeFromUtc(now, newyorktz);
Console.WriteLine(" {0} {1}", irelandnow, irelandtz.IsDaylightSavingTime(irelandnow) ? irelandtz.DaylightName : irelandtz.StandardName);
Console.WriteLine(" {0} {1}", newyorknow, newyorktz.IsDaylightSavingTime(newyorknow) ? newyorktz.DaylightName : newyorktz.StandardName);

Now this code is awful: you are not actually picking the timezone by location – it's by a name of a timezone, rather than the location that specifies the timezone, e.g. Europe/Dublin and America/New_York. There is a much better alternative for dealing with DateTime values, and that's to use NodaTime, which gives a slightly better experience:

Instant now = SystemClock.Instance.Now;
DateTimeZone dublin = DateTimeZoneProviders.Tzdb["Europe/Dublin"];
DateTimeZone newyork = DateTimeZoneProviders.Tzdb["America/New_York"];
Console.WriteLine(new ZonedDateTime(now, dublin).ToString());
//2015-02-07T20:48:20 Europe/Dublin (+00)
Console.WriteLine(new ZonedDateTime(now, newyork).ToString());
//2015-02-07T15:48:20 America/New_York (-05)

Dealing with time on computers is so much fun

When a computer boots up, the operating system typically takes its current idea of what time it is from the hardware clock. On shutting down, it typically pushes its own idea of what time it is back to the BIOS so that the hardware clock is up-to-date relative to what the operating system thinks the time is.

While running, the operating system tries to keep track of what time it is by having a counter that it increments based on the ticking of some interval timer (on newer operating systems this is not strictly how it works, but the idea is similar).

The problem is, of course, that the thing that generates the timer isn't completely accurate. As a result, over time, the operating system clock drifts in relation to what the real time is. This drift depends on the quality of the hardware going into the computer. Some parts are more expensive and thus don't drift as much, while some are cheap and drift quite a lot (some are so bad as to cause a drift of many minutes over the course of a day).

This is generally undesirable; I want to be reminded that I have a meeting before it actually starts, so I can load up on a cup of coffee beforehand. How do we fix this?

The answer is to ask another computer and hope that it has a better idea of what time it is than this current one.

The most common protocol used for getting time over a network is the Network Time Protocol (NTP). In its simplest mode, you ask the remote system what time it is, and then shift your time to match by adjusting your local clock gradually (to prevent confusing jumps in time where it goes from 9am to 8.15 instantly, confusing you no end). The fancier methods involve asking multiple systems for the time, constructing a model of how crap your clock is relative to their clocks, and using that to constantly adjust your time to try to keep it in sync. You periodically ask the remote systems for the time and use the answers to update your model. With this you can typically keep your clock within ~100ms of the time that's considered 'now'.

This is fine; we now have a relatively accurate clock which we can ask what time it is. The only problem is that depending on where you are in the world, your idea of what time it is has been altered by the wonderful concept of the timezone.

A properly written operating system should keep time without caring about what timezone you belong to. When you ask for a time it gives you a number which is unambiguous. This is vitally important when it comes to trying to perform operations on the time. If it’s unambiguous, then you can use it for mathematical operations such as determining the difference between two times. If it’s ambiguous then you’ll have trouble trying to do anything with it.

When you introduce the possibility of ambiguity into the representation of time, problems arise. For example, if I ask you how many minutes there are between 01:30 and 01:35, the obvious answer of 5 minutes may be incorrect, because 01:30 is BST and 01:35 is GMT, which means the difference is 65 minutes. Plus, if I just showed you those times the day after a daylight savings transition, without telling you which timezone each was in, you wouldn't be able to tell.
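To make that concrete with a rough C# sketch (picking the October 2014 UK clock change as the example date): once the offsets are attached, the arithmetic is unambiguous.

// 26 October 2014: the UK clocks went back at 02:00 BST.
var a = new DateTimeOffset(2014, 10, 26, 1, 30, 0, TimeSpan.FromHours(1)); // 01:30 BST (UTC+1)
var b = new DateTimeOffset(2014, 10, 26, 1, 35, 0, TimeSpan.Zero);         // 01:35 GMT (UTC+0)
System.Diagnostics.Debug.Assert((b - a).TotalMinutes == 65);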

But what does this actually mean? For most developers, you should follow two cardinal rules:

  • Avoid changing the date from its unambiguous form until the last possible moment.
  • Always store the date in its unambiguous form.

If you have to store a date with a timezone, store the timezone as separate information alongside the actual unambiguous date; that way you can easily perform operations on the unambiguous time, and still display the original time relative to the timezone it was supplied in.
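In C# terms (a minimal sketch of the idea, not prescriptive; the zone id reuses the Windows-style name from the earlier example):

DateTime utcValue = DateTime.UtcNow;               // the unambiguous part, persisted as-is
string zoneId = "GMT Standard Time";               // the timezone, stored alongside it

TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById(zoneId);
DateTime display = TimeZoneInfo.ConvertTimeFromUtc(utcValue, tz); // only converted for display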

This doesn't even get into the complexities of calendars. While the West generally uses the Gregorian calendar, many other places in the world don't use that calendar (and there are a lot of other calendars).

Standard C (POSIX) is poorly supplied with an API for dealing with dates and times, especially in relation to timezones. You can generally set the timezone of the application, but it’s a thread-unsafe abomination that really should DIAF.

Most of the newer languages have significantly better date/time support. I’ll go through them in a series of posts, starting with C#.

A simple C dtrace aggregation consumer

This is a little bit of gluing from other sources, but it's a simple piece of code to consume the contents of an aggregation from dtrace using the `libdtrace` library. I've tested it on Mac OSX 10.9 and it works; the usual caveats apply: it won't work all the time, and you need privileged access to get it to run:

#include <assert.h>
#include <dtrace.h>
#include <stdio.h>

// This is the program.
static const char *progstr =
"syscall::recvfrom:return \
{ @agg[execname,pid] = sum(arg0); }";

// This is the aggregation walk function
static int
aggfun(const dtrace_aggdata_t *data, void *gndn __attribute__((unused)))
{
    dtrace_aggdesc_t *aggdesc = data->dtada_desc;
    dtrace_recdesc_t *name_rec, *pid_rec, *sum_rec;
    char *name;
    int32_t *ppid;
    int64_t *psum;
    static const dtrace_aggdata_t *count;

    if (count == NULL) {
        count = data;
        return (DTRACE_AGGWALK_NEXT);
    }

    // Our aggregation has 4 records (id, execname, pid, sum)
    assert(aggdesc->dtagd_nrecs == 4);

    name_rec = &aggdesc->dtagd_rec[1];
    pid_rec = &aggdesc->dtagd_rec[2];
    sum_rec = &aggdesc->dtagd_rec[3];

    name = data->dtada_data + name_rec->dtrd_offset;
    assert(pid_rec->dtrd_size == sizeof(pid_t));
    ppid = (int32_t *)(data->dtada_data + pid_rec->dtrd_offset);
    assert(sum_rec->dtrd_size == sizeof(int64_t));
    psum = (int64_t *)(data->dtada_data + sum_rec->dtrd_offset);

    printf("%1$-30s %2$-20d %3$-20ld\n", name, *ppid, (long)*psum);
    return (DTRACE_AGGWALK_NEXT);
}

// set the option, otherwise print an error & return -1
int
set_opt(dtrace_hdl_t *dtp, const char *opt, const char *value)
{
    if (-1 == dtrace_setopt(dtp, opt, value)) {
        fprintf(stderr, "Failed to set '%1$s' to '%2$s'.\n", opt, value);
        return (-1);
    }
    return (0);
}

// set all the options, otherwise return an error
int
set_opts(dtrace_hdl_t *dtp)
{
    return (set_opt(dtp, "strsize", "4096")
        | set_opt(dtp, "bufsize", "1m")
        | set_opt(dtp, "aggsize", "1m")
        | set_opt(dtp, "aggrate", "2msec")
        | set_opt(dtp, "arch", "x86_64"));
}

int
main(int argc __attribute__((unused)), char **argv __attribute__((unused)))
{
    int err;
    dtrace_proginfo_t info;
    dtrace_hdl_t *dtp;
    dtrace_prog_t *prog;

    dtp = dtrace_open(DTRACE_VERSION, DTRACE_O_LP64, &err);

    if (dtp == 0) {
        perror("dtrace_open");
        return (1);
    }
    if (-1 == set_opts(dtp))
        return (1);

    prog = dtrace_program_strcompile(dtp, progstr, DTRACE_PROBESPEC_NAME, 0, 0, NULL);
    if (prog == 0) {
        printf("dtrace_program_compile failed\n");
        return (1);
    }
    if (-1 == dtrace_program_exec(dtp, prog, &info)) {
        printf("Failed to dtrace exec.\n");
        return (1);
    }
    if (-1 == dtrace_go(dtp)) {
        fprintf(stderr, "Failed to dtrace_go.\n");
        return (1);
    }

    while(1) {
        int status = dtrace_status(dtp);
        if (status == DTRACE_STATUS_OKAY) {
            dtrace_aggregate_snap(dtp);
            dtrace_aggregate_walk(dtp, aggfun, 0);
        } else if (status != DTRACE_STATUS_NONE) {
            break;
        }
        dtrace_sleep(dtp);
    }

    dtrace_stop(dtp);
    dtrace_close(dtp);
    return (0);
}

Code to change assert behaviour when running under a debugger

On Linux, you can provide your own __assert_fail routine, which will be triggered when an assert fails in code compiled with asserts enabled. Using some debugger detection, you can have the code behave differently in that situation. The code to accomplish detection at load time looks like:

#include <assert.h>
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/ptrace.h>

static int ptrace_failed;

static int
detect_ptrace(void)
{
    int status, waitrc;

    pid_t child = fork();
    if (child == -1) return -1;
    if (child == 0) {
        if (ptrace(PT_ATTACH, getppid(), 0, 0))
            exit(1);
        do {
            waitrc = waitpid(getppid(), &status, 0);
        } while (waitrc == -1 && errno == EINTR);
        ptrace(PT_DETACH, getppid(), (caddr_t)1, SIGCONT);
        exit(0);
    }
    do {
        waitrc = waitpid(child, &status, 0);
    } while (waitrc == -1 && errno == EINTR);
    return WEXITSTATUS(status);
}

__attribute__((constructor))
static void
detect_debugger(void)
{
    ptrace_failed = detect_ptrace();
}

void __assert_fail(const char * assertion, const char * file, unsigned int line, const char * function) {
    fprintf(stderr, "Assert: %s failed at %s:%d in function %s\n", assertion, file, line, function);
    if (ptrace_failed)
        raise(SIGTRAP);
    else
        abort();
}

This code, because it uses an __attribute__((constructor)), will detect a program being started under a debugger.

You could instead move the detection into the __assert_fail function, which provides full run-time detection of the debugger and the behaviour alteration. On the assumption that not a lot of asserts get triggered, you have an effective behaviour alteration of your code when run under a debugger. In this case the __assert_fail looks like:

void __assert_fail(const char * assertion, const char * file, unsigned int line, const char * function) {
    fprintf(stderr, "Assert: %s failed at %s:%d in function %s\n", assertion, file, line, function);
    if (detect_ptrace())
        raise(SIGTRAP);
    else
        abort();
}

Launching a JVM from C++ on Mac OS X

I’d been playing around with starting up a JVM from C++ code on Windows. I was futzing around between MSVC and cygwin/mingw and it all worked well. Then I decided to do the same under Mac OS X.

Firstly, if you’re using the OS X VM, when you try to compile some simple code under clang, you’re going to see this awesome warning:

warning: 'JNI_CreateJavaVM' is deprecated [-Wdeprecated-declarations]

Apple really, really don't want you using the JavaVM framework; in general, if you want to use their framework, you have to ignore the warnings and soldier on.

Secondly, if you want a GUI, you can’t instantiate the VM on the main thread. If you do that then your process will deadlock, and you won’t be able to see any of the GUI.

With that in mind, we hive off the creation of the VM and the main class to a separate thread. With some crufty code, we make a struct of the VM initialization arguments (this code is not clean: it contains strdups and news, and there's no cleanup or freeing):

struct start_args {
    JavaVMInitArgs vm_args;
    const char *launch_class;

    start_args(const char **args, const char *classname) {
        vm_args.version = JNI_VERSION_1_6;
        vm_args.ignoreUnrecognized = JNI_TRUE;

        int arg_count = 0;
        const char **atarg = args;
        while (*atarg++) arg_count++;
        JavaVMOption *options = new JavaVMOption[arg_count];
        vm_args.nOptions = arg_count;
        vm_args.options = options;

        while (*args) {
            options->optionString = strdup(*args);
            options++;
            args++;
        }
        launch_class = strdup(classname);
    }
};

Next we have the thread function that launches the VM. This is a standard posix thread routine, so there’s no magic there.

void *
start_java(void *start_args)
{
    struct start_args *args = (struct start_args *)start_args;
    int res;
    JavaVM *jvm;
    JNIEnv *env;

    res = JNI_CreateJavaVM(&jvm, (void**)&env, &args->vm_args);
    if (res < 0) exit(1);
    /* load the launch class */
    jclass main_class;
    jmethodID main_method_id;
    main_class = env->FindClass(args->launch_class);
    if (main_class == NULL) {
        jvm->DestroyJavaVM();
        exit(1);
    }
    /* get main method */
    main_method_id = env->GetStaticMethodID(main_class, "main", "([Ljava/lang/String;)V");
    if (main_method_id == NULL) {
        jvm->DestroyJavaVM();
        exit(1);

    }
    /* make the initial argument */
    jobject empty_args = env->NewObjectArray(0, env->FindClass("java/lang/String"), NULL);
    /* call the method */
    env->CallStaticVoidMethod(main_class, main_method_id, empty_args);
    /* Don't forget to destroy the JVM at the end */
    jvm->DestroyJavaVM();
    return (0);
}

What this code does is create the Java VM (the short piece at the start). Then it finds and invokes the public static void main(String args[]) of the class that's passed in. At the end, it destroys the Java VM. You're supposed to do that, for memory allocation's sake.

Next we have the main routine, which creates the thread and invokes the run loop:

int main(int argc, char **argv)
{
    const char *vm_arglist[] = { "-Djava.class.path=.", 0 };
    struct start_args args(vm_arglist, "launch");
    pthread_t thr;
    pthread_create(&thr, NULL, start_java, &args);
    CFRunLoopRun();
}

The trick is the CFRunLoopRun() at the end. What this does is trigger the CoreFoundation main application run-loop, which allows the application to pump messages for all the other run-loops that are created by the Java UI framework.

The next thing is an example java file that creates a window.

import java.awt.BorderLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class launch extends JFrame {
    JLabel emptyLabel;

    public launch() {
        super("FrameDemo");
        emptyLabel = new JLabel("Hello World");

        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        getContentPane().add(emptyLabel, BorderLayout.CENTER);
        pack();
    }

    public static void main(String args[]) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                launch l = new launch();
                l.setVisible(true);
            }
        });
    }
}

Because it's got an EXIT_ON_CLOSE option, when you close the window the application will terminate.

Gluing this into a makefile involves the following; it’s got conditional sections for building with either the 1.6 or 1.7 VM, and there are a couple of changes that are needed in the cpp code to get this to work:

JAVA_VERSION?=6

JHOME:=$(shell /usr/libexec/java_home -v 1.$(JAVA_VERSION))
SYSTEM:=$(shell uname -s)

CPPFLAGS += -DJAVA_VERSION=$(JAVA_VERSION)
CXXFLAGS += -g
ifeq ($(JAVA_VERSION),7)
VM_DIR=$(JHOME)/jre/lib
LDFLAGS += -L$(VM_DIR)/server -Wl,-rpath,$(VM_DIR) -Wl,-rpath,$(VM_DIR)/server
CXXFLAGS += -I$(JHOME)/include -I$(JHOME)/include/$(SYSTEM)
LDLIBS += -ljvm
else
CXXFLAGS += -framework JavaVM
endif

CXXFLAGS += -framework CoreFoundation

all: launch launch.class

launch.class: launch.java
	/usr/libexec/java_home -v 1.$(JAVA_VERSION) --exec javac launch.java

clean:
	rm -rf launch *.class *.dSYM

For the C++, you need to change the include path depending on whether you’re building for the 1.6 or 1.7 VM.

#if JAVA_VERSION == 6
#include <JavaVM/jni.h>
#else
#include <jni.h>
#endif


You can't really use code that was compiled against the 1.6 VM in the 1.7 environment, as it uses the JavaVM framework on 1.6, vs. libjvm.dylib on the 1.7 VM, but I would consider this a given across all JVM variants.

This source can be cloned from the Repository on BitBucket.