It sounds like the same root problem as before: at some point in the experiment, the rate at which events are written to disk falls drastically below the rate at which they are generated, producing a huge backlog, and the data file can't be closed until the backlog is cleared. Please see the linked comment for my suggestions on how to proceed.
The log file confirms that writing events to disk is taking an unreasonably long time (e.g. 100ms or more per 1000 events, where a reasonable duration would be <10ms). I can induce this problem on my Mac Pro by using a stress-testing tool that loads the system with disk writes.
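For anyone who wants to reproduce this kind of measurement outside of MWorks, here's a minimal Python sketch that times a 1000-row batch insert into a SQLite file. The table schema and file path are just placeholders for illustration, not MWorks' actual event format:

```python
import os
import sqlite3
import tempfile
import time

# Use a throwaway file so repeated runs don't accumulate data.
db_path = os.path.join(tempfile.mkdtemp(), "events.db")
conn = sqlite3.connect(db_path)

# Hypothetical event table; MWorks' real schema may differ.
conn.execute("CREATE TABLE events (code INTEGER, time INTEGER, data BLOB)")

rows = [(1, i, b"payload") for i in range(1000)]

start = time.perf_counter()
with conn:  # wrap the batch in a single transaction
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
elapsed_ms = (time.perf_counter() - start) * 1000

n = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(f"wrote {n} events in {elapsed_ms:.1f} ms")
conn.close()
```

On a healthy, unloaded disk this should finish in well under 10ms; if a stress-testing tool is saturating the disk, the elapsed time should climb into the range described above.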
I've also discovered that I can eliminate the problem, even with the stress-testing tool running, by setting SQLite's "synchronous" flag to "OFF". Disregarding for a moment whether this is really a good idea, it'd be interesting to know if this change resolves the issue for you, too. If you're willing to try it, I've created a modified build of MWorks that you can get at
It's identical to the current nightly build, except for the change to the synchronous flag.
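For reference, the synchronous setting is an ordinary SQLite pragma, so you can experiment with it in any SQLite client. A minimal Python sketch (again, not MWorks code; the file name is a placeholder):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "events.db")
conn = sqlite3.connect(db_path)

# OFF skips the fsync that SQLite normally performs, trading durability
# on crash or power loss for much faster writes. The pragma applies
# per-connection and must be re-issued on each new connection.
conn.execute("PRAGMA synchronous = OFF")

# Read the setting back: 0 = OFF, 1 = NORMAL, 2 = FULL, 3 = EXTRA.
sync_mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(sync_mode)  # → 0
conn.close()
```

The durability trade-off is real: with synchronous OFF, a power failure or OS crash at the wrong moment can corrupt the database, which is why it's worth confirming the fix before deciding whether to ship it.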
I should note that, when I start the stress-testing tool, Activity Monitor's data written/sec figure jumps from around 60 MB/s to 700 MB/s or more. Since you aren't seeing a similarly high write volume, you may be experiencing a different issue. Still, I think this is a worthwhile test to run.