Long saving time
Hi Chris,
If you remember, a while ago I had an issue with data not being saved. While that issue is resolved, I have a different one now. MWorks takes ages to close the data file (more than 1.5 hours at this point and counting). The file size so far is about 3.6 GB. I am using v0.9 as well. Any idea what causes this? If it's unavoidable, is it possible to start another instance of MWorks for my next experiment?
Cheers,
Beshoy
Support Staff 1 Posted by Christopher Sta... on 22 Oct, 2019 08:05 PM
Hi Beshoy,
It sounds like the same root problem as before: At some point in the experiment, the rate at which events are written falls drastically below the rate at which events are generated, producing a huge backlog, and the data file can't close until the backlog is cleared. Please see the linked comment for my suggestions on how to proceed.
Cheers,
Chris
2 Posted by beshoy.agayby on 23 Oct, 2019 10:57 AM
Hi Chris,
I checked the data read and write rates during the experiment. The data read/sec is around 40 KB, and the data written/sec is about 13-25 MB. Are those normal numbers?
Cheers,
Beshoy
Support Staff 3 Posted by Christopher Sta... on 23 Oct, 2019 07:25 PM
Hi Beshoy,
Yes, those are totally reasonable I/O rates.
Can you try running the special MWorks build I provided previously? The log file it generates should provide some insight into what's happening.
Chris
4 Posted by beshoy.agayby on 27 Nov, 2019 11:26 AM
Hi Chris,
I tried the special build, but I am not sure where to find the log file to send you.
Cheers,
Beshoy
Support Staff 5 Posted by Christopher Sta... on 27 Nov, 2019 03:20 PM
Hi Beshoy,
It should be in the same place as before:
/tmp/mwserver_event_file_log.txt
Thanks,
Chris
6 Posted by beshoy.agayby on 28 Nov, 2019 01:19 PM
Hi Chris,
I attached the log file. Again, it took hours to finish saving the file.
Thanks in advance for the help.
Cheers,
Beshoy
Support Staff 7 Posted by Christopher Sta... on 02 Dec, 2019 08:41 PM
Hi Beshoy,
Thanks for the log file. I'll take a look and see if it provides any new insight into the problem.
Chris
Support Staff 8 Posted by Christopher Sta... on 05 Dec, 2019 09:02 PM
Hi Beshoy,
The log file confirms that writing events to disk is taking an unreasonably long time (e.g. 100ms or more per 1000 events, where a reasonable duration would be <10ms). I can induce this problem on my Mac Pro by using a stress-testing tool that loads the system with disk writes.
I've also discovered that I can eliminate the problem, even with the stress-testing tool running, by setting SQLite's "synchronous" flag to "OFF". Disregarding for a moment whether this is really a good idea, it'd be interesting to know if this change resolves the issue for you, too. If you're willing to try it, I've created a modified build of MWorks that you can get at
https://www.dropbox.com/s/chojes7fa5eepl9/MWorks-sqlite_synchronous...
It's identical to the current nightly build, except for the change to the synchronous flag.
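For reference, here's a minimal sketch of what the change amounts to at the SQLite level. This is not the actual MWorks data-file code, and the file name is just a placeholder; it only illustrates issuing the pragma on the connection that writes events.

```cpp
// Minimal sketch: set SQLite's "synchronous" flag to OFF on the connection
// used to write events. With synchronous = OFF, SQLite hands writes to the
// OS without waiting for them to reach the disk, which avoids stalls when
// the disk is busy, at the cost of durability if the machine loses power.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;
    if (sqlite3_open("event_data.sqlite", &db) != SQLITE_OK) {  // placeholder file name
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    char *errmsg = nullptr;
    // The default is typically "FULL" (or "NORMAL" in WAL mode); "OFF" skips
    // the fsync that would otherwise happen on every transaction commit.
    if (sqlite3_exec(db, "PRAGMA synchronous = OFF", nullptr, nullptr, &errmsg) != SQLITE_OK) {
        std::fprintf(stderr, "pragma failed: %s\n", errmsg);
        sqlite3_free(errmsg);
    }

    // ... write event rows as usual ...

    sqlite3_close(db);
    return 0;
}
```

The trade-off is that a power failure or OS crash mid-write can leave the file truncated or corrupt, which is why SQLite doesn't use OFF by default.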
I should note that, when I start the stress-testing tool, Activity Monitor's data written/sec figure jumps from around 60MB to 700MB or more. Since you aren't seeing a similarly high write volume, you may be experiencing a different issue. Still, I think this is a worthwhile test to run.
Thanks,
Chris
9 Posted by beshoy.agayby on 07 Dec, 2019 05:29 PM
Hi Chris,
Thanks for the modified version. I will test it on Monday and let you know how it goes!
Cheers,
Beshoy
10 Posted by beshoy.agayby on 09 Dec, 2019 06:27 PM
Hi Chris,
This worked! The file size looks appropriate, and I took a quick look at the events; things seem in order.
Cheers,
Beshoy
Support Staff 11 Posted by Christopher Sta... on 10 Dec, 2019 09:22 PM
Hi Beshoy,
That's great news! I have to think a bit more about whether this should be the default configuration going forward, although my current feeling is that it should.
Cheers,
Chris