MWorks doesn't provide any built-in support for this. Probably the simplest approach would be to record the screen while showing all the stimuli and then edit the resulting video into separate clips for each stimulus configuration. If you prefer to generate GIFs, it looks like you have some options.
So there is no way of recreating exactly what was on the screen, offline? Say, in MATLAB or Python? I am talking about a script that reproduces, offline, every pixel of every frame.
That said, I have done some work on automatic capture of stimulus display frames to the event stream. In short, every time the stimulus display is updated, the contents of the display are written, as binary image data, to a new system variable (#stimDisplayCapture), which is recorded in the event file. The capture process is expensive in terms of CPU and GPU usage, although that expense is proportional to the resolution of the captured images.
This is very experimental at the moment, and it may never work well enough to become a supported feature. However, if you want to try it, I can provide you with an MWorks build that includes it. At present, if you want to capture images at a scaled-down resolution, the dimensions must be hard-coded, so I'll need to know what dimensions you want. Alternatively, I can code it so that the images are the same resolution as the display, but, as I said, that can be very demanding even for a relatively powerful system.
Regarding #stimDisplayCapture: I am currently capturing a 400x400-pixel portion of the mirror window. I select that portion of the window manually with QuickTime before starting the experiment.

If frame acquisition works on the mirror window, I will size the stimuli and the window accordingly. But I could also use the parameter that crops out a portion of the screen; it makes little difference.

Given that I will generate 880 combinations, each lasting 6 seconds, I would need rather good control of the timing. Will each frame have a timestamp?
If frame acquisition works on the mirror window, I will size the stimuli and the window accordingly.
I hadn't thought of that approach, but it seems like a good idea. If you configure MWServer to display only the mirror window and set the mirror window's size and aspect ratio as desired, then MWorks can just capture the mirror window at full resolution and write the result to #stimDisplayCapture. If this sounds good to you, I'll make an MWorks build that does this.
Will each frame have a timestamp?
Yes. Each assignment to #stimDisplayCapture will have the same timestamp as the corresponding #stimDisplayUpdate event.
As we discussed, it will capture the main display window at full resolution. For your purposes, you want the "main" window to be the mirror window, so you should select "Mirror window only" in MWServer's display preferences.
Every time MWorks renders a new frame for the stimulus display, it will capture that frame in PNG format and write the data to the variable #stimDisplayCapture. If you extract the value associated with a #stimDisplayCapture event (which will be a bytes object in Python or a uint8 array in MATLAB) and write it as binary data to a file, you should be able to open that file in an image viewer.
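As a rough sketch of that extraction step, something like the following should dump each captured frame to its own PNG file. This assumes the MWorks Python data tools (mworks.data.MWKFile); the event file name is a placeholder:

```python
import os

def write_frames(frames, out_dir):
    """Write (timestamp, png_bytes) pairs to out_dir as numbered PNG files."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, (time_us, png_bytes) in enumerate(frames):
        # Encode both the frame index and the event timestamp (microseconds)
        # in the file name, so frames can be matched back to other events.
        path = os.path.join(out_dir, 'frame_%06d_%d.png' % (i, time_us))
        with open(path, 'wb') as f:
            f.write(png_bytes)
        paths.append(path)
    return paths

if __name__ == '__main__':
    # Hypothetical usage with the MWorks Python data tools; each
    # #stimDisplayCapture event's value is the PNG data as a bytes object.
    from mworks.data import MWKFile
    f = MWKFile('my_experiment.mwk2')
    f.open()
    frames = [(e.time, e.data)
              for e in f.get_events(codes=['#stimDisplayCapture'])]
    f.close()
    write_frames(frames, 'captured_frames')
```

Each output file should then open in an ordinary image viewer, and the timestamp in its name should match the corresponding #stimDisplayUpdate event.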
To quickly confirm that the display capture is working correctly, run your experiment with MWClient's Image Viewer window open, and set its "Image data variable" field to #stimDisplayCapture. If things are working, you should see a copy of MWServer's mirror window in the image viewer.
Since this is still experimental code, I don't recommend using it for production experiments. If you have problems, please let me know!
I tried to compile a custom plugin that Ralf wrote to generate the stimuli I need to make the clips of, but I encountered an error. The same plugin compiled successfully with version 0.10 on the same machine. Attached are the plugin and the errors I encountered while trying to compile it.

I am inexperienced with MWorks plugins and would not know where to start; I hope it is something trivial that you can spot faster, so that I can focus on fixing it (or, as always, ask Ralf if I fail).
Thanks again for testing the in-progress version of stimulus display frame capture. This feature is now fully implemented and available in the MWorks nightly build. For more info, please see this discussion. (FYI, the user in that discussion is the one who initially requested the ability to record the stimulus display, and the in-progress code I shared with you represented my efforts to date to add the capability.)
If you want to provide more feedback or suggest further improvements, please feel free to do so, either here or in the new discussion.