Generating videos of stimuli

acalapai

24 Nov, 2020 06:30 PM

This comment was split from the discussion: Arc in MWorks

Hi Chris,

We are planning to conduct an experiment online via a platform called
Labvanced. It is a cognitive task, so we need very simple stimuli, which we
have generated with a plugin that Ralf wrote. Each stimulus has three
features: motion, color, and shape. Usually, with our monkeys, we generate
these stimuli online with MWorks and run the task like any other task, but
in Labvanced we cannot generate such stimuli, mostly because there is no
motion engine on the platform.

So our current strategy is to have a little GIF or video clip for every
combination of these features and upload those clips on labvanced as
animated / moving stimuli.

My question for you would then be: is there a way to generate a video or a
series of images from MWorks that I can then quickly put together in Python
or MATLAB to create a GIF?

My plan would be to use MWorks to generate the stimuli, show one stimulus
at a time on the screen, save each frame as an image into a specific
folder, and then cycle through the folders with Python to generate one GIF
for each stimulus.
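A minimal sketch of the Python step, assuming the captured frames end up as numbered PNG files in one folder per stimulus and that the Pillow library is available (the folder layout, file naming, and frame rate here are assumptions, not MWorks output):

```python
from pathlib import Path

from PIL import Image  # Pillow

def frames_to_gif(frame_dir, out_path, frame_ms=17):
    """Combine numbered PNG frames from one folder into a looping GIF."""
    paths = sorted(Path(frame_dir).glob("*.png"))
    frames = [Image.open(p).convert("RGB") for p in paths]
    # Pillow writes an animated GIF when save_all is set and the
    # remaining frames are passed via append_images.
    frames[0].save(
        out_path,
        save_all=True,
        append_images=frames[1:],
        duration=frame_ms,  # per-frame display time in milliseconds
        loop=0,             # 0 = loop forever
    )
```

With real captures, the per-frame duration would be chosen to match the display's refresh interval (roughly 17 ms at 60 Hz).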

I am not sure whether it is possible in MWorks to export a frame to an
image automatically.

I hope this is clear; if not, I am also available for a Skype call to
explain further.

thanks in any case
best
Antonino

  1. Support Staff 1 Posted by Christopher Stawarz on 24 Nov, 2020 06:55 PM

    Hi Antonino,

    MWorks doesn't provide any built-in support for this. Probably the simplest approach would be to record the screen while showing all the stimuli and then edit the resulting video into separate clips for each stimulus configuration. If you prefer to generate GIFs, it looks like you have some options.

    Cheers,
    Chris

  2. 2 Posted by acalapai on 25 Nov, 2020 10:25 AM

    Thanks again Chris,

    So there is no way of recreating exactly what was on the screen, offline?
    Say, in MATLAB or Python?
    I am talking about a script that offline reproduces every pixel of every frame?

    lg
    a

  3. Support Staff 3 Posted by Christopher Stawarz on 02 Dec, 2020 01:54 PM

    Hi Antonino,

    Apologies for the delayed reply.

    So there is no way of recreating exactly what was on the screen, offline? Say, in MATLAB or Python? I am talking about a script that offline reproduces every pixel of every frame?

    No.

    That said, I have done some work on automatic capture of stimulus display frames to the event stream. In short, every time the stimulus display is updated, the contents of the display are written, as binary image data, to a new system variable (#stimDisplayCapture), which is recorded in the event file. The capture process is expensive in terms of CPU and GPU usage, although that expense is proportional to the resolution of the captured images.

    This is very experimental at the moment, and it may never work well enough to become a supported feature. However, if you want to try it, I can provide you with an MWorks build that includes it. At present, if you want to capture images at a scaled-down resolution, the dimensions must be hard coded, so I'll need to know what dimensions you want. Alternatively, I can code it so that the images are the same resolution as the display, but, as I said, this can be very challenging even for a relatively powerful system.

    Let me know if you're interested.

    Cheers,
    Chris

  4. 4 Posted by acalapai on 03 Dec, 2020 08:32 AM

    Dear Chris,

    first of all do not worry at all about delays.

    I would be very happy to check this out.

    lg
    a

  5. 5 Posted by acalapai on 20 Jan, 2021 12:59 PM

    Hi there Chris,

    regarding #stimDisplayCapture: I am currently capturing a 400x400-pixel
    portion of the mirror window. I select the portion of the window manually
    in QuickTime before starting the experiment.

    If frame acquisition works on the mirror window, I will adjust the
    stimulus and window sizes accordingly. But I could also use the parameter
    that cuts out a portion of the screen; it makes little difference to me.

    Given that I will generate 880 combinations, each lasting 6 seconds, I
    will need rather tight control of the timing. Will each frame have a
    timestamp?

    thanks
    LG
    a

  6. Support Staff 6 Posted by Christopher Stawarz on 21 Jan, 2021 03:05 PM

    Hi Antonino,

    If frame acquisition works on the mirror window, I will adjust the stimulus and window sizes accordingly.

    I hadn't thought of that approach, but it seems like a good idea. If you configure MWServer to display only the mirror window and set the mirror window's size and aspect ratio as desired, then MWorks can just capture the mirror window at full resolution and write the result to #stimDisplayCapture. If this sounds good to you, I'll make an MWorks build that does this.

    Will each frame have a timestamp?

    Yes. Each assignment to #stimDisplayCapture will have the same timestamp as the corresponding #stimDisplayUpdate event.
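    Those timestamps also make it possible to recover per-frame durations for the eventual GIFs. A small sketch, assuming MWorks event times are in microseconds and arrive as a plain list of integers:

```python
def frame_durations_ms(timestamps_us, fallback_ms=17):
    """Convert successive event timestamps (microseconds) into per-frame
    durations in milliseconds, suitable as GIF frame delays.  The last
    frame has no successor, so it gets fallback_ms (~one 60 Hz frame)."""
    durations = [
        round((t1 - t0) / 1000)
        for t0, t1 in zip(timestamps_us, timestamps_us[1:])
    ]
    durations.append(fallback_ms)
    return durations
```

Deriving the delays from the recorded timestamps, rather than assuming a fixed frame rate, keeps the GIFs faithful even if the display occasionally skips a frame.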

    Cheers,
    Chris

  7. 7 Posted by acalapai on 21 Jan, 2021 04:18 PM

    Well, that sounds optimal!

    I look forward to testing this!

    LG

    a

  8. Support Staff 8 Posted by Christopher Stawarz on 23 Jan, 2021 05:19 PM

    Hi Antonino,

    The build with display capture support is now available to download.

    As we discussed, it will capture the main display window at full resolution. For your purposes, you want the "main" window to be the mirror window, so you should select "Mirror window only" in MWServer's display preferences.

    Every time MWorks renders a new frame for the stimulus display, it will capture that frame in PNG format and write the data to the variable #stimDisplayCapture. If you extract the value associated with a #stimDisplayCapture event (which will be a bytes object in Python or a uint8 array in MATLAB) and write it as binary data to a file, you should be able to open that file in an image viewer.
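    As an illustration of that extraction step in Python: the file-writing helper below is plain Python, while dump_capture_events assumes the MWorks Python data tools (an MWKFile class with open/get_events, and events carrying .time and .data) are installed; the exact import path and API should be checked against your MWorks version.

```python
import os

def save_frame(out_dir, time_us, png_bytes):
    """Write one captured frame (raw PNG bytes) to disk,
    named by its event timestamp so frames sort chronologically."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{time_us:012d}.png")
    with open(path, "wb") as f:
        f.write(png_bytes)
    return path

def dump_capture_events(mwk_path, out_dir):
    """Pull #stimDisplayCapture events from an event file and save each
    value as a PNG.  The mworks import path and method names here are
    assumptions; verify them against your installed data tools."""
    from mworks.data import MWKFile  # hypothetical import path
    f = MWKFile(mwk_path)
    f.open()
    try:
        for evt in f.get_events(codes=["#stimDisplayCapture"]):
            save_frame(out_dir, evt.time, evt.data)
    finally:
        f.close()
```

    Naming each file by its event timestamp keeps the frames in order and preserves the timing information for later GIF assembly.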

    To quickly confirm that the display capture is working correctly, run your experiment with MWClient's Image Viewer window open, and set its "Image data variable" field to #stimDisplayCapture. If things are working, you should see a copy of MWServer's mirror window in the image viewer.

    Since this is still experimental code, I don't recommend using it for production experiments. If you have problems, please let me know!

    Chris

  9. 9 Posted by acalapai on 25 Jan, 2021 04:40 PM

    Hey Chris,

    I downloaded the build and installed it.

    I tried to compile a custom plugin that Ralf wrote to generate the
    stimuli I need for the clips, but I encountered an error. The same
    plugin compiled successfully with version 0.10 on the same machine.

    Attached are the plugin and the errors I encountered while trying to
    compile it.

    I am inexperienced with MWorks plugins and would not know where to
    start. I hope it is something trivial that you can spot quickly, so that
    I can focus on fixing it (or, as always, ask Ralf if I fail).

    Thanks in advance
    LG
    a

  10. Support Staff 10 Posted by Christopher Stawarz on 25 Jan, 2021 09:03 PM

    Hi Antonino,

    The library file libboost_system.a no longer exists. If you delete it from the "Frameworks & Libraries" section in the Xcode sidebar (see the attached image), you should be able to build the plugin.

    Cheers,
    Chris

  11. 11 Posted by acalapai on 28 Jan, 2021 09:12 AM

    Hey There,

    everything worked perfectly.

    I have created all the stimuli I needed.

    What type of feedback can I offer you?

    best
    Antonino

  12. Support Staff 12 Posted by Christopher Stawarz on 29 Apr, 2021 01:03 PM

    Hi Antonino,

    Thanks again for testing the in-progress version of stimulus display frame capture. This feature is now fully implemented and available in the MWorks nightly build. For more info, please see this discussion. (FYI, the user in that discussion is the one who initially requested the ability to record the stimulus display, and the in-progress code I shared with you represented my efforts to date to add that capability.)

    If you want to provide more feedback or suggest further improvements, please feel free to do so, either here or in the new discussion.

    Cheers,
    Chris
