MWorks questions

sbohn8

08 Nov, 2017 03:13 PM

Hi Chris,

My name is Simon, and I am working with Arash and Yvonne at NIH. I have heard from Arash and from reading the discussion forum that MWEL is the future of MWorks, so I am trying to learn it directly instead of spending time on the editor. Yvonne has shared the code for her experiment with me, and I am struggling to understand it. My difficulty is in wrapping my head around the general architecture of the program. I think that is partly because there is a lot going on in that experiment. In one of your previous responses on the discussion forum you offered to make some custom examples. Do you have a “Hello World” example in MWEL that showcases the building blocks and flow of an experiment with a Python bridge? Alternatively, if it is easier, I would be happy to Skype or call you to chat about the general structure of MWorks experiments.

I know the documentation is still a work in progress and you have a lot on your plate, so once I am acquainted with the language I would be more than happy to write a Getting Started guide for future people going down this path.

All the best,

Simon Bohn

  1. Posted by Christopher Stawarz (Support Staff) on 08 Nov, 2017 09:07 PM

    Hi Simon,

    I have heard from Arash and from reading the discussion forum that MWEL is the future of MWorks, so I am trying to learn it directly instead of spending time on the editor.

    I wholeheartedly endorse this approach. I'm hopeful that MWEL will indeed be the future of MWorks. Thank you for making the effort to learn it! I'd value any feedback you have.

    Also, in case you weren't aware, there is some documentation on MWEL that's definitely worth reading.

    My difficulty is in wrapping my head around the general architecture of the program.

    At the most basic level, MWorks experiments are composed of one or more protocols, along with a set of shared supporting components (variables, I/O devices, visual stimuli, etc.). This is true regardless of whether the experiment is created in MWEditor or written in MWEL.

    Protocols encompass all the runtime logic of the experiment. They contain actions, trials, task systems (i.e. finite state machines), and other paradigm components (i.e. control-flow constructs). After loading the experiment via MWClient, you select the protocol you want to run from the drop-down menu, then press the play button to begin executing it.
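    In MWEL terms, that structure might look something like this minimal sketch (the variable and protocol names here are just illustrative, not from your experiment):

    // A shared supporting component: a variable declared outside any protocol
    var greeting = 'Hello, world!'

    // A protocol containing the runtime logic
    protocol 'Hello World' {
        // report writes a message (with variable substitution) to the console
        report ('greeting = $greeting')
    }

    Everything outside the protocol block (variables, devices, stimuli) is shared infrastructure; everything inside it is the logic that executes when you press play.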

    Do you have a “Hello World” example in MWEL that showcases the building blocks and flow of an experiment with a Python bridge?

    I've attached an example experiment that contains three different "Hello World" protocols. The first two are very simple, while the third is considerably more sophisticated and includes I/O, a task system, and multiple trials. MWorks offers you considerable flexibility in the design of your experiment, but the third protocol demonstrates a fairly typical high-level structure.

    As for Python integration, are you interested in running a client-side analysis script (Python bridge), or do you want to execute Python code in the server as part of your experimental logic?

    Cheers,
    Chris

  2. Posted by Christopher Stawarz (Support Staff) on 08 Nov, 2017 09:09 PM

    I know the documentation is still a work in progress and you have a lot on your plate, so once I am acquainted with the language I would be more than happy to write a Getting Started guide for future people going down this path.

    This would be greatly appreciated! Thanks!

    Chris

  3. Posted by Simon Bohn on 29 Nov, 2017 08:35 PM

    Hi Chris,

    Sorry for the delayed response. Thanksgiving holidays and other lab work got in the way. Thanks to your examples, I think I have a pretty good idea of the basics of the language. That said, I am still confused about how to programmatically duplicate things. Our experiment will have an RSVP component in which images are shown in rapid succession. I understand how to create a stimulus group from a folder of images with a list replicator, and how to queue and display a single image, but I do not see how to have 100 images randomly queued and displayed from the stimulus group, with the spiking during their presentation reported to a Python script for on-the-spot analysis.

    I tried to make a "proof of concept" list replicator that would just print the strings in a list to the console, but I ran into the issue that I cannot have actions in a list replicator. My thinking was that a list_replicator within a trial component would generate the queue_stimulus for each image, and the trial component would handle queuing them randomly. But I am not sure that is the right direction. Is it? I would appreciate any guidance you can offer.

    Also, is there any kind of debugging construct in MWEL that can be used inside a list_replicator, rather like scattering print statements throughout a C conditional or loop to figure out which parts are firing?

    Thanks,

    Simon

  4. Posted by Christopher Stawarz (Support Staff) on 01 Dec, 2017 03:24 PM

    Hi Simon,

    I do not see how to have 100 images randomly queued and displayed from the stimulus group, with the spiking during their presentation reported to a Python script for on-the-spot analysis.

    The simplest thing would be to use a selection variable to handle the randomization. For example, if you have 100 images in your stimulus group, and you want to display them in random order with no repeats:

    selection image_index (
        values = 0:99
        selection = random_without_replacement
        )
    

    Then, in your protocol:

    reset_selection (image_index)
    trial (nsamples = 100) {
        next_selection (image_index)
        queue_stimulus (images[image_index])
        // Display the image, etc...
        accept_selections (image_index)
    }
    

    As for viewing spikes, it depends on how you're acquiring the neural data. If you're using an external recording system (e.g. Plexon, Open Ephys), then you first have to get the relevant data back to MWorks.

    Can you tell me more about your recording setup?

    I tried to make a "proof of concept" list replicator that would just print the strings in a list to the console, but I ran into the issue that I cannot have actions in a list replicator.

    Actually, you can add actions. They just need to be inside a paradigm component. For a generic list of actions, the list component is a good choice.

    MWorks' replicators (list and range) can be tricky to use, primarily because they're expanded when the experiment is parsed, not when it's run. While they can be very useful, they aren't usually my first choice when trying to achieve run-time repetition. The combination of a selection variable and a task system, while potentially less elegant than a replicator, is usually easier to implement and debug.
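    As a sketch of what that might look like (treat the exact syntax as approximate; expansion still happens at parse time, so this simply produces one list, with its report action, per value of ${word}):

    protocol 'Replicator Demo' {
        list_replicator (
            variable = word
            values = 'Hello', 'World'
            ) {
            // Actions can't sit directly in the replicator,
            // but they can go inside a paradigm component like list
            list {
                report ('Replicated value: ${word}')
            }
        }
    }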

    Also, is there any kind of debugging construct in MWEL that can be used inside a list_replicator

    As I mentioned, replicators are expanded at parse time, not run time, so you can't use report actions or variable assignments to debug them.

    Chris

  5. Posted by Simon Bohn on 02 Jan, 2018 07:24 PM

    Hi Chris, just checking in. This is still an issue for us. Do you have a working EyeLink test script I could run?

    Thanks,

    Simon

  6. Posted by Christopher Stawarz (Support Staff) on 03 Jan, 2018 04:34 PM

    Hi Simon,

    This is still an issue for us.

    What is still an issue?

    Do you have a working EyeLink test script I could run?

    I've attached a very basic experiment that connects to an EyeLink and stores raw eye positions in MWorks variables. If you can tell me more about the problem(s) you're having, I may be able to provide more guidance.

    Chris

  7. Posted by Simon Bohn on 03 Jan, 2018 05:15 PM

    Hi Chris, it looks like my first email didn't make it through; sorry about that. Luckily, I figured out the solution this morning. It turns out the computer I was testing on did not have the EyeLink dev kit installed, which caused the following error:

    ERROR: Failed to create object.

    Extended information:

    reason: No factory for object type: iodevice/eyelink
    location: at line 4
    object_type: iodevice/eyelink
    ref_id: idm34907493993296
    component: eyelink
    parser_context: mw_create

    I tested on a computer with the dev kit installed, and everything appears to be working. Problem solved. It does bring up a new question, though. Is there any way to synchronize calibration between the EyeLink host computer and the MWorks display machine? We are sending analog data from the EyeLink to a TDT recording system for archival purposes, and both that stream and MWorks should be calibrated.

    Thanks,

    Simon

  8. Posted by Simon Bohn on 04 Jan, 2018 08:39 PM

    Chris,

    Sorry to bombard you with questions about the EyeLink, but I am encountering a new problem. I thought it was working correctly yesterday, but since I was on my own I couldn't verify it with the eye tracker test window; I could only see that the experiment was properly starting and initializing the EyeLink. Today Arash, Yvonne, and I tried to use the calibration protocol from the white noise experiment with one of our eyes being tracked. No traces appeared in the eye window, and upon opening the window, an error along the lines of "Eyetracker could not find variables: ,,,,," appeared in the console. I don't really know how the eye tracker communicates with MWorks, so I am at a bit of a loss for how to troubleshoot this and would greatly appreciate your guidance. Perhaps we could set up a time for you to call and walk me through it?

    Best,

    Simon Bohn

  9. Posted by Christopher Stawarz (Support Staff) on 08 Jan, 2018 04:35 PM

    Hi Simon,

    Is there any way to synchronize calibration between the EyeLink host computer and the MWorks display machine? We are sending analog data from the EyeLink to a TDT recording system for archival purposes, and both that stream and MWorks should be calibrated.

    I'll preface my response by saying that I'm not an EyeLink expert (or even an EyeLink user). I didn't write MWorks' EyeLink interface, and my knowledge of EyeLink stuff comes mostly from reading the user manuals (which, if you have the dev kit installed, are in /Applications/Eyelink/docs).

    I think the key questions here are (1) where are you performing your calibration and (2) what data are you sending to MWorks and the EyeLink's analog outputs?

    If you're doing your calibration on the EyeLink host computer, then I assume you're sending one of the calibrated data formats (HREF or gaze) to both MWorks and the analog outputs. In that case, you should already have the same eye positions in both places (modulo the scaling done to map eye position to voltage; see section 7.4.3 in the EyeLink 1000 User Manual).

    If you're calibrating inside MWorks, then I assume you're sending MWorks the raw (aka pupil) eye coordinates, which are the only position data you can get without performing an EyeLink-side calibration. In this case, I don't think there's a way to convey the MWorks-computed calibration back to the EyeLink hardware, so you're stuck with having raw, uncalibrated eye positions in your TDT recordings. However, for offline analysis, you can extract MWorks' calibration parameters from your event file and convert those raw positions into MWorks coordinates (again, after inverting the position-to-voltage scaling).

    No traces appeared in the eye window, and upon opening the window, an error along the lines of "Eyetracker could not find variables: ,,,,," appeared in the console.

    I think the error was actually "Eye window can't find the following variables...". It indicates that you haven't told MWClient's eye window where to find the data it needs.

    To fix this, open the eye window, and click the "Options" button. Then, enter appropriate variable names for X, Y, and Saccade. For X and Y, use the variables that contain the calibrated X and Y eye coordinates. For Saccade, use the same variable as the eye_state in your Eye Monitor.

    Chris

  10. Posted by Simon Bohn on 09 Jan, 2018 03:43 PM

    Thanks Chris, got it all figured out. In addition to my not understanding how the eye window worked, it turns out another reason it was not working was that we were only sending monocular data from the EyeLink to MWorks, so no eye_x/eye_y/eye_z position was being computed and the values were null.

    Thanks again,

    Simon

  11. Christopher Stawarz closed this discussion on 29 Jan, 2018 02:58 PM.

