List of Plugins
These are the plugins that are included in the jsPsych release.
Additional plugins may be available in the community contributions repository.
For an overview of what plugins are and how they work, see our plugins overview.
Plugin | Description |
---|---|
animation | Shows a sequence of images at a specified frame rate. Records key presses (including timing information) made by the participant while they are viewing the animation. |
audio‑button‑response | Play an audio file and allow the participant to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. |
audio‑keyboard‑response | Play an audio file and allow the participant to respond by pressing a key. |
audio‑slider‑response | Play an audio file and allow the participant to respond by moving a slider to indicate a value. |
browser‑check | Measures various features of the participant's browser and runs an inclusion check to see if the browser meets a custom set of criteria for running the study. |
call‑function | Executes an arbitrary function call. Doesn't display anything to the participant, and the participant is usually unaware that this plugin has even executed. It's useful for performing tasks at specified times in the experiment, such as saving data. |
canvas‑button‑response | Draw a stimulus on an HTML canvas element, and record a button click response. Useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). |
canvas‑keyboard‑response | Draw a stimulus on an HTML canvas element, and record a key press response. Useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). |
canvas‑slider‑response | Draw a stimulus on an HTML canvas element, and ask the participant to respond by moving a slider to indicate a value. Useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). |
categorize‑animation | The participant responds to an animation and can be given feedback about their response. |
categorize‑html | The participant responds to an HTML-formatted stimulus using the keyboard and can be given feedback about the correctness of their response. |
categorize‑image | The participant responds to an image using the keyboard and can be given feedback about the correctness of their response. |
cloze | Plugin for displaying a cloze test and checking participants' answers against a correct solution. |
external‑html | Displays an external HTML page (such as a consent form) and lets the participant respond by clicking a button or pressing a key. The plugin can validate their response, which is useful for making sure that a participant has granted consent before starting the experiment. |
free‑sort | Displays a set of images on the screen in random locations. Participants can click and drag the images to move them around the screen. Records all the moves made by the participant, so the sequence of moves can be recovered from the data. |
fullscreen | Toggles the experiment in and out of fullscreen mode. |
html‑audio‑response | Display an HTML-formatted stimulus and record an audio response via a microphone. |
html‑button‑response | Display an HTML-formatted stimulus and allow the participant to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. |
html‑keyboard‑response | Display an HTML-formatted stimulus and allow the participant to respond by pressing a key. |
html‑slider‑response | Display an HTML-formatted stimulus and allow the participant to respond by moving a slider to indicate a value. |
html‑video‑response | Display an HTML-formatted stimulus and record video data via a webcam. |
iat‑html | The implicit association task, using HTML-formatted stimuli. |
iat‑image | The implicit association task, using images as stimuli. |
image‑button‑response | Display an image and allow the participant to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. |
image‑keyboard‑response | Display an image and allow the participant to respond by pressing a key. |
image‑slider‑response | Display an image and allow the participant to respond by moving a slider to indicate a value. |
initialize‑camera | Request permission to use the participant's camera to record video, and allow the participant to choose which camera to use if multiple devices are available. Also allows setting the MIME type of the recorded video. |
initialize‑microphone | Request permission to use the participant's microphone to record audio, and allow the participant to choose which microphone to use if multiple devices are available. |
instructions | For displaying instructions to the participant. Allows the participant to navigate between pages of instructions using keys or buttons. |
maxdiff | Displays rows of alternatives to be selected for two mutually-exclusive categories, typically as 'most' or 'least' on a particular criterion (e.g. importance, preference, similarity). The participant responds by selecting one radio button corresponding to an alternative in both the left and right response columns. |
mirror‑camera | Shows a live feed of the participant's camera on the screen. |
preload | This plugin loads images, audio, and video files into the browser's memory before they are needed in the experiment, in order to improve stimulus and response timing, and to avoid disrupting the flow of the experiment. |
reconstruction | The participant interacts with a stimulus by modifying a parameter of the stimulus and observing the change in the stimulus in real-time. |
resize | Calibrate the display so that stimuli are rendered at a known physical size. |
same‑different‑html | A same-different judgment task. An HTML-formatted stimulus is shown, followed by a brief gap, and then another stimulus is shown. The participant indicates whether the stimuli are the same or different. |
same‑different‑image | A same-different judgment task. An image is shown, followed by a brief gap, and then another image is shown. The participant indicates whether the stimuli are the same or different. |
serial‑reaction‑time | A set of boxes are displayed on the screen and one of them changes color. The participant presses the key corresponding to the box that changed color as fast as possible. |
serial‑reaction‑time‑mouse | A set of boxes are displayed on the screen and one of them changes color. The participant clicks the box that changed color as fast as possible. |
sketchpad | Creates an interactive canvas that the participant can draw on using their mouse or touchscreen. |
survey‑html‑form | Renders a custom HTML form. Allows for mixing multiple kinds of form input. |
survey‑likert | Displays Likert-style questions. |
survey‑multi‑choice | Displays multiple choice questions with one answer allowed per question. |
survey‑multi‑select | Displays multiple choice questions with multiple answers allowed per question. |
survey‑text | Shows a prompt with a text box. The participant writes a response and then submits by clicking a button. |
video‑button‑response | Displays a video file with many options for customizing playback. The participant responds to the video by pressing a button. |
video‑keyboard‑response | Displays a video file with many options for customizing playback. The participant responds to the video by pressing a key. |
video‑slider‑response | Displays a video file with many options for customizing playback. The participant responds to the video by moving a slider. |
virtual‑chinrest | An implementation of the "virtual chinrest" procedure developed by Li, Joo, Yeatman, and Reinecke (2020). Calibrates the monitor to display items at a known physical size by having participants scale an image to be the same size as a physical credit card. Then uses a blind spot task to estimate the distance between the participant and the display. |
visual‑search‑circle | A customizable visual-search task modelled after Wang, Cavanagh, & Green (1994). The participant indicates whether or not a target is present among a set of distractors. The stimuli are displayed in a circle, evenly-spaced, equidistant from a fixation point. |
webgazer‑calibrate | Calibrates the WebGazer extension for eye tracking. |
webgazer‑init‑camera | Initializes the camera and helps the participant center their face for eye tracking. |
webgazer‑validate | Performs validation to measure precision and accuracy of WebGazer eye tracking predictions. |
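Each of the plugins above is used by setting the `type` parameter of a trial to the plugin's class. As a minimal sketch using jsPsych v7 syntax (the stimulus text and key choices here are illustrative, not taken from any specific plugin's documentation):

```javascript
// Assumes the jsPsych library and the html-keyboard-response plugin
// have been loaded (e.g., via <script> tags or npm packages), which
// define initJsPsych and jsPsychHtmlKeyboardResponse globally.
const jsPsych = initJsPsych();

// A single trial: the plugin class goes in `type`, and the remaining
// properties are that plugin's parameters.
const trial = {
  type: jsPsychHtmlKeyboardResponse,
  stimulus: '<p>Press F or J.</p>', // illustrative stimulus
  choices: ['f', 'j'],              // restrict which keys are accepted
};

// The timeline is an array of trials; jsPsych runs them in order.
jsPsych.run([trial]);
```

Swapping in a different plugin from the table (e.g., `jsPsychImageButtonResponse` for image‑button‑response) follows the same pattern, with that plugin's own parameters replacing `stimulus` and `choices` as needed.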