Manual Instruction For PsychoPy
Manual Instruction For PsychoPy
Python
Release 3.2.0
Jonathan Peirce
1 About PsychoPy 1
2 General issues 3
3 Installation 29
4 Getting Started 33
5 Builder 41
6 Coder 75
9 Troubleshooting 409
Index 449
i
ii
CHAPTER
ONE
ABOUT PSYCHOPY
If you use this software, please cite one of the publications that describe it. For most people the 2019 paper is
probably the most relevant (the papers from 2009, 2007 did not mention Builder at all, for instance).
• Peirce, J. W., Gray, J. R., Simpson, S., MacAskill, M. R., Höchenberger, R., Sogo, H., Kastman, E., Lindeløv,
J. (2019). PsychoPy2: experiments in behavior made easy. Behavior Research Methods. 10.3758/s13428-018-
01193-y
• Peirce, J. W., & MacAskill, M. R. (2018). Building Experiments in PsychoPy. London: Sage.
• Peirce J. W. (2009). Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2 (10),
1-8. doi:10.3389/neuro.11.010.2008
• Peirce, J. W. (2007). PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods, 162
(1-2):8-13 doi:10.1016/j.jneumeth.2006.11.017
Citing these papers gives the reviewer/reader of your study information about how the system works and it attributes
some credit for its original creation. Academic assessment (whether for promotion or even getting appointed to a job in
the first place) prioritises publications over making useful tools for others. Citations provide a way for the developers
to justify their continued involvement in the development of the package.
1
PsychoPy - Psychology software for Python, Release 3.2.0
TWO
GENERAL ISSUES
These are issues that users should be aware of, whether they are using Builder or Coder views.
PsychoPy provides a simple and intuitive way for you to calibrate your monitor and provide other information about
it and then import that information into your experiment.
Information is inserted in the Monitor Center (Tools menu), which allows you to store information about multiple
monitors and keep track of multiple calibrations for the same monitor.
For experiments written in the Builder view, you can then import this information by simply specifying the name of
the monitor that you wish to use in the Experiment settings dialog. For experiments created as scripts you can retrieve
the information when creating the Window by simply naming the monitor that you created in Monitor Center. e.g.:
Of course, the name of the monitor in the script needs to match perfectly the name given in the Monitor Center.
One of the particular features of PsychoPy is that you can specify the size and location of stimuli in units that are
independent of your particular setup, such as degrees of visual angle (see Units for the window and stimuli). In order
for this to be possible you need to inform PsychoPy of some characteristics of your monitor. Your choice of units
determines the information you need to provide:
Units Requires
‘norm’ (normalised to width/height) n/a
‘pix’ (pixels) Screen width in pixels
‘cm’ (centimeters on the screen) Screen width in pixels and screen width in cm
‘deg’ (degrees of visual angle) Screen width (pixels), screen width (cm) and distance (cm)
PsychoPy can also store and use information about the gamma correction required for your monitor. If you have
a Spectrascan PR650 (other devices will hopefully be added) you can perform an automated calibration in which
PsychoPy will measure the necessary gamma value to be applied to your monitor. Alternatively this can be added
3
PsychoPy - Psychology software for Python, Release 3.2.0
manually into the grid to the right of the Monitor Center. To run a calibration, connect the PR650 via the serial port
and, immediately after turning it on press the Find PR650 button in the Monitor Center.
Note that, if you don’t have a photometer to hand then there is a method for determining the necessary gamma value
psychophysically included in PsychoPy (see gammaMotionNull and gammaMotionAnalysis in the demos menu).
The two additional tables in the Calibration box of the Monitor Center provide conversion from DKL and LMS colour
spaces to RGB.
One of the key advantages of PsychoPy over many other experiment-building software packages is that stimuli can be
described in a wide variety of real-world, device-independent units. In most other systems you provide the stimuli at
a fixed size and location in pixels, or percentage of the screen, and then have to calculate how many cm or degrees of
visual angle that was.
In PsychoPy, after providing information about your monitor, via the Monitor Center, you can simply specify your
stimulus in the unit of your choice and allow PsychoPy to calculate the appropriate pixel size for you.
Your choice of unit depends on the circumstances. For conducting demos, the two normalised units (‘norm’ and
‘height’) are often handy because the stimulus scales naturally with the window size. For running an experiment it’s
usually best to use something like ‘cm’ or ‘deg’ so that the stimulus is a fixed size irrespective of the monitor/window.
For all units, the centre of the screen is represented by coordinates (0,0), negative values mean down/left, positive
values mean up/right.
With ‘height’ units everything is specified relative to the height of the window (note the window, not the screen).
As a result, the dimensions of a screen with standard 4:3 aspect ratio will range (-0.6667,-0.5) in the bottom left to
(+0.6667,+0.5) in the top right. For a standard widescreen (16:10 aspect ratio) the bottom left of the screen is (-0.8,-
0.5) and top-right is (+0.8,+0.5). This type of unit can be useful in that it scales with window size, unlike Degrees of
visual angle or Centimeters on screen, but stimuli remain square, unlike Normalised units units. Obviously it has the
disadvantage that the location of the right and left edges of the screen have to be determined from a knowledge of the
screen dimensions. (These can be determined at any point by the Window.size attribute.)
Spatial frequency: cycles per stimulus (so will scale with the size of the stimulus).
Requires : No monitor information
In normalised (‘norm’) units the window ranges in both x and y from -1 to +1. That is, the top right of the window
has coordinates (1,1), the bottom left is (-1,-1). Note that, in this scheme, setting the height of the stimulus to be 1.0,
will make it half the height of the window, not the full height (because the window has a total height of 1:-1 = 2!).
Also note that specifying the width and height to be equal will not result in a square stimulus if your window is not
square - the image will have the same aspect ratio as your window. e.g. on a 1024x768 window the size=(0.75,1) will
be square.
Spatial frequency: cycles per stimulus (so will scale with the size of the stimulus).
Requires : No monitor information
Set the size and location of the stimulus in centimeters on the screen.
Spatial frequency: cycles per cm
Requires : information about the screen width in cm and size in pixels
Assumes : pixels are square. Can be verified by drawing a stimulus with matching width and height and verifying that
it is in fact square. For a CRT this can be controlled by setting the size of the viewable screen (settings on the monitor
itself).
Use degrees of visual angle to set the size and location of the stimulus. This is, of course, dependent on the distance
that the participant sits from the screen as well as the screen itself, so make sure that this is controlled, and remember
to change the setting in Monitor Center if the viewing distance changes.
Spatial frequency: cycles per degree
Requires : information about the screen width in cm and pixels and the viewing distance in cm
There are actually three variants: ‘deg’, ‘degFlat’, and ‘degFlatPos’
‘deg’ : Most people using degrees of visual angle choose to make the assumption that a degree of visual angle spans
the same number of pixels at all parts of the screen. This isn’t actually true for standard flat screens - a degree of visual
angle at the edge of the screen spans more pixels because it is further from the eye. For moderate eccentricities the
error is small (a 0.2% error in size calculation at 3 deg eccentricity) but grows as stimuli are placed further from the
centre of the screen (a 2% error at 10 deg). For most studies this form of calculation is preferred, as it does not result
in a warped appearance of visual stimuli, but if you need greater precision at far eccentricities then choose one of the
alternatives below.
‘degFlatPos’ : This accounts for flat screens in calculating position coordinates of visual stimuli but leaves size and
spatial frequency uncorrected. This means that an evenly spaced grid of visual stimuli will appear warped in position
but will
‘degFlat’: This corrects the calculations of degrees for flatness of the screen for each vertex of your stimuli. Square
stimuli in the periphery will, therefore, become more spaced apart but they will also get larger and rhomboid in the
pixels that they occupy.
You can also specify the size and location of your stimulus in pixels. Obviously this has the disadvantage that sizes
are specific to your monitor (because all monitors differ in pixel size).
Spatial frequency: `cycles per pixel` (this catches people out but is used to be in keeping with the other units.
If using pixels as your units you probably want a spatial frequency in the range 0.2-0.001 (i.e. from 1 cycle every 5
pixels to one every 100 pixels).
Requires : information about the size of the screen (not window) in pixels, although this can often be deduce from the
operating system if it has been set correctly there.
Assumes: nothing
The color of stimuli can be specified when creating a stimulus and when using setColor() in a variety of ways. There
are three basic color spaces that PsychoPy can use, RGB, DKL and LMS but colors can also be specified by a name
(e.g. ‘DarkSalmon’) or by a hexadecimal string (e.g. ‘#00FF00’).
examples:
Any of the web/X11 color names can be used to specify a color. These are then converted into RGB space by PsychoPy.
These are not case sensitive, but should not include any spaces.
This is really just another way of specifying the r,g,b values of a color, where each gun’s value is given by two
hexadecimal characters. For some examples see this chart. To use these in PsychoPy they should be formatted as a
string, beginning with # and with no spaces. (NB on a British Mac keyboard the # key is hidden - you need to press
Alt-3)
This is the simplest color space, in which colors are represented by a triplet of values that specify the red green and
blue intensities. These three values each range between -1 and 1.
Examples:
• [1,1,1] is white
• [0,0,0] is grey
• [-1,-1,-1] is black
• [1.0,-1,-1] is red
• [1.0,0.6,0.6] is pink
The reason that these colors are expressed ranging between 1 and -1 (rather than 0:1 or 0:255) is that many experiments,
particularly in visual science where PsychoPy has its roots, express colors as deviations from a grey screen. Under
that scheme a value of -1 is the maximum decrement from grey and +1 is the maximum increment above grey.
Note that PsychoPy will use your monitor calibration to linearize this for each gun. E.g., 0 will be halfway between
the minimum luminance and maximum luminance for each gun, if your monitor gammaGrid is set correctly.
Another way to specify colors is in terms of their Hue, Saturation and ‘Value’ (HSV). For a description of the color
space see the Wikipedia HSV entry. The Hue in this case is specified in degrees, the saturation ranging 0:1 and the
‘value’ also ranging 0:1.
Examples:
• [0,1,1] is red
• [0,0.5,1] is pink
• [90,1,1] is cyan
• [anything, 0, 1] is white
• [anything, 0, 0.5] is grey
• [anything, anything,0] is black
Note that colors specified in this space (like in RGB space) are not going to be the same another monitor; they are
device-specific. They simply specify the intensity of the 3 primaries of your monitor, but these differ between monitors.
As with the RGB space gamma correction is automatically applied if available.
To use DKL color space the monitor should be calibrated with an appropriate spectrophotometer, such as a PR650.
In the Derrington, Krauskopf and Lennie1 color space (based on the Macleod and Boynton2 chromaticity diagram)
colors are represented in a 3-dimensional space using spherical coordinates that specify the elevation from the isolu-
minant plane, the azimuth (the hue) and the contrast (as a fraction of the maximal modulations along the cardinal axes
of the space).
1 Derrington, A.M., Krauskopf, J., & Lennie, P. (1984). Chromatic Mechanisms in Lateral Geniculate Nucleus of Macaque. Journal of Physiol-
In PsychoPy these values are specified in units of degrees for elevation and azimuth and as a float (ranging -1:1) for
the contrast.
Note that not all colors that can be specified in DKL color space can be reproduced on a monitor. Here is a movie
plotting in DKL space (showing cartesian coordinates, not spherical coordinates) the gamut of colors available on an
example CRT monitor.
Examples:
• [90,0,1] is white (maximum elevation aligns the color with the luminance axis)
• [0,0,1] is an isoluminant stimulus, with azimuth 0 (S-axis)
• [0,45,1] is an isoluminant stimulus,with an oblique azimuth
To use LMS color space the monitor should be calibrated with an appropriate spectrophotometer, such as a PR650.
In this color space you can specify the relative strength of stimulation desired for each cone independently, each with
a value from -1:1. This is particularly useful for experiments that need to generate cone isolating stimuli (for which
modulation is only affecting a single cone type).
2.4 Preferences
The Preferences dialog allows to adjust general settings for different parts of PsychoPy. The preferences settings
are saved in the configuration file userPrefs.cfg. The labels in brackets for the different options below represent the
abbreviations used in the userPrefs.cfg file.
In rare cases, you might want to adjust the preferences on a per-experiment basis. See the API reference for the
Preferences class here.
These settings are common to all components of the application (Coder and Builder etc)
show start-up tips (showStartupTips): Display tips when starting PsychoPy.
large icons (largeIcons): Do you want large icons (on some versions of wx on macOS this has no effect)?
default view (defaultView): Determines which view(s) open when the PsychoPy app starts up. Default is ‘last’,
which fetches the same views as were open when PsychoPy last closed.
reset preferences (resetPrefs): Reset preferences to defaults on next restart of PsychoPy.
auto-save prefs (autoSavePrefs): Save any unsaved preferences before closing the window.
debug mode (debugMode): Enable features for debugging PsychoPy itself, including unit-tests.
locale (locale): Language to use in menus etc.; not all translations are available. Select a value, then restart the app.
Think about adding translations for your language.
reload previous exp (reloadPrevExp): Select whether to automatically reload a previously opened experiment at
start-up.
uncluttered namespace (unclutteredNamespace): If this option is selected, the scripts will use more complex code,
but the advantage is that there is less of a chance that name conflicts will arise.
components folders (componentsFolders): A list of folder path names that can hold additional custom components
for the Builder view; expects a comma-separated list.
hidden components (hiddenComponents): A list of components to hide (e.g., because you never use them)
unpacked demos dir (unpackedDemosDir): Location of Builder demos on this computer (after unpacking).
saved data folder (savedDataFolder): Name of the folder where subject data should be saved (relative to the script
location).
Flow at top (topFlow): If selected, the “Flow” section will be shown topmost and the “Components” section will be
on the left. Restart PsychoPy to activate this option.
always show readme (alwaysShowReadme): If selected, PsychoPy always shows the Readme file if you open an
experiment. The Readme file needs to be located in the same folder as the experiment file.
max favorites (maxFavorites): Upper limit on how many components can be in the Favorites menu of the Compo-
nents panel.
2.4. Preferences 9
PsychoPy - Psychology software for Python, Release 3.2.0
code font (codeFont): A list of font names to be used for code display. The first found on the system will be used.
comment font (commentFont): A list of font names to be used for comments sections. The first found on the system
will be used
output font (outputFont): A list of font names to be used in the output panel. The first found on the system will be
used.
code font size (codeFontSize): An integer between 6 and 24 that specifies the font size for code display in points.
output font size (outputFontSize): An integer between 6 and 24 that specifies the font size for output display in
points.
show source asst (showSourceAsst): Do you want to show the source assistant panel (to the right of the Coder view)?
On Windows this provides help about the current function if it can be found. On macOS the source assistant is
of limited use and is disabled by default.
show output (showOutput): Show the output panel in the Coder view. If shown all python output from the session
will be output to this panel. Otherwise it will be directed to the original location (typically the terminal window
that called PsychoPy application to open).
reload previous files (reloadPrevFiles): Should PsychoPy fetch the files that you previously had open when it
launches?
preferred shell (preferredShell): Specify which shell should be used for the coder shell window.
newline convention (newlineConvention): Specify which character sequence should be used to encode newlines in
code files: unix = n (line feed only), dos = rn (carriage return plus line feed).
window type (winType): PsychoPy can use one of two ‘backends’ for creating windows and drawing; pygame, pyglet
and glfw. Here you can set the default backend to be used.
units (units): Default units for windows and visual stimuli (‘deg’, ‘norm’, ‘cm’, ‘pix’). See Units for the window and
stimuli. Can be overridden by individual experiments.
full-screen (fullscr): Should windows be created full screen by default? Can be overridden by individual experiments.
allow GUI (allowGUI): When the window is created, should the frame of the window and the mouse pointer be
visible. If set to False then both will be hidden.
paths (paths): Paths for additional Python packages can be specified. See more information here.
audio library (audioLib): As explained in the Sound documentation, currently two sound libraries are available,
pygame and pyo.
audio driver (audioDriver): Also, different audio drivers are available.
flac audio compression (flac): Set flac audio compression.
parallel ports (parallelPorts): This list determines the addresses available in the drop-down menu for the Parallel
Port Out Component.
proxy (proxy): The proxy server used to connect to the internet if needed. Must be of the form
http://111.222.333.444:5555
auto-proxy (autoProxy): PsychoPy should try to deduce the proxy automatically. If this is True and autoProxy is
successful, then the above field should contain a valid proxy address.
allow usage stats (allowUsageStats): Allow PsychoPy to ping a website at when the application starts up. Please
leave this set to True. The info sent is simply a string that gives the date, PsychoPy version and platform info.
There is no cost to you: no data is sent that could identify you and PsychoPy will not be delayed in starting as a
result. The aim is simple: if we can show that lots of people are using PsychoPy there is a greater chance of it
being improved faster in the future.
check for updates (checkForUpdates): PsychoPy can (hopefully) automatically fetch and install updates. This will
only work for minor updates and is still in a very experimental state (as of v1.51.00).
timeout (timeout): Maximum time in seconds to wait for a connection response.
There are many shortcut keys that you can use in PsychoPy. For instance did you realise that you can indent or outdent
a block of code with Ctrl-[ and Ctrl-] ?
There are a number of different forms of output that PsychoPy can generate, depending on the study and your preferred
analysis software. Multiple file types can be output from a single experiment (e.g. Excel data file for a quick browse,
Log file to check for error messages and PsychoPy data file (.psydat) for detailed analysis)
Log files are actually rather difficult to use for data analysis but provide a chronological record of everything that
happened during your study. The level of content in them depends on you. See Logging data for further information.
This is actually a TrialHandler or StairHandler object that has been saved to disk with the python cPickle
module.
These files are designed to be used by experienced users with previous experience of python and, probably, matplotlib.
The contents of the file can be explored with dir(), as any other python object.
These files are ideal for batch analysis with a python script and plotting via matplotlib. They contain more information
than the Excel or csv data files, and can even be used to (re)create those files.
Of particular interest might be the attributes of the Handler:
extraInfo the extraInfo dictionary provided to the Handler during its creation
trialList the list of dictionaries provided to the Handler during its creation
data a dictionary of 2D numpy arrays. Each entry in the dictionary represents a type of data (e.g.
if you added ‘rt’ data during your experiment using ~psychopy.data.TrialHandler.addData then
‘rt’ will be a key). For each of those entries the 2D array represents the condition number and
repeat number (remember that these start at 0 in python, unlike Matlab(TM) which starts at 1)
For example, to open a psydat file and examine some of its contents with:
Ideally, we should provide a demo script here for fetching and plotting some data (feel free to contribute).
This form of data file is the default data output from Builder experiments as of v1.74.00. Rather than summarising
data in a spreadsheet where one row represents all the data from a single condition (as in the summarised data format),
in long-wide data files the data is not collapsed by condition, but written chronologically with one row representing
one trial (hence it is typically longer than summarised data files). One column in this format is used for every single
piece of information available in the experiment, even where that information might be considered redundant (hence
the format is also ‘wide’).
Although these data files might not be quite as easy to read quickly by the experimenter, they are ideal for import and
analysis under packages such as R, SPSS or Matlab.
Excel 2007 files (.xlsx) are a useful and flexible way to output data as a spreadsheet. The file format is open and
supported by nearly all spreadsheet applications (including older versions of Excel and also OpenOffice). N.B. because
.xlsx files are widely supported, the older Excel file format (.xls) is not likely to be supported by PsychoPy unless a
user contributes the code to the project.
Data from PsychoPy are output as a table, with a header row. Each row represents one condition (trial type) as given
to the TrialHandler. Each column represents a different type of data as given in the header. For some data, where
there are multiple columns for a single entry in the header. This indicates multiple trials. For example, with a standard
data file in which response time has been collected as ‘rt’ there will be a heading rt_raw with several columns, one for
each trial that occurred for the various trial types, and also an rt_mean heading with just a single column giving the
mean reaction time for each condition.
If you’re creating experiments by writing scripts then you can specify the sheet name as well as file name for Excel file
outputs. This way you can store multiple sessions for a single subject (use the subject as the filename and a date-stamp
as the sheetname) or a single file for multiple subjects (give the experiment name as the filename and the participant
as the sheetname).
Builder experiments use the participant name as the file name and then create a sheet in the Excel file for each loop of
the experiment. e.g. you could have a set of practice trials in a loop, followed by a set of main trials, and these would
each receive their own sheet in the data file.
For maximum compatibility, especially for legacy analysis software, you can choose to output your data as a delimited
text file. Typically this would be comma-separated values (.csv file) or tab-delimited (.tsv file). The format of those
files is exactly the same as the Excel file, but is limited by the file format to a single sheet.
Monitors typically don’t have linear outputs; when you request luminance level of 127, it is not exactly half the
luminance of value 254. For experiments that require the luminance values to be linear, a correction needs to be put
in place for this nonlinearity which typically involves fitting a power law or gamma (𝛾) function to the monitor output
values. This process is often referred to as gamma correction.
PsychoPy can help you perform gamma correction on your monitor, especially if you have one of the supported
photometers/spectroradiometers.
There are various different equations with which to perform gamma correction. The simple equation (2.1) is assumed
by most hardware manufacturers and gives a reasonable first approximation to a linear correction. The full gamma
correction equation (2.3) is more general, and likely more accurate especially where the lowest luminance value of the
monitor is bright, but also requires more information. It can only be used in labs that do have access to a photometer
or similar device.
The simple form of correction (as used by most hardware and software) is this:
𝐿(𝑉 ) = 𝑎 + 𝑘𝑉 𝛾 (2.1)
where 𝐿 is the final luminance value, 𝑉 is the requested intensity (ranging 0 to 1), 𝑎, 𝑘 and 𝛾 are constants for the
monitor.
This equation assumes that the luminance where the monitor is set to ‘black’ (V=0) comes entirely from the surround
and is therefore not subject to the same nonlinearity as the monitor. If the monitor itself contributes significantly to 𝑎
then the function may not fit very well and the correction will be poor.
The advantage of this function is that the calibrating system (PsychoPy in this case) does not need to know anything
more about the monitor than the gamma value itself (for each gun). For the full gamma equation (2.3), the system
needs to know about several additional variables. The look-up table (LUT) values required to give a (roughly) linear
luminance output can be generated by:
𝐿𝑈 𝑇 (𝑉 ) = 𝑉 1/𝛾 (2.2)
For very accurate gamma correction PsychoPy uses a more general form of the equation above, which can separate
the contribution of the monitor and the background to the lowest luminance level:
𝐿(𝑉 ) = 𝑎 + (𝑏 + 𝑘𝑉 )𝛾 (2.3)
This equation makes no assumption about the origin of the base luminance value, but requires that the system knows
the values of 𝑏 and 𝑘 as well as 𝛾.
The inverse values, required to build the LUT are found by:
((1 − 𝑉 )𝑏𝛾 + 𝑉 (𝑏 + 𝑘)𝛾 )1/𝛾 − 𝑏 (2.4)
𝐿𝑈 𝑇 (𝑉 ) =
𝑘
This is derived below, for the interested reader. ;-)
And the associated luminance values for each point in the LUT are given by:
The difficulty with the full gamma equation (2.3) is that the presence of the 𝑏 value complicates the issue of calculating
the inverse values for the LUT. The simple inverse of (2.3) as a function of output luminance values is:
𝑚 = 𝑎 + 𝑏𝛾
(2.6)
𝑀 = 𝑎 + (𝑏 + 𝑘)𝛾
Thus, the luminance value, L at any given point in the LUT, V, is given by
𝐿(𝑉 ) = 𝑚 + (𝑀 − 𝑚)𝑉
= 𝑎 + 𝑏𝛾 + (𝑎 + (𝑏 + 𝑘)𝛾 − 𝑎 − 𝑏𝛾 )𝑉
(2.7)
= 𝑎 + 𝑏𝛾 + ((𝑏 + 𝑘)𝛾 − 𝑏𝛾 )𝑉
= 𝑎 + (1 − 𝑉 )𝑏𝛾 + 𝑉 (𝑏 + 𝑘)𝛾
(𝐿 − 𝑎)1/𝛾 − 𝑏 (2.8)
𝐿𝑈 𝑇 (𝐿) =
𝑘
and substitute our 𝐿(𝑉 ) values from (2.7):
2.6.4 References
All rendering performed by PsychoPy uses hardware-accelerated OpenGL rendering where possible. This means that,
as much as possible, the necessary processing to calculate pixel values is performed by the graphics card GPU rather
than by the CPU. For example, when an image is rotated the calculations to determine what pixel values should result,
and any interpolation that is needed, are determined by the graphics card automatically.
In the double-buffered system, stimuli are initially drawn into a piece of memory on the graphics card called the ‘back
buffer’, while the screen presents the ‘front buffer’. The back buffer initially starts blank (all pixels are set to the
window’s defined color) and as stimuli are ‘rendered’ they are gradually added to this back buffer. The way in which
stimuli are combined according to transparency rules is determined by the blend mode of the window. At some point
in time, when we have rendered to this buffer all the objects that we wish to be presented, the buffers are ‘flipped’ such
that the stimuli we have been drawing are presented simultaneously. The monitor updates at a very precise fixed rate
and the flipping of the window will be synchronised to this monitor update if possible (see Sync to VBL and wait for
VBL).
Each update of the window is referred to as a ‘frame’ and this ultimately determines the temporal resolution with
which stimuli can be presented (you cannot present your stimulus for any duration other than a multiple of the frame
duration). In addition to synchronising flips to the frame refresh rate, PsychoPy can optionally go a further step of not
allowing the code to continue until a screen flip has occurred on the screen, which is useful in ascertaining exactly
when the frame refresh occurred (and, thus, when your stimulus actually appeared to the subject). These timestamps
are very precise on most computers. For further information about synchronising and waiting for the refresh see Sync
to VBL and wait for VBL.
If the code/processing required to render all you stimuli to the screen takes longer to complete than one screen refresh
then you will ‘drop/skip a frame’. In this case the previous frame will be left on screen for a further frame period
and the flip will only take effect on the following screen update. As a result, time-consuming operations such as disk
accesses or execution of many lines of code, should be avoided while stimuli are being dynamically updated (if you
care about the precise timing of your stimuli). For further information see the sections on Detecting dropped frames
and Reducing dropped frames.
The fact that modern graphics processors are extremely powerful; they can carry out a great deal of processing from
a very small number of commands. Consider, for instance, the PsychoPy Coder demo elementArrayStim in which
several hundred Gabor patches are updated frame by frame. The graphics card has to blend a sinusoidal grating with
a grey background, using a Gaussian profile, several hundred times each at a different orientation and location and it
does this in less than one screen refresh on a good graphics card.
There are three things that are relatively slow and should be avoided at critical points in time (e.g. when rendering a
dynamic or brief stimulus). These are:
1. disk accesses
2. passing large amounts of data to the graphics card
3. making large numbers of python calls.
Functions that are very fast:
1. Calls that move, resize, rotate your stimuli are likely to carry almost no overhead
2. Calls that alter the color, contrast or opacity of your stimulus will also have no overhead IF your graphics card
supports OpenGL Shaders
3. Updating of stimulus parameters for psychopy.visual.ElementArrayStim is also surprisingly fast BUT you
should try to update your stimuli using numpy arrays for the maths rather than for. . . loops
Notable slow functions in PsychoPy calls:
1. Calls to set the image or set the mask of a stimulus. This involves having to transfer large amounts of data
between the computer’s main processor and the graphics card, which is a relatively time-consuming process.
2. Any of your own code that uses a Python for. . . loop is likely to be slow if you have a large number of cycles
through the loop. Try to ‘vectorise’ your code using a numpy array instead.
1. Keep images as small as possible. This is meant in terms of number of pixels, not in terms of Mb on your disk.
Reducing the size of the image on your disk might have been achieved by image compression such as using
jpeg images but these introduce artefacts and do nothing to reduce the problem of send large amounts of data
from the CPU to the graphics card. Keep in mind the size that the image will appear on your monitor and how
many pixels it will occupy there. If you took your photo using a 10 megapixel camera that means the image is
represented by 30 million numbers (a red, green and blue) but your computer monitor will have, at most, around
2 megapixels (1960x1080).
2. Try to use square powers of two for your image sizes. This is efficient because computer memory is organised
according to powers of two (did you notice how often numbers like 128, 512, 1024 seem to come up when
you buy your computer?). Also several mathematical routines (anything involving Fourier maths, which is
used a lot in graphics processing) are faster with power-of-two sequences. For the psychopy.visual.
GratingStim a texture/mask of this size is required and if you don’t provide one then your texture will be
‘upsampled’ to the next larger square-power-of-2, so you can save this interpolation step by providing it in the
right shape initially.
3. Get a faster graphics card. Upgrading to a more recent card will cost around £30. If you’re currently using an
integrated Intel graphics chip then almost any graphics card will be an advantage. Try to get an nVidia or an
ATI Radeon card.
You may have heard mention of ‘shaders’ on the users mailing list and wondered what that meant (or maybe you didn’t
wonder at all and just went for a donut!). OpenGL shader programs allow modern graphics cards to make changes to
things during the rendering process (i.e. while the image is being drawn). To use this you need a graphics card that
supports OpenGL 2.1 and PsychoPy will only make use of shaders if a specific OpenGL extension that allows floating
point textures is also supported. Nowadays nearly all graphics cards support these features - even Intel chips from
Intel!
One example of how such shaders are used is the way that PsychoPy colors greyscale images. If you provide a
greyscale image as a 128x128 pixel texture and set its color to be red then, without shaders, PsychoPy needs to create
a texture that contains the 3x128x128 values where each of the 3 planes is scaled according to the RGB values you
require. If you change the color of the stimulus a new texture has to be generated with the new weightings for the
3 planes. However, with a shader program, that final step of scaling the texture value according to the appropriate
RGB value can be done by the graphics card. That means we can upload just the 128x128 texture (taking 1/3 as much
time to upload to the graphics card) and then we each time we change the color of the stimulus we just a new RGB
triplet (only 3 numbers) without having to recalculate the texture. As a result, on graphics cards that support shaders,
changing colors, contrasts and opacities etc. has almost zero overhead.
A ‘blend function’ determines how the values of new pixels being drawn should be combined with existing pixels in
the ‘frame buffer’.
blendMode = ‘avg’
This mode is exactly akin to the real-world scenario of objects with varying degrees of transparency being placed
in front of each other; increasingly transparent objects allow increasing amounts of the underlying stimuli to show
through. Opaque stimuli will simply occlude previously drawn objects. With each increasing semi-transparent object
to be added, the visibility of the first object becomes increasingly weak. The order in which stimuli are rendered is
very important since it determines the ordering of the layers. Mathematically, each pixel colour is constructed from
opacity*stimRGB + (1-opacity)*backgroundRGB. This was the only mode available before PsychoPy version 1.80
and remains the default for the sake of backwards compatibility.
blendMode = ‘add’
If the window blendMode is set to ‘add’ then the value of the new stimulus does not in any way replace that of the
existing stimuli that have been drawn; it is added to it. In this case the value of opacity still affects the weighting of
the new stimulus being drawn but the first stimulus to be drawn is never ‘occluded’ as such. The sum is performed
using the signed values of the color representation in PsychoPy, with the mean grey being represented by zero. So a
dark patch added to a dark background will get even darker. For grating stimuli this means that contrast is summed
correctly.
This blend mode is ideal if you want to test, for example, the way that subjects perceive the sum of two potentially
overlapping stimuli. It is also needed for rendering stereo/dichoptic stimuli to be viewed through colored anaglyph
glasses.
If stimuli are combined in such a way that an impossible luminance value is requested of any of the monitor guns then
that pixel will be out of bounds. In this case the pixel can either be clipped to provide the nearest possible colour, or
can be artificially colored with noise, highlighting the problem if the user would prefer to know that this has happened.
PsychoPy will always, if the graphics card allows it, synchronise the flipping of the window with the vertical blank
interval (VBL aka VBI) of the screen. This prevents visual artefacts such as ‘tearing’ of moving stimuli. This does
not, itself, indicate that the script also waits for the physical frame flip to occur before continuing. If the waitBlanking
window argument is set to False then, although the window refreshes themselves will only occur in sync with the
screen VBL, the win.flip() call will not actually wait for this to occur, such that preparations can continue immediately
for the next frame. For rendering purposes this is actually optimal and will reduce the likelihood of frames being
dropped during rendering.
By default the PsychoPy Window will also wait for the VBL (waitBlanking=True) . Although this is slightly less
efficient for rendering purposes it is necessary if we need to know exactly when a frame flip occurred (e.g. to timestamp
when the stimulus was physically presented). On most systems this will provide a very accurate measure of when the
stimulus was presented (with a variance typically well below 1ms but this should be tested on your system).
2.8 Projects
As of version 1.84 PsychoPy connects directly with the Open Science Framework website (http://OSF.io) allowing
you to search for existing projects and upload your own experiments and data.
There are several reasons you may want to do this:
• sharing files with collaborators
• sharing files with the rest of the scientific community
• maintaining historical evidence of your work
• providing yourself with a simple version control across your different machines
You may find it simple to share files with your collaborators using dropbox but that means your data are stored by
a commercial company over which you have no control and with no interest in scientific integrity. Check with your
ethics committee how they feel about your data (e.g. personal details of participants?) being stored on dropbox. OSF,
by comparison, is designed for scientists to stored their data securely and forever.
Once you’ve created a project on OSF you can add other contributors to it and when they log in via PsychoPy they
will see the projects they share with you (as well as the project they have created themselves). Then they can sync
with that project just like any other.
2.8. Projects 17
PsychoPy - Psychology software for Python, Release 3.2.0
Optionally, you can make your project (or subsets of it) publicly accessible so that others can view the files. This has
various advantages, to the scientific field but also to you as a scientist.
Good for open science:
• Sharing your work allows scientists to work out why one experiment gave a different result to another;
there are often subtleties in the exact construction of a study that didn’t get described fully in the methods
section. By sharing the actual experiment, rather than just a description of it, we can reduce the failings of
replications
• Sharing your work helps others get up and running quickly. That’s good for the scientific community. We
want science to progress faster and with fewer mistakes.
Some people feel that, having put in all that work to create their study, it would be giving up their advantage to let
others simply use their work. Luckily, sharing is good for you as a scientist as well!
Good for the scientist:
• When you create a study you want others to base their work on yours (we call that academic impact)
• By giving people the exact materials from your work you increase the chance that they will work on your
topic and base their next study on something of yours
• By making your project publicly available on OSF (or other sharing repository) you raise visibility of your
work
You don’t need to decide to share immediately. Probably you want your work to be private until the experiment is
complete and the paper is under review (or has been accepted even). That’s fine. You can create your project and keep
it private between you and your collaborators and then share it at a later date with the click of a button.
In many areas of science researchers are very careful about maintaining a full documented history of what their work; what the
• you can “preregister” your plans for the next experiment (so that people can’t later accuse you of “p-
hacking”).
• all your files are timestamped so you can prove to others that they were collected on/by a certain date,
removing any potential doubts about who collected data first
• your projects (and individual files) have a unique URL on OSF so you can cite/reference resources.
Additionally, “Registrations” (snapshots of your project at a fixed point in time) can be given a DOI, which
guarantees they will exist permanently
PsychoPy doesn’t currently have the facility to create user profiles or projects, so the first step is for you to do that
yourself.
Login to OSF
From the Projects menu you can log in to OSF with your username and password (this is never stored; see Security).
This user will stay logged in while the PsychoPy application remains open, or until you switch to a different user. If
you select “Remember me” then your login will be stored and you can log in again without typing your password each
time.
Projects that you have previously synchronised will try to use the stored details of the known users if possible and will
revert to username and password if not. Project files (defining the details of the project to sync) can be stored wherever
you choose; either in a private or shared location. User details are stored in the home space of the user currently logged
in to the operating system so are not shared with other users by default.
Security
When you log in with your username and password these details are not stored by PsychoPy in any way. They are sent
immediately to OSF using a secure (https) connection. OSF sends back an “authorisation token” identifying you as a
valid user with authorised credentials. This is stored locally for future log in attempts. By visiting your user profile
at http://OSF.io you can see what applications/computers have retrieved authorisation tokens for your account (and
revoke them if you choose).
The auth token is stored in plain text on your computer, but a malicious attacker with access to your computer could
only use this to log in to OSF.io. They could not use it to work out your password.
All files are sent by secure connection (https) to the server.
Having logged in to OSF from the projects menu you can search for projects to work with using the >Projects>Search
menu. This brings up a view that shows you all the current projects for the logged in user (owned or shared) and allows
you to search for public projects using tags and/or words in the title.
When you select a project, either in your own projects or in the search box, then the details for that project come up
on the right hand side, including a link to visit the project page on the web site.
On the web page for the project you can “fork” the project to your own username and then you can use PsychoPy to
download/update/sync files with that project, just as with any other project. The project retains information about its
history; the project from which it was forked gets its due credit.
Synchronizing projects
Having found your project online you can then synchronize a local folder with that set of files.
To do this the first time:
• select one of your projects in the project search window so the details appear on the right
• press the “Sync. . . ” button
• the Project Sync dialog box will appear
• set the location/name for a project file, which will store information about the state of files on the last sync
• set the location of the (root) folder locally that you want to be synchronised with the remote files
• press sync
The sync process and rules:
• on the first synchronisation all the files/folders will be merged: - the contents of the local folder will be
uploaded to the server and vice versa - files that have the same name but different contents (irrespective of
dates) will be flagged as conflicting (see below) and both copies kept
2.8. Projects 19
PsychoPy - Psychology software for Python, Release 3.2.0
• on subsequent sync operations a two-way sync will be performed taking into account the previous state. If
you delete the files locally and then sync then they will be deleted remotely as well
• files that are the same (according to an md5 checksum) and have the same location will be left as they are
• if a file is in conflict (it has been changed in both locations since the last sync) then both versions will be
kept and will be tagged as conflicting
• if a file is deleted in one location but is also changed in the other (since the last sync) then it will be
recreated on the side where it was deleted with the state of the side where is was not deleted.
Conflicting files will be labelled with their original filename plus the string “_CONFLICT<datetimestamp>” Deletion
conflicts will be labelled with their original filename plus the string “_DELETED”
Limitations
• PsychoPy does not directly allow you to create a new project yet, nor create a user account. To start with you
need to go to http://osf.io to create your username and/or project. You also cannot currently fork public projects
to your own user space yet from within PsychoPy. If you find a project that is useful to you then fork it from the
website (the link is available in the details panel of the project search window)
• The synchronisation routines are fairly basic right now and will not cater for all possible eventualities. For
example, if you create a file locally but your colleague created a folder with the same name and synced that with
the server, it isn’t clear what will (or should ideally) happen when you now sync your project. You should be
careful with this tool and always back up your data by an independent means in case damage to your files is
caused
• This functionality is new and may well have bugs. User beware!
One of the key requirements of experimental control software is that it has good temporal precision. PsychoPy aims to
be as precise as possible in this domain and can achieve excellent results depending on your experiment and hardware.
It also provides you with a precise log file of your experiment to allow you to check the precision with which things
occurred. Some general considerations are discussed here and there are links with Specific considerations for specific
designs.
Something that people seem to forget (not helped by the software manufacturers that keep talking about their sub-
millisecond precision) is that the monitor, keyboard and human participant DO NOT have anything like this sort of
precision. Your monitor updates every 10-20ms depending on frame rate. If you use a CRT screen then the top is
drawn before the bottom of the screen by several ms. If you use an LCD screen the whole screen can take around
20ms to switch from one image to the next. Your keyboard has a latency of 4-30ms, depending on brand and system.
So, yes, PsychoPy’s temporal precision is as good as most other equivalent applications, for instance the duration for
which stimuli are presented can be synchronised precisely to the frame, but the overall accuracy is likely to be severely
limited by your experimental hardware. To get very precise timing of responses etc., you need to use specialised
hardware like button boxes and you need to think carefully about the physics of your monitor.
Warning: The information about timing in PsychoPy assumes that your graphics card is capable of synchronising
with the monitor frame rate. For integrated Intel graphics chips (e.g. GMA 945) under Windows, this is not true
and the use of those chips is not recommended for serious experimental use as a result. Desktop systems can have
a moderate graphics card added for around £30 which will be vastly superior in performance.
For most behavioural/psychophysics studies timing is most simply controlled by setting some timer (e.g. a Clock())
to zero and waiting until it has reached a certain value before ending the trial. We might call this a ‘relative’ timing
method, because everything is timed from the start of the trial/epoch. In reality this will cause an overshoot of some
fraction of one screen refresh period (10ms, say). For imaging (EEG/MEG/fMRI) studies adding 10ms to each trial
repeatedly for 10 minutes will become a problem, however. After 100 stimulus presentations your stimulus and scanner
will be de-synchronised by 1 second.
There are two ways to get around this:
1. Time by frames If you are confident that you aren’t dropping frames then you could base your timing on frames
instead to avoid the problem.
2. Non-slip (global) clock timing The other way, which for imaging is probably the most sensible, is to arrange
timing based on a global clock rather than on a relative timing method. At the start of each trial you add the
(known) duration that the trial will last to a global timer and then wait until that timer reaches the necessary
value. To facilitate this, the PsychoPy Clock() was given a new add() method as of version 1.74.00 and a
CountdownTimer() was also added.
The non-slip method can only be used in cases where the trial is of a known duration at its start. It cannot, for example,
be used if the trial ends when the subject makes a response, as would occur in most behavioural studies.
The key sometimes is knowing if you are dropping frames. PsychoPy can help with that by keeping track of frame
durations. By default, frame time tracking is turned off because many people don’t need it, but it can be turned on any
time after Window creation:
Since there are often dropped frames just after the system is initialised, it makes sense to start off with a fixation period,
or a ready message and don’t start recording frame times until that has ended. Obviously if you aren’t refreshing the
window at some point (e.g. waiting for a key press with an unchanging screen) then you should turn off the recording
of frame times or it will give spurious results.
The simplest way to check if a frame has been dropped is to get PsychoPy to report a warning if it thinks a frame was
dropped:
win.recordFrameIntervals = True
# Set the log module to report warnings to the standard output window
# (default is errors only).
logging.console.setLevel(logging.WARNING)
While recording frame times, these are simply appended, every frame to win.frameIntervals (a list). You can simply
plot these at the end of your script using matplotlib:
Or you could save them to disk. A convenience function is provided for this:
win.saveFrameIntervals(fileName=None, clear=True)
The above will save the currently stored frame intervals (using the default filename, ‘lastFrameIntervals.log’) and then
clears the data. The saved file is
a simple text file.
At any time you can also retrieve the time of the /last/ frame flip using win.lastFrameT (the time is synchronised with
logging.defaultClock so it will match any logging commands that your script uses).
As of version 1.62 PsychoPy ‘blocks’ on the vertical blank interval meaning that, once Window.flip() has been called,
no code will be executed until that flip actually takes place. The timestamp for the above frame interval measure-
ments is taken immediately after the flip occurs. Run the timeByFrames demo in Coder to see the precision of these
measurements on your system. They should be within 1ms of your mean frame interval.
Note that Intel integrated graphics chips (e.g. GMA 945) under win32 do not sync to the screen at all and so blocking
on those machines is not possible.
There are many things that can affect the speed at which drawing is achieved on your computer. These include, but are
probably not limited to; your graphics card, CPU, operating system, running programs, stimuli, and your code itself.
Of these, the CPU and the OS appear to make rather little difference. To determine whether you are actually dropping
frames see Detecting dropped frames.
1. make sure you have a good graphics card. Avoid integrated graphics chips, especially Intel integrated chips and
especially on laptops (because on these you don’t get to change your mind so easily later). In particular, try to
make sure that your card supports OpenGL 2.0
2. shut down as many programs, including background processes. Although modern processors are fast and often have mult
1. run in full-screen mode (rather than simply filling the screen with your window). This way the OS doesn’t have
to spend time working out what application is currently getting keyboard/mouse events.
2. don’t generate your stimuli when you need them. Generate them in advance and then just modify them later
with the methods like setContrast(), setOrientation() etc. . .
3. calls to the following functions are comparatively slow; they require more CPU time than most other functions and then h
(a) GratingStim.setTexture()
(b) RadialStim.setTexture()
(c) TextStim.setText()
4. if you don’t have OpenGL 2.0 then calls to setContrast, setRGB and setOpacity will also be slow, because they
also make a call to setTexture(). If you have shader support then this call is not necessary and a large speed
increase will result.
5. avoid loops in your python code (use numpy arrays to do maths with lots of elements)
6. if you need to create a large number (e.g. greater than 10) similar stimuli, then try the ElementArrayStim
It isn’t clear that these actually make a difference, but they might).
1. disconnect the internet cable (to prevent programs performing auto-updates?)
2. on Macs you can actually shut down the Finder. It might help. See Alex Holcombe’s page here
3. use a single screen rather than two (probably there is some graphics card overhead in managing double the
number of pixels?)
This is an attempt to quantify the ability of PsychoPy draw without dropping frames on a variety of hardware/software.
The following tests were conducted using the script at the bottom of the page. Note, of course that the hardware fully
differs between the Mac and Linux/Windows systems below, but that both are standard off-the-shelf machines.
All of the below tests were conducted with ‘normal’ systems rather than anything that had been specifically optimised:
The simple answer is ‘yes’, given some additional hardware. The clocks that PsychoPy uses do have sub-millisecond
precision but your keyboard has a latency of 4-25ms depending on your platform and keyboard. You could buy a
response pad (e.g. a Cedrus Response Pad ) and use PsychoPy’s serial port commands to retrieve information about
responses and timing with a precision of around 1ms.
Before conducting your experiment in which effects might be on the order of 1 ms, do consider that;
• your screen has a temporal resolution of ~10 ms
• your visual system has a similar upper limit (or you would notice the flickering screen)
• human response times are typically in the range 200-400 ms and very variable
• USB keyboard latencies are variable, in the range 20-30ms
That said, PsychoPy does aim to give you as high a temporal precision as possible, and is likely not to be the limiting
factor of your experiment.
Computer monitors
Monitors have fixed refresh rates, typically 60 Hz for a flat-panel display, higher for a CRT (85-100 Hz are common,
up to 200 Hz is possible). For a refresh rate of 85 Hz there is a gap of 11.7 ms between frames and this limits the
timing of stimulus presentation. You cannot have your stimulus appear for 100 ms, for instance; on an 85Hz monitor
it can appear for either 94 ms (8 frames) or 105 ms (9 frames). There are further, less obvious, limitations however.
For ‘’CRT (cathode ray tube) screens’‘, the lines of pixels are drawn sequentially from the top to the bottom and once
the bottom line has been drawn the screen is finished and the line returns to the top (the Vertical Blank Interval, VBI).
Most of your frame interval is spent drawing the lines with 1-2ms being left for the VBI. This means that the pixels
at the bottom are drawn ‘’‘up to 10 ms later’‘’ than the pixels at the top of the screen. At what point are you going
to say your stimulus ‘appeared’ to the participant? For flat panel displays, or (or LCD projectors) your image will be
presented simultaneously all over the screen, but it takes up to 20 ms(!!) for your pixels to go all the way from black
to white (manufacturers of these panels quote values of 3 ms for the fastest panels, but they certainly don’t mean 3 ms
white-to-black, I assume they mean 3 ms half-life).
Fig. 1: Figure 1: photodiode trace at top of screen. The image above shows the luminance trace of a CRT recorded
by a fast photo-sensitive diode at the top of the screen when a stimulus is requested (shown by the square wave). The
square wave at the bottom is from a parallel port that indicates when the stimulus was flipped to the screen. Note that
on a CRT the screen at any point is actually black for the majority of the time and just briefly bright. The visual system
integrates over a large enough time window not to notice this. On the next frame after the stimulus ‘presentation time’
the luminance of the screen flash increased.
Warning: If you’re using a regular computer display, you have a hardware-limited temporal precision of 10 ms
irrespective of your response box or software clocks etc. . . and should bear that in mind when looking for effect
sizes of less than that.
Yes. Generally to do that you should time your stimulus (its onset/offset, its rate of change. . . ) using the frame refresh
rather than a clock. e.g. you should write your code to say ‘for 20 frames present this stimulus’ rather than ‘for
300ms present this stimulus’. Provided your graphics card is set to synchronise page-flips with the vertical blank, and
provided that you aren’t dropping frames the frame rate will always be absolutely constant.
Fig. 2: Figure 2: photodiode trace of the same large stimulus at bottom of screen. The image above shows comes from
exactly the same script as the above but the photodiode is positioned at the bottom of the screen. In this case, after the
stimulus is ‘requested’ the current frame (which is dark) finishes drawing and then, 10ms later than the above image,
the screen goes bright at the bottom.
2.10 Glossary
Adaptive staircase An experimental method whereby the choice of stimulus parameters is not pre-determined but
based on previous responses. For example, the difficulty of a task might be varied trial-to-trial based on the
participant’s responses. These are often used to find psychophysical thresholds. Contrast this with the method
of constants.
CPU Central Processing Unit is the main processor of your computer. This has a lot to do, so we try to minimise the
amount of processing that is needed, especially during a trial, when time is tight to get the stimulus presented
on every screen refresh.
CRT Cathode Ray Tube ‘Traditional’ computer monitor (rather than an LCD or plasma flat screen).
csv Comma-Separated Value files Type of basic text file with ‘comma-separated values’. This type of file can be
opened with most spreadsheet packages (e.g. MS Excel) for easy reading and manipulation.
GPU Graphics Processing Unit is the processor on your graphics card. The GPUs of modern computers are incred-
ibly powerful and it is by allowing the GPU to do a lot of the work of rendering that PsychoPy is able to achieve
good timing precision despite being written in an interpreted language
Method of constants An experimental method whereby the parameters controlling trials are predetermined at the
beginning of the experiment, rather than determined on each trial. For example, a stimulus may be presented for
3 pre-determined time periods (100, 200, 300ms) on different trials, and then repeated a number of times. The
order of presentation of the different conditions can be randomised or sequential (in a fixed order). Contrast this
method with the adaptive staircase.
VBI (Vertical Blank Interval, aka the Vertical Retrace, or Vertical Blank, VBL). The period in-between video frames
and can be used for synchronising purposes. On a CRT display the screen is black during the VBI and the display
beam is returned to the top of the display.
VBI blocking The setting whereby all functions are synced to the VBI. After a call to psychopy.visual.
Window.flip() nothing else occurs until the VBI has occurred. This is optimal and allows very precise
timing, because as soon as the flip has occurred a very precise time interval is known to have occurred.
VBI syncing (aka vsync) The setting whereby the video drawing commands are synced to the VBI. When psy-
chopy.visual.Window.flip() is called, the current back buffer (where drawing commands are being executed)
will be held and drawn on the next VBI. This does not necessarily entail VBI blocking (because the system may
return and continue executing commands) but does guarantee a fixed interval between frames being drawn.
xlsx Excel OpenXML file format. A spreadsheet data format developed by Microsoft but with an open (published)
format. This is the native file format for Excel (2007 or later) and can be opened by most modern spreadsheet
applications including OpenOffice (3.0+), google docs, Apple iWork 08.
THREE
INSTALLATION
3.1 Download
For the easiest installation download and install the Standalone package.
For all versions see the PsychoPy releases on github
See below for options if you don’t want to use the Standalone releases:
• pip install
• Linux
• Anaconda and Miniconda
• Developers install
Now that most python libraries can be install using pip it’s relatively easy to manually install PsychoPy and all it’s
dependencies to your own installation of Python.
The steps are to fetch Python. This method should work on any version of Python but we recommend Python 3.6 for
now.
You can install PsychoPy and its dependencies (more than you’ll strictly need) by:
If you prefer not to install all the dependencies then you could do:
3.2.2 Linux
There used to be neurodebian and Gentoo packages for PsychoPy but these are both badly outdated. We’d recommend
you do:
29
PsychoPy - Psychology software for Python, Release 3.2.0
wxPython>4.0 and doesn’t have universal wheels yet which is why you have to find and install the correct wheel for
your particular flavor of linux.
Building Python PsychToolbox bindings:
The PsychToolbox bindings for Python provide superior timing for sounds and keyboard responses. Unfortunately we
haven’t bee able to build universal wheels for these yet so you may have to build the pkg yourself. That should be
hard. You need the necessary dev libraries installed first:
and then you should be able to install using pip and it will build the extensions as needed:
pip install psychtoolbox
Ensure you have Python 3.6 and the latest version of pip installed:
python --version
pip --version
Next, follow instructions here to fork and fetch the latest version of the PsychoPy repository.
From the directory where you cloned the latest PsychoPy repository (i.e., where setup.py resides), run:
pip install -e .
This will install all PsychoPy dependencies to your default Python distribution (which should be Python 3.6). Next,
you should create a new PsychoPy shortcut linking your newly installed dependencies to your current version of
PsychoPy in the cloned repository. To do this, simply create a new .BAT file containing:
30 Chapter 3. Installation
PsychoPy - Psychology software for Python, Release 3.2.0
"C:\PATH_TO_PYTHON3.6\python.exe C:\PATH_TO_CLONED_PSYCHOPY_
˓→REPO\psychopy\app\psychopyApp.py"
Alternatively, you can run the psychopyApp.py from the command line:
python C:\PATH_TO_CLONED_PSYCHOPY_REPO\psychopy\app\psychopyApp
The minimum requirement for PsychoPy is a computer with a graphics card that supports OpenGL. Many newer
graphics cards will work well. Ideally the graphics card should support OpenGL version 2.0 or higher. Certain visual
functions run much faster if OpenGL 2.0 is available, and some require it (e.g. ElementArrayStim).
If you already have a computer, you can install PsychoPy and the Configuration Wizard will auto-detect the card and
drivers, and provide more information. It is inexpensive to upgrade most desktop computers to an adequate graphics
card. High-end graphics cards can be very expensive but are only needed for very intensive use.
Generally NVIDIA and ATI (AMD) graphics chips have higher performance than Intel graphics chips so try and get
one of those instead.
On Windows, if you get an error saying “pyglet.gl.ContextException: Unable to share contexts” then the most
likely cause is that you need OpenGL drivers and your built-in Windows only has limited support for OpenGL (or
possibly you have an Intel graphics card that isn’t very good). Try installing new drivers for your graphics card
from its manufacturer’s web page, not from Microsoft. For example, NVIDIA provides drivers for its cards here:
https://www.nvidia.com/Download/index.aspx
32 Chapter 3. Installation
CHAPTER
FOUR
GETTING STARTED
As an application, PsychoPy has two main views: the Builder view, and the Coder view. It also has a underlying API
that you can call directly.
1. Builder. You can generate a wide range of experiments easily from the Builder using its intuitive, graphical
user interface (GUI). This might be all you ever need to do. But you can always compile your experiment
into a python script for fine-tuning, and this is a quick way for experienced programmers to explore some of
PsychoPy’s libraries and conventions.
1. Coder. For those comfortable with programming, the Coder view provides a basic code editor with syntax
highlighting, code folding, and so on. Importantly, it has its own output window and Demo menu. The demos
illustrate how to do specific tasks or use specific features; they are not whole experiments. The Coder tutorials
should help get you going, and the API reference will give you the details.
The Builder and Coder views are the two main aspects of the PsychoPy application. If you’ve installed the StandAlone
version of PsychoPy on MS Windows then there should be an obvious link to PsychoPy in your > Start > Programs.
If you installed the StandAlone version on macOS then the application is where you put it (!). On these two platforms
you can open the Builder and Coder views from the View menu and the default view can be set from the preferences.
On Linux, you can start PsychoPy from a command line, or make a launch icon (which can depend on the desktop
and distro). If the PsychoPy app is started with flags —-coder (or -c), or —-builder (or -b), then the preferences will
be overridden and that view will be created as the app opens.
For experienced python programmers, it’s possible to use PsychoPy without ever opening the Builder or Coder. Install
the PsychoPy libraries and dependencies, and use your favorite IDE instead of the Coder.
33
PsychoPy - Psychology software for Python, Release 3.2.0
4.1 Builder
When learning a new computer language, the classic first program is simply to print or display “Hello world!”. Lets
do it.
4.1. Builder 35
PsychoPy - Psychology software for Python, Release 3.2.0
Assuming you typed in “Hello world!”, your screen should have looked like this (briefly):
If nothing happens or it looks wrong, recheck all the steps above; be sure to start from a new Builder view.
What if you wanted to display your cheerful greeting for longer than the default time?
• Click on your Text component (the existing one, not a new one).
• Edit the Stop duration (s) to be 3.2; times are in seconds.
• Click OK.
• And finally Run.
When running an experiment, you can quit by pressing the escape key (this can be configured or disabled). You can
quit PsychoPy from the File menu, or typing Ctrl-Q / Cmd-Q.
To do more, you can try things out and see what happens. You may want to consult the Builder documentation. Many
people find it helpful to explore the Builder demos, in part to see what is possible, and especially to see how different
things are done.
A good way to develop your own first PsychoPy experiment is to base it on the Builder demo that seems closest. Copy
it, and then adapt it step by step to become more and more like the program you have in mind. Being familiar with the
Builder demos can only help this process.
You could stop here, and just use the Builder for creating your experiments. It provides a lot of the key features that
people need to run a wide variety of studies. But it does have its limitations. When you want to have more complex
designs or features, you’ll want to investigate the Coder. As a segue to the Coder, lets start from the Builder, and see
how Builder programs work.
4.2 Builder-to-coder
Whenever you run a Builder experiment, PsychoPy will first translate it into python code, and then execute that code.
To get a better feel for what was happening “behind the scenes” in the Builder program above:
• In the Builder, load or recreate your “hello world” program.
• Instead of running the program, explicitly convert it into python: Type F5, or click the Compile icon:
4.2. Builder-to-coder 37
PsychoPy - Psychology software for Python, Release 3.2.0
The view will automatically switch to the Coder, and display the python code. If you then save and run this code, it
would look the same as running it directly from the Builder.
It is always possible to go from the Builder to python code in this way. You can then edit that code and run it as a
python program. However, you cannot go from code back to a Builder representation.
To switch quickly between Builder and Coder views, you can type Ctrl-L / Cmd-L.
4.3 Coder
Being able to inspect Builder-generated code is nice, but it’s possible to write code yourself, directly. With the Coder
and various libraries, you can do virtually anything that your computer is capable of doing, using a full-featured
modern programming language (python).
For variety, lets say hello to the Spanish-speaking world. PsychoPy knows Unicode (UTF-8).
If you are not in the Coder, switch to it now.
• Start a new code document: Ctrl-N / Cmd-N.
• Type (or copy & paste) the following:
win = visual.Window()
msg = visual.TextStim(win, text=u"\u00A1Hola mundo!")
msg.draw()
win.flip()
core.wait(1)
win.close()
You can do more complex things, such as type in each line from the Coder example directly into the Shell window,
doing so line by line:
and then:
and so on. This lets you try things out and see what happens line-by-line (which is how python goes through your
program).
4.3. Coder 39
PsychoPy - Psychology software for Python, Release 3.2.0
FIVE
BUILDER
Note: The Builder view is now (at version 1.75) fairly well-developed and should be able to construct a wide variety
of studies. But you should still check carefully that the stimuli and response collection are as expected.
Contents:
The Builder view of the PsychoPy application is designed to allow the rapid development of a wide range of experi-
ments for experimental psychology and cognitive neuroscience experiments.
41
PsychoPy - Psychology software for Python, Release 3.2.0
The Builder view comprises two main panels for viewing the experiment’s Routines (upper left) and another for
viewing the Flow (lower part of the window).
An experiment can have any number of Routines, describing the timing of stimuli, instructions and responses. These
are portrayed in a simple track-based view, similar to that of video-editing software, which allows stimuli to come on
go off repeatedly and to overlap with each other.
The way in which these Routines are combined and/or repeated is controlled by the Flow panel. All experiments
have exactly one Flow. This takes the form of a standard flowchart allowing a sequence of routines to occur one after
another, and for loops to be inserted around one or more of the Routines. The loop also controls variables that change
between repetitions, such as stimulus attributes.
For a simple reaction time experiment there might be 3 Routines, one that presents instructions and waits for a keypress,
one that controls the trial timing, and one that thanks the participant at the end. These could then be combined in the
Flow so that the instructions come first, followed by trial, followed by the thanks Routine, and a loop could be inserted
so that the Routine repeated 4 times for each of 6 stimulus intensities.
Many fMRI experiments present a sequence of stimuli in a block. For this there are multiple ways to create the experiment:
• We could create a single Routine that contained a number of stimuli and presented them sequentially,
followed by a long blank period to give the inter-epoch interval, and surround this single Routine by a loop
to control the blocks.
• Alternatively we could create a pair of Routines to allow presentation of a) a single stimulus (for 1 sec)
and b) a blank screen, for the prolonged period. With these Routines we could insert pair of loops, one to
repeat the stimulus Routine with different images, followed by the blank Routine, and another to surround
this whole set and control the blocks.
5.1.4 Demos
There are a couple of demos included with the package, that you can find in their own special menu. When you load
these the first thing to do is make sure the experiment settings specify the same resolution as your monitor, otherwise
the screen can appear off-centred and strangely scaled.
Stroop demo
This runs a digital demonstration of the Stroop effect1 . The experiment presents a series of coloured words written in
coloured ‘inks’. Subjects have to report the colour of the letters for each word, but find it harder to do so when the
letters are spelling out a different (incongruous) colour. Reaction times for the congruent trials (where letter colour
matches the written word) are faster than for the incongruent trials.
From this demo you should note:
• How to setup a trial list in a .csv or .xlsx file
• How to record key presses and reaction times (using the resp Component in trial Routine)
1 Stroop, J.R. (1935). “Studies of interference in serial verbal reactions”. Journal of Experimental Psychology 18: 643-662.
42 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
• How to change a stimulus parameter on each repetition of the loop. The text and rgb values of the word
Component are based on thisTrial, which represents a single iteration of the trials loop. They have been
set to change every repeat (don’t forget that step!)
• How to present instructions: just have a long-lasting TextStim and then force end of the Routine when a
key is pressed (but don’t bother storing the key press).
This is a mini psychophysics experiment, designed to find the contrast detection threshold of a gabor i.e. find the
contrast where the observer can just see the stimulus.
From this demo you should note:
• The opening dialog box requires the participant to enter the orientation of the stimulus, the required fields
here are determined by ‘Experiment Info’ in ‘Preferences’ which is a python dictionary. This information
is then entered into the stimulus parameters using ‘$expInfo[‘ori’]’
• The phase of the stimulus is set to change every frame and its value is determined by the value of tri-
alClock.getTime()*2. Every Routine has a clock associated with it that gets reset at the beginning of the
iteration through the Routine. There is also a globalClock that can be used in the same way. The phase
of a Patch Component ranges 0-1 (and wraps to that range if beyond it). The result in this case is that the
grating drifts at a rate of 2Hz.
• The contrast of the stimulus is determined using an adaptive staircase. The Staircase methods are different
to those used for a loop which uses predetermined values. An important thing to note is that you must
define the correct answer.
5.2 Routines
An experiment consists of one or more Routines. A Routine might specify the timing of events within a trial or the
presentation of instructions or feedback. Multiple Routines can then be combined in the Flow, which controls the
order in which these occur and the way in which they repeat.
To create a new Routine, use the Experiment menu. The display size of items within a routine can be adjusted (see the
View menu).
Within a Routine there are a number of components. These components determine the occurrence of a stimulus, or the
recording of a response. Any number of components can be added to a Routine. Each has its own line in the Routine
view that shows when the component starts and finishes in time, and these can overlap.
For now the time axis of the Routines panel is fixed, representing seconds (one line is one second). This will hopefully
change in the future so that units can also be number of frames (more precise) and can be scaled up or down to allow
very long or very short Routines to be viewed easily. That’s on the wishlist. . .
5.3 Flow
In the Flow panel a number of Routines can be combined to form an experiment. For instance, your study may have a
Routine that presented initial instructions and waited for a key to be pressed, followed by a Routine that presented one
trial which should be repeated 5 times with various different parameters set. All of this is achieved in the Flow panel.
You can adjust the display size of the Flow panel (see View menu).
5.2. Routines 43
PsychoPy - Psychology software for Python, Release 3.2.0
The Routines that the Flow will use should be generated first (although their contents can be added or altered at any
time). To insert a Routine into the Flow click the appropriate button in the left of the Flow panel or use the Experiment
menu. A dialog box will appear asking which of your Routines you wish to add. To select the location move the mouse
to the section of the flow where you wish to add it and click on the black disk.
5.3.2 Loops
Loops control the repetition of Routines and the choice of stimulus parameters for each. PsychoPy can generate the
next trial based on the method of constants or using an adaptive staircase. To insert a loop use the button on the left of
the Flow panel, or the item in the Experiment menu of the Builder. The start and end of a loop is set in the same way
as the location of a Routine (see above). Loops can encompass one or more Routines and other loops (i.e. they can be
nested).
As with components in Routines, the loop must be given a name, which must be unique and made up of only alpha-
numeric characters (underscores are allowed). I would normally use a plural name, since the loop represents multiple
repeats of something. For example, trials, blocks or epochs would be good names for your loops.
It is usually best to use trial information that is contained in an external file (.xlsx or .csv). When inserting a loop into
the flow you can browse to find the file you wish to use for this. An example of this kind of file can be found in the
Stroop demo (trialTypes.xlsx). The column names are turned into variables (in this case text, letterColor, corrAns and
congruent), these can be used to define parameters in the loop by putting a $ sign before them e.g. $text.
As the column names from the input file are used in this way they must have legal variable names i.e. they must be
unique, have no punctuation or spaces (underscores are ok) and must not start with a digit.
The parameter Is trials exists because some loops are not there to indicate trials per se but a set of stimuli within a
trial, or a set of blocks. In these cases we don’t want the data file to add an extra line with each pass around the loop.
This parameter can be unchecked to improve (hopefully) your data file outputs. [Added in v1.81.00]
Method of Constants
Selecting a loop type of random, sequential, or fullRandom will result in a method of constants experiment, whereby
the types of trials that can occur are predetermined. That is, the trials cannot vary depending on how the subject has
responded on a previous trial. In this case, a file must be provided that describes the parameters for the repeats. This
should be an Excel 2007 (xlsx) file or a comma-separated-value (csv ) file in which columns refer to parameters that are
needed to describe stimuli etc. and rows one for each type of trial. These can easily be generated from a spreadsheet
package like Excel. (Note that csv files can also be generated using most text editors, as long as they allow you to
save the file as “plain text”; other output formats will not work, including “rich text”.) The top row should be a row of
headers: text labels describing the contents of the respective columns. (Headers must also not include spaces or other
characters other than letters, numbers or underscores and must not be the same as any variable names used elsewhere
in your experiment.) For example, a file containing the following table:
ori text corrAns
0 aaa left
90 aaa left
0 bbb right
90 bbb right
would represent 4 different conditions (or trial types, one per line). The header line describes the parameters in the 3
columns: ori, text and corrAns. It’s really useful to include a column called corrAns that shows what the correct key
press is going to be for this trial (if there is one).
If the loop type is sequential then, on each iteration through the Routines, the next row will be selected in the order
listed in the file. Under a random order, the next row will be selected at random (without replacement); it can only be
44 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
selected again after all the other rows have also been selected. nReps determines how many repeats will be performed
(for all conditions). The total number of trials will be the number of conditions (= number of rows in the file, not
counting the header row) times the number of repetitions, nReps. With the fullRandom option, the entire list of trials
including repetitions is used in random order, allowing the same item to appear potentially many times in a row, and
to repeat without necessarily having done all of the other trials. For example, with 3 repetitions, a file of trial types
like this:
letter
a
b
c
could result in the following possible sequences. sequential could only ever give one sequence with this order: [a b c
a b c a b c]. random will give one of 216 different orders (= 3! * 3! * 3! = nReps * (nTrials!) ), for example: [b a c a
b c c a b]. Here the letters are effectively in sets of (abc) (abc) (abc), and randomization is only done within each set,
ensuring (for example) that there are at least two a’s before the subject sees a 3rd b. Finally, fullRandom will return
one of 362,880 different orders (= 9! = (nReps * nTrials)! ), such as [b b c a a c c a b], which random never would.
There are no longer mini-blocks or “sets of trials” within the longer run. This means that, by chance, it would also be
possible to get a very un-random-looking sequence like [a a a b b b c c c].
It is possible to achieve any sequence you like, subject to any constraints that are logically possible. To do so, in the
file you specify every trial in the desired order, and the for the loop select sequential order and nReps=1.
In the standard Method of Constants you would use all the rows/conditions within your conditions file. However there
are often times when you want to select a subset of your trials before randomising and repeating.
The parameter Select rows allows this. You can specify which rows you want to use by inserting values here:
• 0,2,5 gives the 1st, 3rd and 5th entry of a list - Python starts with index zero)
• random(4)*10 gives 4 indices from 0 to 10 (so selects 4 out of 11 conditions)
• 5:10 selects the 6th to 9th rows
• $myIndices uses a variable that you’ve already created
Note in the last case that 5:8 isn’t valid syntax for a variable so you cannot do:
myIndices = 5:8
Note that PsychoPy uses Python’s built-in slicing syntax (where the first index is zero and the last entry of a slice
doesn’t get included). You might want to check the outputs of your selection in the Python shell (bottom of the Coder
view) like this:
Check that the conditions you wanted to select are the ones you intended!
5.3. Flow 45
PsychoPy - Psychology software for Python, Release 3.2.0
Staircase methods
The loop type staircase allows the implementation of adaptive methods. That is, aspects of a trial can depend on (or
“adapt to”) how a subject has responded earlier in the study. This could be, for example, simple up-down staircases
where an intensity value is varied trial-by-trial according to certain parameters, or a stop-signal paradigm to assess
impulsivity. For this type of loop a ‘correct answer’ must be provided from something like a Keyboard Component.
Various parameters for the staircase can be set to govern how many trials will be conducted and how many correct or
incorrect answers make the staircase go up or down.
The parameters from your loops are accessible to any component enclosed within that loop. The simplest (and default)
way to address these variables is simply to call them by the name of the parameter, prepended with $ to indicate that
this is the name of a variable. For example, if your Flow contains a loop with the above table as its input trial types
file then you could give one of your stimuli an orientation $ori which would depend on the current trial type being
presented. Example scenarios:
1. You want to loop randomly over some conditions in a loop called trials. Your conditions are stored in a csv file
with headings ‘ori’, ‘text’, ‘corrAns’ which you provide to this loop. You can then access these values from any
component using $ori, $text, and $corrAns
2. You create a random loop called blocks and give it an Excel file with a single column called movieName listing
filenames to be played. On each repeat you can access this with $movieName
3. You create a staircase loop called stairs. On each trial you can access the current value in the staircase with
$thisStair
Note: When you set a component to use a parameter that will change (e.g on each repeat through the loop) you should
remember to change the component parameter from ‘constant‘ to ‘set every repeat‘ or ‘set every frame‘ or it
won’t have any effect!
The downside of the above approach is that the names of trial parameters must be different between every loop, as
well as not matching any of the predefined names in python, numpy and PsychoPy. For example, the stimulus called
movie cannot use a parameter also called movie (so you need to call it movieName). An alternative method can be used
without these restrictions. If you set the Builder preference unclutteredNamespace to True you can then access the
variables by referring to parameter as an attribute of the singular name of the loop prepended with this. For example,
if you have a loop called trials which has the above file attached to it, then you can access the stimulus ori with
$thisTrial.ori. If you have a loop called blocks you could use $thisBlock.corrAns.
Now, although the name of the loop must still be valid and unique, the names of the parameters of the file do not have
the same requirements (they must still not contain spaces or punctuation characters).
Many people ask how to create blocks of trials, how to randomise them, and how to counterbalance their order. This
isn’t all that hard, although it does require a bit of thinking!
46 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
5.4.1 Blocking
The key thing to understand is that you should not create different Routines for different trials in your blocks (if at
all possible). Try to define your trials with a single Routine. For instance, let’s imagine you’re trying to create an
experiment that presents a block of pictures of houses or a block of faces. It would be tempting to create a Routine
called presentFace and another called presentHouse but you actually want just one called presentStim (or just trial)
and then set that to differ as needed across different stimuli.
This example is included in the Builder demos, as of PsychoPy 1.85.
You can add a loop around your trials, as normal, to control the trials within a block (e.g. randomly selecting a number
of images) but then you will have a second loop around this to define how the blocks change. You can also have
additional Routines like something to inform participants that the next block is about to start.
So, how do you get the block to change from one set of images to another? To do this create three spreadsheets, one
for each block, determining the filenames within that block, and then another to control which block is being used:
• facesBlock.xlsx
• housesBlock.xlsx
• chooseBlocks.xlsx
Setting up the basic conditions. The facesBlock, and housesBlock, files look more like your usual conditions files.
In this example we can just use a variable stimFile with values like stims/face01.jpg and stims/face02.jpg while the
housesBlock file has stims/house01.jpg and stims/house02.jpg. In a real experiment you’d probably also have response
keys andsuchlike as well.
So, how to switch between these files? That’s the trick and that’s what the other file is used for. In the choose-
Blocks.xlsx file you set up a variable called something like condsFile and that has values of facesBlock.xlsx and hous-
esBlock.xlsx. In the outer (blocks) loop you set up the conditions file to be chooseBlocks.xlsx which creates a variable
condsFile. Then, in the inner (trials) loop you set the conditions file not to be any file directly but simply $condsFile.
Now, when PsychoPy starts this loop it will find the current value of condsFile and insert the appropriate thing, which
will be the name of an conditions file and we’re away!
Your chooseBlocks.xlsx can contain other values as well, such as useful identifiers. In this demo you could add a value
readyText that says “Ready for some houses”, and “Ready for some faces” and use this in your get ready Routine.
Variables that are defined in the loops are available anywhere within those. In this case, of course, the values in the
outer loop are changing less often than the values in the inner loop.
5.4.2 Counterbalancing
Counterbalancing is simply an extension of blocking. Usually with a block design you would set the order of blocks
to be set randomly. In the example above the blocks are set to occur randomly, but note that they could also be set to
occur more than once if you want 2 repeats of the 2 blocks for a total of 4.
In a counterbalanced design you want to control the order explicitly and you want to provide a different order for
different groups of participants. Maybe group A always gets faces first, then houses, and group B always gets houses
first, then faces.
Now we need to create further conditions files, to specify the exact orders we want, so we’d have something like
groupA.xlsx:
condsFile
housesBlock.xlsx
facesBlock.xlsx
and groupB.xlsx:
condsFile
facesBlock.xlsx
housesBlock.xlsx
In this case the last part of the puzzle is how to assign participants to groups. For this you could write a Code
Component that would generate a variable for you (if. . . ..: groupFile = “groupB.xlsx”) but the easiest thing is probably
that you, the experimenter, chooses this outside of PsychoPy and simply tells PsychoPy which group to assign to each
participant.
The easiest way to do that is to add the field group to the initial dialog box, maybe with the default value of A. If
you set the conditions file for the blocks loop to be ` $"group"+expInfo['group']+".xlsx" ` then this
variable will be used from the dialog box to create the filename for the blocks file and you.
Also, if you’re doing this, remember to set the blocks loop to use “sequential” rather than “random” sorting. Your
inner loop still probably wants to be random (to shuffle the image order within a block) but your outer loop should
now be using exactly the order that you specified in the blocks condition file.
5.5 Components
Routines in the Builder contain any number of components, which typically define the parameters of a stimulus or an
input/output device.
The following components are available, as at version 1.65, but further components will be added in the future includ-
ing Parallel/Serial ports and other visual stimuli (e.g. GeometricStim).
This component can be used to filter the visual display, as if the subject is looking at it through an opening. Currently
only circular apertures are supported. Moreover, only one aperture is enabled at a time. You can’t “double up”: a
second aperture takes precedence.
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start [float or integer] The time that the aperture should start having its effect. See Defining the onset/duration of
components for details.
stop : When the aperture stops having its effect. See Defining the onset/duration of components for details.
pos [[X,Y]] The position of the centre of the aperture, in the units specified by the stimulus or window.
size [integer] The size controls how big the aperture will be, in pixels, default = 120
units [pix] What units to use (currently only pix).
48 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
See also:
API reference for Aperture
Properties
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start [int, float] The time that the stimulus should first appear.
Stop [int, float] Governs the duration for which the stimulus is presented.
line settings: Control color and width of the line. The line width is always specified in pixels - it does not honour the
units parameter.
opacity : Vary the transparency, from 0.0 = invisible to 1.0 = opaque
See also:
API reference for Brush
This component allows you to connect to a Cedrus Button Box to collect key presses.
Note that there is a limitation currently that a button box can only be used in a single Routine. Otherwise PsychoPy
tries to initialise it twice which raises an error. As a workaround, you need to insert the start-routine and each-frame
code from the button box into a code component for a second routine.
Properties
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start : The time that the button box is first read. See Defining the onset/duration of components for details.
Stop : Governs the duration for which the button box is first read. See Defining the onset/duration of components for
details.
Force end of Routine [true/false] If this is checked, the first response will end the routine.
Allowed keys [None, or an integer, list, or tuple of integers 0-7] This field lets you specify which buttons (None, or
some or all of 0 through 7) to listen to.
Store [(choice of: first, last, all, nothing)] Which button events to save in the data file. Events and the response times
are saved, with RT being recorded by the button box (not by PsychoPy).
Store correct [true/false] If selected, a correctness value will be saved in the data file, based on a match with the given
correct answer.
Correct answer: button The correct answer, used by Store correct.
Discard previous [true/false] If selected, any previous responses will be ignored (typically this is what you want).
5.5. Components 49
PsychoPy - Psychology software for Python, Release 3.2.0
Advanced
Device number: integer This is only needed if you have multiple Cedrus devices connected and you need to specify
which to use.
Use box timer [true/false] Set this to True to use the button box timer for timing information (may give better time
resolution)
See also:
API reference for iolab
The Code Component can be used to insert short pieces of python code into your experiments. This might be create a
variable that you want for another Component, to manipulate images before displaying them, to interact with hardware
for which there isn’t yet a pre-packaged component in PsychoPy (e.g. writing code to interact with the serial/parallel
ports). See code uses below.
Be aware that the code for each of the components in your Routine are executed in the order they appear on the Routine
display (from top to bottom). If you want your Code Component to alter a variable to be used by another component
immediately, then it needs to be above that component in the view. You may want the code not to take effect until next
frame however, in which case put it at the bottom of the Routine. You can move Components up and down the Routine
by right-clicking on their icons.
Within your code you can use other variables and modules from the script. For example, all routines have a stopwatch-style Clo
currentT = trialClock.getTime()
To see what other variables you might want to use, and also what terms you need to avoid in your chunks of code,
compile your script before inserting the code object and take a look at the contents of that script.
Note that this page is concerned with Code Components specifically, and not all cases in which you might use python
syntax within the Builder. It is also possible to put code into a non-code input field (such as the duration or text of
a Text Component). The syntax there is slightly different (requiring a $ to trigger the special handling, or \$ to avoid
triggering special handling). The syntax to use within a Code Component is always regular python syntax.
Parameters
The parameters of the Code Component simply specify the code that will get executed at 5 different points within the
experiment. You can use as many or as few of these as you need for any Code Component:
Begin Experiment: Things that need to be done just once, like importing a supporting module, initialis-
ing a variable for later use.
Begin Routine: Certain things might need to be done just once at the start of a Routine e.g. at the
beginning of each trial you might decide which side a stimulus will appear
Each Frame: Things that need to updated constantly, throughout the experiment. Note that these will
be executed exactly once per video frame (on the order of every 10ms), to give dynamic displays.
Static displays do not need to be updated every frame.
End Routine: At the end of the Routine (e.g. the trial) you may need to do additional things, like check-
ing if the participant got the right answer
End Experiment: Use this for things like saving data to disk, presenting a graph(?), or resetting hardware
to its original state.
50 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
There are many ways to do this, but you could add the following to the Begin Routine section of a Code Component at
the top of your Routine. Then set your stimulus position to be $targetPos and set the correct answer field of a Keyboard
Component to be $corrAns (set both of these to update on every repeat of the Routine).:
if random()>0.5:
targetPos=[-2.0, 0.0]#on the left
corrAns='left'
else:
targetPos=[+2.0, 0.0]#on the right
corrAns='right'
As with the above there are many different ways to create noise, but a simple method would be to add the following to
the Begin Routine section of a Code Component at the top of your Routine. Then set the image as $noiseTexture.:
Make a new routine, and place it at the end of the flow (i.e., the end of the experiment). Create a Code Component
with this in the Begin Experiment field:
expClock = core.Clock()
msg = "Thanks for participating - that took %.2f minutes in total" %(expClock.
˓→getTime()/60.0)
Next, add a Text Component to the routine, and set the text to $msg. Be sure that the text field’s updating is set to “Set
every repeat” (and not “Constant”).
Code components can also be used to control the end of a loop. See examples in Recipes:builderTerminateLoops.
The most complete way to find this out for your particular script is to compile it and take a look at what’s in there.
Below are some options that appear in nearly all scripts. Remember that those variables are Python objects and can
have attributes of their own. You can find out about those attributes using:
dir(myObject)
5.5. Components 51
PsychoPy - Psychology software for Python, Release 3.2.0
• expInfo: This is a Python Dictionary containing the information from the starting dialog box. e.g. That generally
includes the ‘participant’ identifier. You can access that in your experiment using exp[‘participant’]
• t: the current time (in seconds) measured from the start of this Routine
• frameN: the number of /completed/ frames since the start of the Routine (=0 in the first frame)
• win: the Window that the experiment is using
Your own variables:
• anything you’ve created in a Code Component is available for the rest of the script. (Sometimes you might need
to define it at the beginning of the experiment, so that it will be available throughout.)
• the name of any other stimulus or the parameters from your file also exist as variables.
• most Components have a status attribute, which is useful to determine whether a stimulus has NOT_STARTED,
STARTED or FINISHED. For example, to play a tone at the end of a Movie Component (of unknown duration)
you could set start of your tone to have the ‘condition’
myMovieName.status==FINISHED
Selected contents of the numpy library and numpy.random are imported by default. The entire numpy library is
imported as np, so you can use a several hundred maths functions by prepending things with ‘np.’:
• random() , randint() , normal() , shuffle() options for creating arrays of random numbers.
• sin(), cos(), tan(), and pi: For geometry and trig. By default angles are in radians, if you want the cosine of
an angle specified in degrees use cos(angle*180/pi), or use numpy’s conversion functions, rad2deg(angle) and
deg2rad(angle).
• linspace(): Create an array of linearly spaced values.
• log(), log10(): The natural and base-10 log functions, respectively. (It is a lowercase-L in log).
• sum(), len(): For the sum and length of a list or array. To find an average, it is better to use average() (due to the
potential for integer division issues with sum()/len() ).
• average(), sqrt(), std(): For average (mean), square root, and standard deviation, respectively. Note: Be sure
that the numpy standard deviation formula is the one you want!
• np.______: Many math-related features are available through the complete numpy libraries, which are available
within psychopy builder scripts as ‘np.’. For example, you could use np.hanning(3) or np.random.poisson(10,
10) in a code component.
The Dots Component allows you to present a Random Dot Kinematogram (RDK) to the participant of your study.
These are fields of dots that drift in different directions and subjects are typically required to identify the ‘global
motion’ of the field.
There are many ways to define the motion of the signal and noise dots. In PsychoPy the way the dots are configured
follows Scase, Braddick & Raymond (1996). Although Scase et al (1996) show that the choice of algorithm for your
dots actually makes relatively little difference there are some potential gotchas. Think carefully about whether each
of these will affect your particular case:
• limited dot lifetimes: as your dots drift in one direction they go off the edge of the stimulus and are replaced
randomly in the stimulus field. This could lead to a higher density of dots in the direction of motion providing
subjects with an alternative cue to direction. Keeping dot lives relatively short prevents this.
52 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
• noiseDots=’direction’: some groups have used noise dots that appear in a random location on each frame
(noiseDots=’location’). This has the disadvantage that the noise dots not only have a random direction but also
a random speed (whereas signal dots have a constant speed and constant direction)
• signalDots=’same’: on each frame the dots constituting the signal could be the same as on the previous frame or
different. If ‘different’, participants could follow a single dot for a long time and calculate its average direction
of motion to get the ‘global’ direction, because the dots would sometimes take a random direction and sometimes
take the signal direction.
As a result of these, the defaults for PsychoPy are to have signalDots that are from a ‘different’ population, noise dots
that have random ‘direction’ and a dot life of 3 frames.
Parameters
name : Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers
and underscores (no punctuation marks or spaces).
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for
details.
units [None, ‘norm’, ‘cm’, ‘deg’ or ‘pix’] If None then the current units of the Window will be used. See Units for
the window and stimuli for explanation of other options.
nDots [int] number of dots to be generated
fieldPos [(x,y) or [x,y]] specifying the location of the centre of the stimulus.
fieldSize [a single value, specifying the diameter of the field] Sizes can be negative and can extend beyond the window.
fieldShape : Defines the shape of the field in which the dots appear. For a circular field the nDots represents the
average number of dots per frame, but on each frame this may vary a little.
dotSize Always specified in pixels
dotLife [int] Number of frames each dot lives for (-1=infinite)
dir [float (degrees)] Direction of the signal dots
speed [float] Speed of the dots (in units per frame)
signalDots : If ‘same’ then the signal and noise dots are constant. If different then the choice of which is signal and
which is noise gets randomised on each frame. This corresponds to Scase et al’s (1996) categories of RDK.
noiseDots [‘direction’, ‘position’ or ‘walk’] Determines the behaviour of the noise dots, taken directly from Scase
et al’s (1996) categories. For ‘position’, noise dots take a random position every frame. For ‘direction’ noise
dots follow a random, but constant direction. For ‘walk’ noise dots vary their direction every frame, but keep a
constant speed.
See also:
API reference for DotStim
The Form component enables Psychopy to be used as a questionnaire tool, where participants can be presented with a
series of questions requiring responses. Form items, defined as questions and response pairs, are presented simultane-
ously onscreen with a scrollable viewing window.
5.5. Components 53
PsychoPy - Psychology software for Python, Release 3.2.0
Properties
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start [int, float] The time that the stimulus should first appear.
Stop [int, float] Governs the duration for which the stimulus is presented.
Items [List of dicts or csv / xlsx file] A list of dicts or csv file should have the following key, value pairs / column
headers: :index: The item index as a number :questionText: item question string :questionWidth: question width
between 0:1 :type: type of rating e.g., ‘radio’, ‘rating’, ‘slider’ :responseWidth: question width between 0:1
:options: A sequence of tick labels for options e.g., yes, no :layout: Response object layout e.g., ‘horiz’ or ‘vert’
Missing column headers will be replaced by default entries. The default entries are: :index: 0 (increments for
each item) :questionText: Default question :questionWidth: 0.7 :type: rating :responseWidth: 0.3 :options: Yes,
No :layout: horiz
Text height [float] Text height of the Form elements (i.e., question and response text).
Size [[X,Y]] Size of the stimulus, to be specified in ‘height’ units.
Pos [[X,Y]] The position of the centre of the stimulus, to be specified in ‘height’ units.
Item padding [float] Space or padding between Form elements (i.e., question and response text), to be specified in
‘height’ units.
Data format [menu] Choose whether to store items data by column or row in your datafile.
randomize [bool] Randomize order of Form elements
See also:
API reference for Form
The Grating stimulus allows a texture to be wrapped/cycled in 2 dimensions, optionally in conjunction with a mask
(e.g. Gaussian window). The texture can be a bitmap image from a variety of standard file formats, or a synthetic
texture such as a sinusoidal grating. The mask can also be derived from either an image, or mathematical form such
as a Gaussian.
When using gratings, if you want to use the spatial frequency setting then create just a single cycle of your texture and
allow PsychoPy to handle the repetition of that texture (do not create the cycles you’re expecting within the texture).
Gratings can have their position, orientation, size and other settings manipulated on a frame-by-frame basis. There is
a performance advantage (in terms of milliseconds) to using images which are square and powers of two (32, 64, 128,
etc.), however this is slight and would not be noticed in the majority of experiments.
Parameters
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
Stop : Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for
details.
Color : See Color spaces
54 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Advanced Settings
Texture: a filename, a standard name (sin, sqr) or a variable giving a numpy array This specifies the image that
will be used as the texture for the visual patch. The image can be repeated on the patch (in either x or y or both)
by setting the spatial frequency to be high (or can be stretched so that only a subset of the image appears by
setting the spatial frequency to be low). Filenames can be relative or absolute paths and can refer to most image
formats (e.g. tif, jpg, bmp, png, etc.). If this is set to none, the patch will be a flat colour.
Mask [a filename, a standard name (gauss, circle, raisedCos) or a numpy array of dimensions NxNx1] The mask can
define the shape (e.g. circle will make the patch circular) or something which overlays the patch e.g. noise.
Interpolate : If linear is selected then linear interpolation will be applied when the image is rescaled to the appropriate
size for the screen. Nearest will use a nearest-neighbour rule.
Phase [single float or pair of values [X,Y]] The position of the texture within the mask, in both X and Y. If a single
value is given it will be applied to both dimensions. The phase has units of cycles (rather than degrees or
radians), wrapping at 1. As a result, setting the phase to 0,1,2. . . is equivalent, causing the texture to be centered
on the mask. A phase of 0.25 will cause the image to shift by half a cycle (equivalent to pi radians). The
advantage of this is that is if you set the phase according to time it is automatically in Hz.
Spatial Frequency [[SFx, SFy] or a single value (applied to x and y)] The spatial frequency of the texture on the
patch. The units are dependent on the specified units for the stimulus/window; if the units are deg then the SF
units will be cycles/deg, if units are norm then the SF units will be cycles per stimulus. If this is set to none then
only one cycle will be displayed.
Texture Resolution [an integer (power of two)] Defines the size of the resolution of the texture for standard textures
such as sin, sqr etc. For most cases a value of 256 pixels will suffice, but if stimuli are going to be very small
then a lower resolution will use less memory.
See also:
API reference for GratingStim
The Image stimulus allows an image to be presented, which can be a bitmap image from a variety of standard file
formats, with an optional transparency mask that can effectively control the shape of the image. The mask can also be
derived from an image file, or mathematical form such as a Gaussian.
It is a really good idea to get your image in roughly the size (in pixels) that it will appear on screen to save
memory. If you leave the resolution at 12 megapixel camera, as taken from your camera, but then present it on
a standard screen at 1680x1050 (=1.6 megapixels) then PsychoPy and your graphics card have to do an awful
lot of unnecessary work. There is a performance advantage (in terms of milliseconds) to using images which are
square and powers of two (32, 64, 128, etc.), but this is slight and would not be noticed in the majority of experiments.
5.5. Components 55
PsychoPy - Psychology software for Python, Release 3.2.0
Images can have their position, orientation, size and other settings manipulated on a frame-by-frame basis.
Parameters
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
Stop : Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for
details.
Image [a filename or a standard name (sin, sqr)] Filenames can be relative or absolute paths and can refer to most
image formats (e.g. tif, jpg, bmp, png, etc.). If this is set to none, the patch will be a flat colour.
Position [[X,Y]] The position of the centre of the stimulus, in the units specified by the stimulus or window
Size [[sizex, sizey] or a single value (applied to x and y)] The size of the stimulus in the given units of the stimu-
lus/window. If the mask is a Gaussian then the size refers to width at 3 standard deviations on either side of the
mean (i.e. sd=size/6) Set this to be blank to get the image in its native size.
Orientation [degrees] The orientation of the entire patch (texture and mask) in degrees.
Opacity [value from 0 to 1] If opacity is reduced then the underlying images/stimuli will show through
Units [deg, cm, pix, norm, or inherit from window] See Units for the window and stimuli
Advanced Settings
Color [Colors can be applied to luminance-only images (not to rgb images)] See Color spaces
Color space [to be used if a color is supplied] See Color spaces
Mask [a filename, a standard name (gauss, circle, raisedCos) or a numpy array of dimensions NxNx1] The mask can
define the shape (e.g. circle will make the patch circular) or something which overlays the patch e.g. noise.
Interpolate : If linear is selected then linear interpolation will be applied when the image is rescaled to the appropriate
size for the screen. Nearest will use a nearest-neighbour rule.
Texture Resolution: This is only needed if you use a synthetic texture (e.g. sinusoidal grating) as the image.
See also:
API reference for ImageStim
A button box is a hardware device that is used to collect participant responses with high temporal precision, ideally
with true ms accuracy.
Both the response (which button was pressed) and time taken to make it are returned. The time taken is determined by
a clock on the device itself. This is what makes it capable (in theory) of high precision timing.
Check the log file to see how long it takes for PsychoPy to reset the button box’s internal clock. If this takes a while,
then the RT timing values are not likely to be high precision. It might be possible for you to obtain a correction factor
for your computer + button box set up, if the timing delay is highly reliable.
The ioLabs button box also has a built-in voice-key, but PsychoPy does not have an interface for it. Use a microphone
component instead.
56 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Properties
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : The duration for which the stimulus is presented. See Defining the onset/duration of components for details.
Force end of Routine [checkbox] If this is checked, the first response will end the routine.
Active buttons [None, or an integer, list, or tuple of integers 0-7] The ioLabs box lets you specify a set of active
buttons. Responses on non-active buttons are ignored by the box, and never sent to PsychoPy. This field lets
you specify which buttons (None, or some or all of 0 through 7).
Lights : If selected, the lights above the active buttons will be turned on.
Using code components, it is possible to turn on and off specific lights within a trial. See the API for iolab.
Store [(choice of: first, last, all, nothing)] Which button events to save in the data file. Events and the response times
are saved, with RT being recorded by the button box (not by PsychoPy).
Store correct [checkbox] If selected, a correctness value will be saved in the data file, based on a match with the
given correct answer.
Correct answer: button The correct answer, used by Store correct.
Discard previous [checkbox] If selected, any previous responses will be ignored (typically this is what you want).
Lights off [checkbox] If selected, all lights will be turned off at the end of each routine.
See also:
API reference for iolab
The JoyButtons component can be used to collect gamepad/joystick button responses from a participant.
By not storing the button number pressed and checking the forceEndTrial box it can be used simply to end a Routine
If no gamepad/joystic is installed the keyboard can be used to simulate button presses by pressing ‘ctrl’ + ‘alt’ +
digit(0-9).
Parameters
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start [float or integer] The time that joyButtons should first get checked. See Defining the onset/duration of compo-
nents for details.
Stop [float or integer] When joyButtons should no longer get checked. See Defining the onset/duration of components
for details.
Force end routine : If this box is checked then the Routine will end as soon as one of the allowed buttons is pressed.
Allowed buttons : A list of allowed buttons can be specified here, e.g. [0,1,2,3], or the name of a variable holding
such a list. If this box is left blank then any button that is pressed will be read. Only allowed buttons count as
having been pressed; any other button will not be stored and will not force the end of the Routine. Note that
button numbers (0, 1, 2, 3, . . . ), should be separated by commas.
5.5. Components 57
PsychoPy - Psychology software for Python, Release 3.2.0
Store : Which button press, if any, should be stored; the first to be pressed, the last to be pressed or all that have been
pressed. If the button press is to force the end of the trial then this setting is unlikely to be necessary, unless two
buttons happen to be pressed in the same video frame. The response time will also be stored if a button press is
recorded. This time will be taken from the start of joyButtons checking (e.g. if the joyButtons was initiated 2
seconds into the trial and a button was pressed 3.2s into the trials the response time will be recorded as 1.2s).
Store correct : Check this box if you wish to store whether or not this button press was correct. If so then fill in
the next box that defines what would constitute a correct answer e.g. 1 or $corrAns (note this should not be
in inverted commas). This is given as Python code that should return True (1) or False (0). Often this correct
answer will be defined in the settings of the Loops.
Advanced Settings
Device number [integer] Which gamepad/joystick device number to use. The first device found is numbered 0.
The Joystick component can be used to collect responses from a participant. The coordinates of the joystick location
are given in the same coordinates as the Window, with (0,0) in the centre. Coordinates are correctly scaled for ‘norm’
and ‘height’ units. User defined scaling can be set by updating joystick.xFactor and joystick.yFactor to the desired val-
ues. Joystick.device.getX() and joystick.device.getY() always return ‘norm’ units. Joystick.getX() and joystick.getY()
are scaled by xFactor or yFactor
No cursor is drawn to represent the joystick current position, but this is easily provided by updating the position
of a partially transparent ‘.png’ immage on each screen frame using the joystick coordinates: joystick.getX() and
joystick.getY(). To ensure that the cursor image is drawon on top of other images it should be the last image in the
trial.
Joystick Emulation If no joystick device is found, the mouse and keyboard are used to emulate a joystick device.
Joystick position corresponds to mouse position and mouse buttons correspond to joystick buttons (0,1,2). Other
buttons can be simulated with key chords: ‘ctrl’ + ‘alt’ + digit(0..9).
Scenarios
This can be used in various ways. Here are some scenarios (email the list if you have other uses for your joystick):
Use the joystick to record the location of a button press
Use the joystick to control stimulus parameters Imagine you want to use your joystick to make your ‘patch’_ big-
ger or smaller and save the final size. Call your joystickComponent ‘joystick’, set it to save its state at the end
of the trial and set the button press to end the Routine. Then for the size setting of your Patch stimulus insert
$joystick.getX() to use the x position of the joystick to control the size or $joystick.getY() to use the y position.
Tracking the entire path of the joystick during a period
Parameters Basic
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start : The time that the joystick should first be checked. See Defining the onset/duration of components for details.
stop : When the joystick is no longer checked. See Defining the onset/duration of components for details.
58 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Force End Routine on Press If this box is checked then the Routine will end as soon as one of the joystick buttons
is pressed.
Save Joystick State How often do you need to save the state of the joystick? Every time the subject presses a joystick
button, at the end of the trial, or every single frame? Note that the text output for cases where you store the
joystick data repeatedly per trial (e.g. every press or every frame) is likely to be very hard to interpret, so you
may then need to analyse your data using the psydat file (with python code) instead. Hopefully in future releases
the output of the text file will be improved.
Time Relative To Whenever the joystick state is saved (e.g. on button press or at end of trial) a time is saved too. Do
you want this time to be relative to start of the Routine, or the start of the whole experiment?
Clickable Stimulus A comma-separated list of your stimulus names that ‘can be “clicked” by the participant. e.g.
target, foil.
Store params for clicked The params (e.g. name, text), for which you want to store the current value, for the stimulus
that was “clicked” by the joystick. Make sure that all the clickable objects have all these params.
Parameters Advanced
Device Number If you have multiple joystick/gamepad devices which one do you want (0, 1, 2, . . . ).
Allowed Buttons Joystick buttons accepted for input (blank for any) numbers separated by ‘commas’.
See also:
API reference for Joystick
Parameters
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start [float or integer] The time that the keyboard should first get checked. See Defining the onset/duration of com-
ponents for details.
Stop : When the keyboard is no longer checked. See Defining the onset/duration of components for details.
Force end routine If this box is checked then the Routine will end as soon as one of the allowed keys is pressed.
Allowed keys A list of allowed keys can be specified here, e.g. [‘m’,’z’,‘1’,‘2’], or the name of a variable holding
such a list. If this box is left blank then any key that is pressed will be read. Only allowed keys count as having
been pressed; any other key will not be stored and will not force the end of the Routine. Note that key names
(even for number keys) should be given in single quotes, separated by commas. Cursor control keys can be
accessed with ‘up’, ‘down’, and so on; the space bar is ‘space’. To find other special keys, run the Coder Input
demo, “what_key.py”, press the key, and check the Coder output window.
Store Which key press, if any, should be stored; the first to be pressed, the last to be pressed or all that have been
pressed. If the key press is to force the end of the trial then this setting is unlikely to be necessary, unless
two keys happen to be pressed in the same video frame. The response time will also be stored if a keypress
is recorded. This time will be taken from the start of keyboard checking (e.g. if the keyboard was initiated 2
seconds into the trial and a key was pressed 3.2s into the trials the response time will be recorded as 1.2s).
5.5. Components 59
PsychoPy - Psychology software for Python, Release 3.2.0
Store correct Check this box if you wish to store whether or not this key press was correct. If so then fill in the
next box that defines what would constitute a correct answer e.g. left, 1 or $corrAns (note this should not be
in inverted commas). This is given as Python code that should return True (1) or False (0). Often this correct
answer will be defined in the settings of the Loops.
Discard previous Check this box to ensure that only key presses that occur during this keyboard checking period are
used. If this box is not checked a keyboard press that has occurred before the start of the checking period will
be interpreted as the first keyboard press. For most experiments this box should be checked.
See also:
API reference for psychopy.event
if event.getKeys(['q']):
mic.stop()
Parameters
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start [float or integer] The time that the stimulus should first play. See Defining the onset/duration of components for
details.
stop (duration): The length of time (sec) to record for. An expected duration can be given for visualisation purposes.
See Defining the onset/duration of components for details; note that only seconds are allowed.
See also:
API reference for AdvAudioCapture
The Mouse component can be used to collect responses from a participant. The coordinates of the mouse location are
given in the same coordinates as the Window, with (0,0) in the centre.
60 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Scenarios
This can be used in various ways. Here are some scenarios (email the list if you have other uses for your mouse):
Use the mouse to record the location of a button press
Use the mouse to control stimulus parameters Imagine you want to use your mouse to make your ‘patch’_ bigger
or smaller and save the final size. Call your mouse ‘mouse’, set it to save its state at the end of the trial and set
the button press to end the Routine. Then for the size setting of your Patch stimulus insert $mouse.getPos()[0]
to use the x position of the mouse to control the size or $mouse.getPos()[1] to use the y position.
Tracking the entire path of the mouse during a period
Parameters
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start : The time that the mouse should first be checked. See Defining the onset/duration of components for details.
stop : When the mouse is no longer checked. See Defining the onset/duration of components for details.
Force End Routine on Press If this box is checked then the Routine will end as soon as one of the mouse buttons is
pressed.
Save Mouse State How often do you need to save the state of the mouse? Every time the subject presses a mouse
button, at the end of the trial, or every single frame? Note that the text output for cases where you store the
mouse data repeatedly per trial (e.g. every press or every frame) is likely to be very hard to interpret, so you may
then need to analyse your data using the psydat file (with python code) instead. Hopefully in future releases the
output of the text file will be improved.
Time Relative To Whenever the mouse state is saved (e.g. on button press or at end of trial) a time is saved too. Do
you want this time to be relative to start of the Routine, or the start of the whole experiment?
See also:
API reference for Mouse
The Movie component allows movie files to be played from a variety of formats (e.g. mpeg, avi, mov).
The movie can be positioned, rotated, flipped and stretched to any size on the screen (using the Units for the window
and stimuli given).
Parameters
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : Governs the duration for which the stimulus is presented (if you want to cut a movie short). Usually you can
leave this blank and insert the Expected duration just for visualisation purposes. See Defining the onset/duration
of components for details.
movie [string] The filename of the movie, including the path. The path can be absolute or relative to the location of
the experiment (.psyexp) file.
5.5. Components 61
PsychoPy - Psychology software for Python, Release 3.2.0
pos [[X,Y]] The position of the centre of the stimulus, in the units specified by the stimulus or window
ori [degrees] Movies can be rotated in real-time too! This specifies the orientation of the movie in degrees.
size [[sizex, sizey] or a single value (applied to both x and y)] The size of the stimulus in the given units of the
stimulus/window.
units [deg, cm, pix, norm, or inherit from window] See Units for the window and stimuli
See also:
API reference for MovieStim
This component allows you to send triggers to a parallel port or to a LabJack device.
An example usage would be in EEG experiments to set the port to 0 when no stimuli are present and then set it to an
identifier value for each stimulus synchronised to the start/stop of that stimulus. In that case you might set the Start
data to be $ID (with ID being a column in your conditions file) and set the Stop Data to be 0.
Properties
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
Stop : Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for
details.
Port address [select the appropriate option] You need to know the address of the parallel port you wish to write to.
The options that appear in this drop-down list are determined by the application preferences. You can add your
particular port there if you prefer.
Start data [0-255] When the start time/condition occurs this value will be sent to the parallel port. The value is given
as a byte (a value from 0-255) controlling the 8 data pins of the parallel port.
Stop data [0-255] As with start data but sent at the end of the period.
Sync to screen [boolean] If true then the parallel port will be sent synchronised to the next screen refresh, which is
ideal if it should indicate the onset of a visual stimulus. If set to False then the data will be set on the parallel
port immediately.
See also:
API reference for iolab
The Patch stimulus allows images to be presented in a variety of forms on the screen. It allows the combination of an
image, which can be a bitmap image from a variety of standard file formats, or a synthetic repeating texture such as a
sinusoidal grating. A transparency mask can also be control the shape of the image, and this can also be derived from
either a second image, or mathematical form such as a Gaussian.
Patches can have their position, orientation, size and other settings manipulated on a frame-by-frame basis. There is a
performance advantage (in terms of milliseconds) to using images which are square and powers of two (32, 64, 128,
etc.), however this is slight and would not be noticed in the majority of experiments.
62 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Parameters
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for
details.
image [a filename, a standard name (‘sin’, ‘sqr’) or a numpy array of dimensions NxNx1 or NxNx3] This specifies
the image that will be used as the texture for the visual patch. The image can be repeated on the patch (in either
x or y or both) by setting the spatial frequency to be high (or can be stretched so that only a subset of the image
appears by setting the spatial frequency to be low). Filenames can be relative or absolute paths and can refer to
most image formats (e.g. tif, jpg, bmp, png, etc.). If this is set to none, the patch will be a flat colour.
mask [a filename, a standard name (‘gauss’, ‘circle’) or a numpy array of dimensions NxNx1] The mask can define
the shape (e.g. circle will make the patch circular) or something which overlays the patch e.g. noise.
ori [degrees] The orientation of the entire patch (texture and mask) in degrees.
pos [[X,Y]] The position of the centre of the stimulus, in the units specified by the stimulus or window
size [[sizex, sizey] or a single value (applied to x and y)] The size of the stimulus in the given units of the stimu-
lus/window. If the mask is a Gaussian then the size refers to width at 3 standard deviations on either side of the
mean (i.e. sd=size/6)
units [deg, cm, pix, norm, or inherit from window] See Units for the window and stimuli
Advanced Settings
5.5. Components 63
PsychoPy - Psychology software for Python, Release 3.2.0
The Polygon stimulus allows you to present a wide range of regular geometric shapes. The basic control comes from setting the n
Parameters
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
nVertices : integer
The number of vertices for your shape (2 gives a line, 3 gives a triangle,. . . a large number results in a
circle/ellipse). It is not (currently) possible to vary the number of vertices dynamically.
fill settings:
Control the color inside the shape. If you set this to None then you will have a transparent shape (the line
will remain)
line settings:
Control color and width of the line. The line width is always specified in pixels - it does not honour the
units parameter.
size [[w,h]] See note above
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : Governs the duration for which the stimulus is presented. See Defining the onset/duration of components for
details.
ori [degrees] The orientation of the entire patch (texture and mask) in degrees.
pos [[X,Y]] The position of the centre of the stimulus, in the units specified by the stimulus or window
units [deg, cm, pix, norm, or inherit from window] See Units for the window and stimuli
See also:
API reference for Polygon API reference for Rect API reference for ShapeStim #for arbitrary vertices
This component allows you to deliver liquid stimuli using a Cetoni neMESYS syringe pump.
Please specify the name of the pump configuration to use in the PsychoPy preferences under Hardware / Qmix
pump configuration. See the readme file of the pyqmix project for details on how to set up your computer and
create the configuration file.
64 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Properties
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
Start : The time that the stimulus should first appear.
Stop : Governs the duration for which the stimulus is presented.
Pump index [int] The index of the pump: The first pump’s index is 0, the second pump’s index is 1, etc. You may
insert the name of a variable here to adjust this value dynamically.
Syringe type [select the appropriate option] Currently, 25 mL and 50 mL glass syringes are supported. This setting
ensures that the pump will operate at the correct flow rate.
Pump action [aspirate or dispense] Whether to fill (aspirate) or to empty (dispense) the syringe.
Flow rate [float] The flow rate in the selected flow rate units.
Flow rate unit [mL/s or mL/min] The unit in which the flow rate values are supplied.
Switch valve after dosing [bool] Whether to switch the valve osition after the pump operation has finished. This can
be used to ensure a sharp(er) stimulus offset.
Sync to screen [bool] Whether to synchronize the pump operations (starting, stopping) to the screen refresh. This
ensures better synchronization with visual stimuli.
A rating scale is used to collect a numeric rating or a choice from a few alternatives, via the mouse, the keyboard, or
both. Both the response and time taken to make it are returned.
A given routine might involve an image (patch component), along with a rating scale to collect the response. A routine
from a personality questionnaire could have text plus a rating scale.
Three common usage styles are enabled on the first settings page: ‘visual analog scale’: the subject uses the
mouse to position a marker on an unmarked line
‘category choices’: choose among verbal labels (categories, e.g., “True, False” or “Yes, No, Not sure”)
‘scale description’: used for numeric choices, e.g., 1 to 7 rating
Complete control over the display options is available as an advanced setting, ‘customize_everything’.
Properties
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : The duration for which the stimulus is presented. See Defining the onset/duration of components for details.
visualAnalogScale [checkbox] If this is checked, a line with no tick marks will be presented using the ‘glow’ marker,
and will return a rating from 0.00 to 1.00 (quasi-continuous). This is intended to bias people away from thinking
in terms of numbers, and focus more on the visual bar when making their rating. This supersedes either choices
or scaleDescription.
category choices [string] Instead of a numeric scale, you can present the subject with words or phrases to choose
from. Enter all the words as a string. (Probably more than 6 or so will not look so great on the screen.) Spaces
5.5. Components 65
PsychoPy - Psychology software for Python, Release 3.2.0
are assumed to separate the words. If there are any commas, the string will be interpreted as a list of words or
phrases (possibly including spaces) that are separated by commas.
scaleDescription : Brief instructions, reminding the subject how to interpret the numerical scale, default = “1 = not
at all . . . extremely = 7”
low [str] The lowest number (bottom end of the scale), default = 1. If it’s not an integer, it will be converted to
lowAnchorText (see Advanced).
high [str] The highest number (top end of the scale), default = 7. If it’s not an integer, it will be converted to
highAnchorText (see Advanced).
Advanced settings
single click : If this box is checked the participant can only click the scale once and their response will be stored. If
this box is not checked the participant must accept their rating before it is stored.
startTime [float or integer] The time (relative to the beginning of this Routine) that the rating scale should first appear.
forceEndTrial : If checked, when the subject makes a rating the routine will be ended.
size [float] The size controls how big the scale will appear on the screen. (Same as “displaySizeFactor”.) Larger than
1 will be larger than the default, smaller than 1 will be smaller than the default.
pos [[X,Y]] The position of the centre of the stimulus, in the units specified by the stimulus or window. Default is
centered left-right, and somewhat lower than the vertical center (0, -0.4).
duration : The maximum duration in seconds for which the stimulus is presented. See duration for details. Typically,
the subject’s response should end the trial, not a duration. A blank or negative value means wait for a very long
time.
storeRatingTime: Save the time from the beginning of the trial until the participant responds.
storeRating: Save the rating that was selected
lowAnchorText [str] Custom text to display at the low end of the scale, e.g., “0%”; overrides ‘low’ setting
highAnchorText [str] Custom text to display at the low end of the scale, e.g., “100%”; overrides ‘high’ setting
customize_everything [str] If this is not blank, it will be used when initializing the rating scale just as it would
be in a code component (see RatingScale). This allows access to all the customizable aspects of a rating
scale, and supersedes all of the other RatingScale settings in the dialog panel. (This does not affect: startTime,
forceEndTrial, duration, storeRatingTime, storeRating.)
See also:
API reference for RatingScale
Parameters
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
start [float or integer] The time that the stimulus should first play. See Defining the onset/duration of components for
details.
stop : For sounds loaded from a file leave this blank and then give the Expected duration below for visualisation
purposes. See Defining the onset/duration of components for details.
66 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Parameters
name : Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers
and underscores (no punctuation marks or spaces).
start : The time that the static period begins. See Defining the onset/duration of components for details.
stop : The time that the static period ends. See Defining the onset/duration of components for details.
custom code : After running the component updates (which are defined in each component, not here) any code in-
serted here will also be run
See also:
API reference for StaticPeriod
This component can be used to present text to the participant, either instructions or stimuli.
name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces).
5.5. Components 67
PsychoPy - Psychology software for Python, Release 3.2.0
start : The time that the stimulus should first appear. See Defining the onset/duration of components for details.
stop : The duration for which the stimulus is presented. See Defining the onset/duration of components for details.
color : See Color spaces
color space [rgb, dkl or lms] See Color spaces
ori [degrees] The orientation of the stimulus in degrees.
pos [[X,Y]] The position of the centre of the stimulus, in the units specified by the stimulus or window
height [integer or float] The height of the characters in the given units of the stimulus/window. Note that nearly all
actual letters will occupy a smaller space than this, depending on font, character, presence of accents etc. The
width of the letters is determined by the aspect ratio of the font.
units [deg, cm, pix, norm, or inherit from window] See Units for the window and stimuli
opacity : Vary the transparency, from 0.0 = invisible to 1.0 = opaque
flip : Whether to mirror-reverse the text: ‘horiz’ for left-right mirroring, ‘vert’ for up-down mirroring. The flip can
be set dynamically on a per-frame basis by using a variable, e.g., $mirror, as defined in a code component or
conditions file and set to either ‘horiz’ or ‘vert’.
See also:
API reference for TextStim
A variable can hold quantities or values in memory that can be referenced using a variable name. You can store values
in a variable to use in your experiments.
Parameters
Name [string] Everything in a PsychoPy experiment needs a unique name. The name should contain only letters,
numbers and underscores (no punctuation marks or spaces). The variable name references the value stored in
memory, so that your stored values can be used in your experiments.
Start [int, float or bool] The time or condition from when you want your variable to be defined. The default value is
None, and so will be defined at the beginning of the experiment, trial or frame. See Defining the onset/duration
of components for details.
Stop [int, float or bool] The duration for which the variable is defined/updated. See Defining the onset/duration of
components for details.
Experiment start value: any The variable can take any value at the beginning of the experiment, so long as you
define you variables using literals or existing variables.
Routine start value [any] The variable can take any value at the beginning of a routine/trial, and can remain a con-
stant, or be defined/updated on every routine.
Frame start value [any] The variable can take any value at the beginning of a frame, or during a condition bases on
Start and/or Stop.
Save exp start value [bool] Choose whether or not to save the experiment start value to your data file.
Save routine start value [bool] Choose whether or not to save the routine start value to your data file.
Save frame value [bool and drop=down menu] Frame values are contained within a list for each trial, and discarded
at the end of each trial. Choose whether or not to take the first, last or average variable values from the frame
container, and save to your data file.
68 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Save routine start value [bool] Choose whether or not to save the routine end value to your data file.
Save exp start value [bool] Choose whether or not to save the experiment end value to your data file.
Most of the entry boxes for Component parameters simply receive text or numeric values or lists (sequences of values
surrounded by square brackets) as input. In addition, the user can insert variables and code into most of these, which
will be interpreted either at the beginning of the experiment or at regular intervals within it.
To indicate to PsychoPy that the value represents a variable or python code, rather than literal text, it should be preceded
by a $. For example, inserting intensity into the text field of the Text Component will cause that word literally to be
presented, whereas $intensity will cause python to search for the variable called intensity in the script.
Variables associated with Loops can also be entered in this way (see Accessing loop parameters from components for
further details). But it can also be used to evaluate arbitrary python code.
For example:
• $random(2) will generate a pair of random numbers
• $”yn”[randint(2)] will randomly choose the first or second character (y or n)
• $globalClock.getTime() will insert the current time in secs of the globalClock object
• $[sin(angle), cos(angle)] will insert the sin and cos of an angle (e.g. into the x,y coords of a stimulus)
If you do want the parameters of a stimulus to be evaluated by code in this way you need also to decide how often it
should be updated. By default, the parameters of Components are set to be constant; the parameter will be set at the
beginning of the experiment and will remain that way for the duration. Alternatively, they can be set to change either
on every repeat in which case the parameter will be set at the beginning of the Routine on each repeat of it. Lastly
many parameters can even be set on every frame, allowing them to change constantly on every refresh of the screen.
The settings menu can be accessed by clicking the icon at the top of the window. It allows the user to set various
aspects of the experiment, such as the size of the window to be used or what information is gathered about the subject
and determine what outputs (data files) will be generated.
5.6.1 Settings
Basic settings
Experiment name: A name that will be stored in the metadata of the data file.
Show info dlg: If this box is checked then a dialog will appear at the beginning of the experiment allowing the
Experiment Info to be changed.
Experiment Info: This information will be presented in a dialog box at the start and will be saved with any data files
and so can be used for storing information about the current run of the study. The information stored here can
also be used within the experiment. For example, if the Experiment Info included a field called ori then Builder
Components could access expInfo[‘ori’] to retrieve the orientation set here. Obviously this is a useful way to
run essentially the same experiment, but with different conditions set at run-time.
Enable escape: If ticked then the Esc key can be used to exit the experiment at any time (even without a keyboard
component)
Data settings
Data filename: (new in version 1.80.00): A formatted string to control the base filename and path, often based on
variables such as the date and/or the participant. This base filename will be given the various extensions for the
different file types as needed. Examples:
Save Excel file: If this box is checked an Excel data file (.xlsx) will be stored.
Save csv file: If this box is checked a comma separated variable (.csv) will be stored.
Save psydat file: If this box is checked a PsychoPy data file (.psydat) will be stored. This is a Python specific format
(.pickle files) which contains more information that .xlsx or .csv files that can be used with data analysis and
plotting scripts written in Python. Whilst you may not wish to use this format it is recommended that you always
save a copy as it contains a complete record of the experiment at the time of data collection.
Save log file A log file provides a record of what occurred during the experiment in chronological order, including
information about any errors or warnings that may have occurred.
Logging level How much detail do you want to be output to the log file, if it is being saved. The lowest level is error,
which only outputs error messages; warning outputs warnings and errors; info outputs all info, warnings and
errors; debug outputs all info that can be logged. This system enables the user to get a great deal of information
while generating their experiments, but then reducing this easily to just the critical information needed when
actually running the study. If your experiment is not behaving as you expect it to, this is an excellent place to
begin to work out what the problem is.
Screen settings
Monitor The name of the monitor calibration. Must match one of the monitor names from Monitor Center.
Screen: If multiple screens are available (and if the graphics card is not an intel integrated graphics chip) then the
user can choose which screen they use (e.g. 1 or 2).
Full-screen window: If this box is checked then the experiment window will fill the screen (overriding the window
size setting and using the size that the screen is currently set to in the operating system settings).
Window size: The size of the window in pixels, if this is not to be a full-screen window.
Units The default units of the window (see Units for the window and stimuli). These can be overridden by individual
Components.
As of version 1.70.00, the onset and offset times of stimuli can be defined in several ways.
70 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
Start and stop times can be entered in terms of seconds (time (s)), by frame number (frameN) or in relation to another
stimulus (condition). Condition would be used to make Components start or stop depending on the status of something
else, for example when a sound has finished. Duration can also be varied using a Code Component.
If you need very precise timing (particularly for very brief stimuli for instance) then it is best to control your on-
set/duration by specifying the number of frames the stimulus will be presented for.
Measuring duration in seconds (or milliseconds) is not very precise because it doesn’t take into account the fact that
your monitor has a fixed frame rate. For example if the screen has a refresh rate of 60Hz you cannot present your
stimulus for 120ms; the frame rate would limit you to 116.7ms (7 frames) or 133.3ms (8 frames). The duration of a
frame (in seconds) is simply 1/refresh rate in Hz.
Condition would be used to make Components start or stop depending on the status of something else, for example
when a movie has finished. Duration can also be varied using a code component.
In cases where PsychoPy cannot determine the start/endpoint of your Component (e.g. because it is a variable) you can
enter an ‘Expected’ start/duration. This simply allows components with variable durations to be drawn in the Routine
window. If you do not enter the approximate duration it will not be drawn, but this will not affect experimental
performance.
For more details of how to achieve good temporal precision see Timing Issues and synchronisation
5.7.1 Examples
• Use time(s) or frameN and simply enter numeric values into the start and duration boxes.
• Use time(s) or frameN and enter a numeric value into the start time and set the duration to a variable name by
preceeding it with a $ as described here. Then set expected time to see an approximation in your routine
• Use condition to cause the stimulus to start immediately after a movie component called myMovie, by entering
$myMovie.status==FINISHED into the start time.
• If your experiment is not behaving in the way that you expect. Have you looked at the log file? This can point
you in the right direction. Did you know you can change the type of information that is stored in the log file in
preferences by changing the logging level.
• Have you tried compiling the script and running it. Does this produce a particular error message that points you
at a particular problem area? You can also change things in a more detailed way in the coder view and if you are
having problems, reading through the script can highlight problems. Reading a compiled script can also help
with the creation of a Code Component
• Have you checked the size of your stimulus? If it is 0.5x0.5 pixels you won’t be able to see it!
• Have you checked the position of your stimulus? Is it positioned off the screen?
• Have you remembered to specify the file you want to use when setting up the loop?
• Have you remembered to add the variables proceeded by the $ symbol to your stimuli?
5.9.4 I just want a plain square, but it’s turning into a grating
• If you don’t want your stimulus to have a texture, you need Image to be None
• Have you remembered to put a $ symbol at the beginning (this isn’t necessary, and should be avoided in a Code
Component)?
• A dollar sign as the first character of a line indicates to PsychoPy that the rest of the line is code. It does not
indicate a variable name (unlike in perl or php). This means that if you are, for example, using variables to
determine position, enter $[x,y]. The temptation is to use [$x,$y], which will not work.
• Have you changed the setting for the variable that you want to change to ‘change every repeat’ (or ‘change every
frame’)?
5.9.7 I’m getting the error message AttributeError: ‘unicode object has no attribute
‘XXXX’
• This type of error is usually caused by a naming conflict. Whilst we have made every attempt to make sure that
these conflicts produce a warning message it is possible that they may still occur.
• The most common source of naming conflicts in an external file which has been imported to be used in a loop
i.e. .xlsx, .csv.
• Check to make sure that all of the variable names are unique. There can be no repeated variable names anywhere
in your experiment.
72 Chapter 5. Builder
PsychoPy - Psychology software for Python, Release 3.2.0
• Have you checked all of your variable entries are accepted commands e.g. gauss but not Gauss
• If you compile your experiment and run it from the coder window what does the error message say? Does it
point you towards a particular variable which may be incorrectly formatted?
If you are having problems getting the application to run please see Troubleshooting
If you click the compile script icon this will display the script for your experiment in the Coder window.
This can be used for debugging experiments, entering small amounts of code and learning a bit about writing scripts
amongst other things.
The code is fully commented and so this can be an excellent introduction to writing your own code.
It’s a really good idea to tell PsychoPy about the set up of your monitor, especially the size in cm and pixels and its
distance, so that PsychoPy can present your stimuli in units that will be consistent in another lab with a different set
up (e.g. cm or degrees of visual angle).
You should do this in Monitor Center which can be opened from Builder by clicking on the icon that shows two
monitors. In Monitor Center you can create settings for multiple configurations, e.g. different viewing distances or
different physical devices and then select the appropriate one by name in your experiments or scripts.
Having set up your monitor settings you should then tell PsychoPy which of your monitor setups to use for this
experiment by going to the Experiment settings dialog.
The new big feature, which we’re really excited about is that Builder experiments are going to web-enabled very soon!
Make sure you watch for new posts in the PsychoPy forum Announcements category so you get updates of when this
is available.
74 Chapter 5. Builder
CHAPTER
SIX
CODER
Note: These do not teach you about Python per se, and you are recommended also to learn about that (Python has
many excellent tutorials for programmers and non-programmers alike). In particular, dictionaries, lists and numpy
arrays are used a great deal in most PsychoPy experiments.
You can learn to use the scripting interface to PsychoPy in several ways, and you should probably follow a combination
of them:
• Basic Concepts: some of the logic of PsychoPy scripting
• PsychoPy Tutorials: walk you through the development of some semi-complete experiments
• demos: in the demos menu of Coder view. Many and varied
• use the Builder to compile a script and see how it works
• check the Reference Manual (API) for further details
• ultimately go into PsychoPy and start examining the source code. It’s just regular python!
Note: Before you start, tell PsychoPy about your monitor(s) using the Monitor Center. That way you get to use units
(like degrees of visual angle) that will transfer easily to other computers.
Stimulus objects
Python is an ‘object-oriented’ programming language, meaning that most stimuli in PsychoPy are represented by
python objects, with various associated methods and information.
Typically you should create your stimulus with the initial desired attributes once, at the beginning of the script, and
then change select attributes later (see section below on setting stimulus attributes). For instance, create your text and
then change its color any time you like:
75
PsychoPy - Psychology software for Python, Release 3.2.0
However, scalars can also be used to assign x,y-pairs. In that case, both x and y get the value of the scalar. E.g.:
stim.size = 0.5
print stim.size # array([0.5, 0.5])
Operations on attributes: Operations during assignment of attributes are a handy way to smoothly alter the appear-
ance of your stimuli in loops.
Most scalars and x,y-pairs support the basic operations:
76 Chapter 6. Coder
PsychoPy - Psychology software for Python, Release 3.2.0
However, they can also be used on x,y-pairs in very flexible ways. Here you can use both scalars and x,y-pairs
as operators. In the latter case, the operations are element-wise:
Timing
# Setup stimulus
win = visual.Window([400, 400])
gabor = visual.GratingStim(win, tex='sin', mask='gauss', sf=5, name='gabor')
gabor.autoDraw = True # Automatically draw every frame
gabor.autoLog = False # Or we'll get many messages about phase change
Clocks are accurate to around 1ms (better on some platforms), but using them to time stimuli is not very accurate
because it fails to account for the fact that one frame on your monitor has a fixed frame rate. In the above, the stimulus
does not actually get drawn for exactly 0.5s (500ms). If the screen is refreshing at 60Hz (16.7ms per frame) and the
getTime() call reports that the time has reached 1.999s, then the stimulus will draw again for a frame, in accordance
with the while loop statement and will ultimately be displayed for 2.0167s. Alternatively, if the time has reached
2.001s, there will not be an extra frame drawn. So using this method you get timing accurate to the nearest frame
period but with little consistent precision. An error of 16.7ms might be acceptable to long-duration stimuli, but not to
a brief presentation. It also might also give the false impression that a stimulus can be presented for any given period.
At 60Hz refresh you can not present your stimulus for, say, 120ms; the frame period would limit you to a period of
116.7ms (7 frames) or 133.3ms (8 frames).
As a result, the most precise way to control stimulus timing is to present them for a specified number of frames. The
frame rate is extremely precise, much better than ms-precision. Calls to Window.flip() will be synchronised to the
frame refresh; the script will not continue until the flip has occurred. As a result, on most cards, as long as frames are
not being ‘dropped’ (see Detecting dropped frames) you can present stimuli for a fixed, reproducible period.
Note: Some graphics cards, such as Intel GMA graphics chips under win32, don’t support frame sync. Avoid
integrated graphics for experiment computers wherever possible.
Using the concept of fixed frame periods and flip() calls that sync to those periods we can time stimulus presentation
extremely precisely with the following:
# Setup stimulus
win = visual.Window([400, 400])
gabor = visual.GratingStim(win, tex='sin', mask='gauss', sf=5,
name='gabor', autoLog=False)
fixation = visual.GratingStim(win, tex=None, mask='gauss', sf=0, size=0.02,
name='fixation', autoLog=False)
# Let's draw a stimulus for 200 frames, drifting for frames 50:100
for frameN in range(200): # For exactly 200 frames
if 10 <= frameN < 150: # Present fixation for a subset of frames
fixation.draw()
if 50 <= frameN < 100: # Present stim for a different subset
gabor.phase += 0.1 # Increment by 10th of cycle
gabor.draw()
win.flip()
Using autoDraw
Stimuli are typically drawn manually on every frame in which they are needed, using the draw() function. You can
also set any stimulus to start drawing every frame using stim.autoDraw = True or stim.autoDraw = False. If you use
these commands on stimuli that also have autoLog=True, then these functions will also generate a log message on the
frame when the first drawing occurs and on the first frame when it is confirmed to have ended.
TrialHandler and StairHandler can both generate data outputs in which responses are stored, in relation to the stimulus
conditions. In addition to those data outputs, PsychoPy can create detailed chronological log files of events during the
experiment.
78 Chapter 6. Coder
PsychoPy - Psychology software for Python, Release 3.2.0
Log messages have various levels of severity: ERROR, WARNING, DATA, EXP, INFO and DEBUG
Multiple targets can also be created to receive log messages. Each target has a particular critical level and receives all
logged messages greater than that. For example, you could set the console (visual output) to receive only warnings
and errors, have a central log file that you use to store warning messages across studies (with file mode append), and
another to create a detailed log of data and events within a single study with level=INFO:
For performance purposes log files are not actually written when the log commands are ‘sent’. They are stored in a
list and processed automatically when the script ends. You might also choose to force a flush of the logged messages
manually during the experiment (e.g. during an inter-trial interval):
...
This should only be necessary if you want to see the logged information as the experiment progresses.
AutoLogging
Manual methods
In addition to a variety of automatic logging messages, you can create your own, of various levels. These can be
timestamped immediately:
There are additional convenience functions for the above: logging.warn(‘a warning’) etc.
For stimulus changes you probably want the log message to be timestamped based on the frame flip (when the stimulus
is next presented) rather than the time that the log message is sent:
TrialHandler
This is what underlies the random and sequential loop types in Builder, they work using the method of constants. The
trialHandler presents a predetermined list of conditions in either a sequential or random (without replacement) order.
see TrialHandler for more details.
StairHandler
This generates the next trial using an adaptive staircase. The conditions are not predetermined and are generated based
on the participant’s responses.
Staircases are predominately used in psychophysics to measure the discrimination and detection thresholds. However
they can be used in any experiment which varies a numeric value as a result of a 2 alternative forced choice (2AFC)
response.
The StairHandler systematically generates numbers based on staircase parameters. These can then be used to define a
stimulus parameter e.g. spatial frequency, stimulus presentation duration. If the participant gives the incorrect response
the number generated will get larger and if the participant gives the correct response the number will get smaller.
see StairHandler for more details
Global event keys are single keys (or combinations of a single key and one or more “modifier” keys such as Ctrl,
Alt, etc.) with an associated Python callback function. This function will be executed if the key (or key/modifiers
combination) was pressed.
80 Chapter 6. Coder
PsychoPy - Psychology software for Python, Release 3.2.0
Note: Global event keys only work with the pyglet backend, which is the default.
PsychoPy fully automatically monitors and processes key presses during most portions of the experimental run, for
example during core.wait() periods, or when calling win.flip(). If a global event key press is detected, the specified
function will be run immediately. You are not required to manually poll and check for key presses. This can be
particularly useful to implement a global “shutdown” key, or to trigger laboratory equipment on a key press when
testing your experimental script – without cluttering the code. But of course the application is not limited to these two
scenarios. In fact, you can associate any Python function with a global event key.
All active global event keys are stored in event.globalKeys.
First, let’s ensure no global event keys are currently set by calling func:event.globalKeys.clear.
To add a new global event key, you need to invoke func:event.globalKeys.add. This function has two required argu-
ments: the key name, and the function to associate with that key.
Looking at event.globalKeys, we can see that the global event key has indeed been created.
>>> event.globalKeys
<_GlobalEventKeys :
[A] -> 'myfunc' <function myfunc at 0x10669ba28>
>
Your output should look similar (the exact memory address will differ). We can take a closer look at the specific global event key we added.
>>> event.globalKeys['a']
_GlobalEvent(func=<function myfunc at 0x10669ba28>, func_args=(), func_kwargs={}, name='myfunc')
Note: Pressing the key won't do anything unless a psychopy.visual.Window is created and its flip() method or psychopy.core.wait() are called.
We are going to associate a function with a more complex calling signature (with positional and keyword arguments)
with a global event key. First, let’s create the dummy function:
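A sketch of such a dummy function:

>>> def myfunc2(*args, **kwargs):
...     print('myfunc2 got args: %s, kwargs: %s' % (args, kwargs))
...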
Next, compile some positional and keyword arguments and a custom name for this event. Positional arguments must be passed as lists or tuples, and keyword arguments as dictionaries.
Note: Even when intending to pass only a single positional argument, args must be a list or tuple, e.g., args = [1] or args = (1,).
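For example:

>>> args = (1,)
>>> kwargs = dict(a=2, b=3)
>>> name = 'my name'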
Finally, specify the key and a combination of modifiers. While key names are just strings, modifiers are lists or tuples
of modifier names.
Note: Even when specifying only a single modifier key, modifiers must be a list or tuple, e.g., modifiers = ['ctrl'] or modifiers = ('ctrl',).
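Putting it together (a sketch consistent with the output shown below):

>>> modifiers = ['ctrl', 'alt']
>>> event.globalKeys.add(key='b', modifiers=modifiers, func=myfunc2,
...                      func_args=args, func_kwargs=kwargs, name=name)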
>>> event.globalKeys
<_GlobalEventKeys :
[A] -> 'myfunc' <function myfunc at 0x10669ba28>
[CTRL] + [ALT] + [B] -> 'my name' <function myfunc2 at 0x112eecb90>
>
The key combination [CTRL] + [ALT] + [B] is now associated with the function myfunc2, which will be called in the
following way:
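That is, with the arguments compiled above, pressing the combination is equivalent to calling:

myfunc2(1, a=2, b=3)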
Indexing
event.globalKeys can be accessed like an ordinary dictionary. The index keys are (key, modifiers) namedtuples.
>>> event.globalKeys.keys()
[_IndexKey(key='a', modifiers=()), _IndexKey(key='b', modifiers=('ctrl', 'alt'))]
To access the global event associated with the key combination [CTRL] + [ALT] + [B], we can do
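For example (a sketch):

>>> event.globalKeys['b', ['ctrl', 'alt']]
_GlobalEvent(func=<function myfunc2 at 0x112eecb90>, func_args=(1,), func_kwargs={'a': 2, 'b': 3}, name='my name')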
To make access more convenient, specifying the modifiers is optional if none were passed to event.globalKeys.add() when the global event key was added, meaning the following commands are identical.
>>> event.globalKeys['a']
_GlobalEvent(func=<function myfunc at 0x10669ba28>, func_args=(), func_kwargs={}, name='myfunc')
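The equivalent lookup with explicit (empty) modifiers would be:

>>> event.globalKeys['a', ()]
_GlobalEvent(func=<function myfunc at 0x10669ba28>, func_args=(), func_kwargs={}, name='myfunc')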
The number of currently active event keys can be retrieved by passing event.globalKeys to the len() function.
>>> len(event.globalKeys)
2
psychopy.event.globalKeys.remove()
To remove a single key, pass the key name and modifiers (if any) to psychopy.event.globalKeys.remove().
>>> event.globalKeys.remove(key='a')
A convenience method to quickly delete all global event keys is to pass key='all':
>>> event.globalKeys.remove(key='all')
del
Like with other dictionaries, items can be removed from event.globalKeys by using the del statement. The provided
index key must be specified as described in Indexing.
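For example:

>>> del event.globalKeys['a']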
psychopy.event.globalKeys.pop()
Again, as other dictionaries, event.globalKeys provides a pop method to retrieve an item and remove it from the dict.
The first argument to pop is the index key, specified as described in Indexing. The second argument is optional. Its
value will be returned in case no item with the matching indexing key could be found, for example if the item had
already been removed previously.
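For example (returns None here if the key was already removed):

>>> event.globalKeys.pop('a', None)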
The PsychoPy preferences for shutdownKey and shutdownKeyModifiers (both unset by default) will be used to auto-
matically create a global shutdown key. To demonstrate this automated behavior, let us first change the preferences
programmatically (these changes will be lost when quitting the current Python session).
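A sketch of such a programmatic change:

>>> from psychopy import prefs
>>> prefs.general['shutdownKey'] = 'q'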
We can now check if a global shutdown key has been automatically created.
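For instance (a sketch; the exact registration timing and output may vary between versions):

>>> from psychopy import event
>>> event.globalKeys
<_GlobalEventKeys :
    [Q] -> 'shutdown' <function quit at ...>
>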
Of course you can very easily add a global shutdown key manually, too. You simply have to associate a key with psychopy.core.quit().
>>> from psychopy import core, event
>>> event.globalKeys.add(key='q', func=core.quit, name='shutdown')
That’s it!
A working example
In the above code snippets, our global event keys were not actually functional, as we didn’t create a window, which is
required to actually collect the key presses. Our working example will thus first create a window and then add global
event keys to change the window color and quit the experiment, respectively.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import random
from psychopy import core, event, visual

win = visual.Window(color='gray')
text = visual.TextStim(win,
    text='Press C to change color,\n CTRL + Q to quit.')

def change_color():
    # a sketch of the color-change callback described above
    win.color = [random.uniform(-1, 1) for _ in range(3)]

# Global event key to change the window color.
event.globalKeys.add(key='c', func=change_color)
# Global event key (with modifier) to quit the experiment ("shutdown key").
event.globalKeys.add(key='q', modifiers=['ctrl'], func=core.quit)

while True:
    text.draw()
    win.flip()
PsychoPy has been designed to handle your screen calibrations for you. It is also designed to operate (if possible) in
the final experimental units that you like to use e.g. degrees of visual angle.
In order to do this PsychoPy needs to know a little about your monitor. There is a GUI to help with this (select
MonitorCenter from the tools menu of PsychoPyIDE or run . . . site-packages/monitors/MonitorCenter.py).
In the MonitorCenter window you can create a new monitor name, insert values that describe your monitor and run
calibrations like gamma corrections. For now you can just stick to the [testMonitor] but give it correct values for your
screen size in number of pixels and width in cm.
Now, when you create a window on your monitor you can give it the name ‘testMonitor’ and stimuli will know how
they should be scaled appropriately.
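For example, a minimal sketch:

from psychopy import visual
win = visual.Window([1024, 768], monitor='testMonitor', units='deg')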
Building stimuli is extremely easy. All you need to do is create a Window, then some stimuli. Draw those stimuli,
then update the window. PsychoPy has various other useful commands to help with timing too. Here’s an example.
Type it into a coder window, save it somewhere and press run.
1 from psychopy import visual, core # import some libraries from PsychoPy
2
3 #create a window
4 mywin = visual.Window([800,600], monitor="testMonitor", units="deg")
5
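The rest of this snippet did not survive extraction; a sketch of the missing portion, matching the note below (a grating and a fixation point, both built from GratingStim):

# create some stimuli
grating = visual.GratingStim(win=mywin, mask='circle', size=3, pos=[-4, 0], sf=3)
fixation = visual.GratingStim(win=mywin, size=0.5, pos=[0, 0], sf=0, color=-1)

# draw the stimuli and update the window
grating.draw()
fixation.draw()
mywin.flip()

# pause, so you get a chance to see it!
core.wait(5.0)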
Note: For those new to Python: did you notice that the grating and the fixation stimuli both call GratingStim but have different arguments? One of the nice features of Python is that you can select which arguments to set. GratingStim has over 15 arguments that can be set, but the others just take on default values if they aren't needed.
That’s a bit easy though. Let’s make the stimulus move, at least! To do that we need to create a loop where we change
the phase (or orientation, or position. . . ) of the stimulus and then redraw. Add this code in place of the drawing code
above:
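A sketch of that loop (replacing the three draw/flip lines):

for frameN in range(200):
    grating.phase += 0.05  # advance the phase by 1/20 of a cycle each frame
    grating.draw()
    fixation.draw()
    mywin.flip()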
That ran for 200 frames (and then waited 5 seconds as well). Maybe it would be nicer to keep updating until the user
hits a key instead. That’s easy to add too. In the first line add event to the list of modules you’ll import. Then replace
the line:
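That is, swap the fixed-count loop header:

for frameN in range(200):

for an endless one:

while True: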
Then, within the loop (make sure it has the same indentation as the other lines) add the lines:
if len(event.getKeys())>0:
    break
event.clearEvents()
The first line counts how many keys have been pressed since the last frame. If more than zero are found then we break out of the never-ending loop. The second line clears the event buffer and should always be called after you've collected the events you want (otherwise it gets full of events that we don't care about, like the mouse moving around etc.).
Your finished script should look something like this:
1 from psychopy import visual, core, event  # import some libraries from PsychoPy
2
3 # create a window
4 mywin = visual.Window([800,600], monitor="testMonitor", units="deg")
5
6 # create some stimuli
7 grating = visual.GratingStim(win=mywin, mask="circle", size=3, pos=[-4,0], sf=3)
8 fixation = visual.GratingStim(win=mywin, size=0.5, pos=[0,0], sf=0, color=-1)
9
10 # draw the stimuli and update the window
11 while True:  # this creates a never-ending loop
12     grating.phase += 0.05  # advance phase by 1/20 of a cycle
13     grating.draw()
14     fixation.draw()
15     mywin.flip()
16
17     if len(event.getKeys())>0:
18         break
19     event.clearEvents()
20
21 # cleanup
22 mywin.close()
23 core.quit()
There are several more simple scripts like this in the demos menu of the Coder and Builder views and many more to
download. If you’re feeling like something bigger then go to Tutorial 2: Measuring a JND using a staircase procedure
which will show you how to build an actual experiment.
This tutorial builds an experiment to test your just-noticeable-difference (JND) to orientation, that is, it determines the smallest angular deviation needed for you to detect that a gabor stimulus isn't vertical (or at some other reference orientation). The method presents a pair of stimuli at once, with the observer having to report with a key press whether the left or the right stimulus was at the reference orientation (e.g. vertical).
You can download the full code here. Note that the entire experiment is constructed of less than 100 lines of code,
including the initial presentation of a dialogue for parameters, generation and presentation of stimuli, running the
trials, saving data and outputting a simple summary analysis for feedback. Not bad, eh?
There are a great many modifications that can be made to this code, however this example is designed to demonstrate how much can be achieved with very simple code. Modifying existing code is an excellent way to begin writing your own scripts; for example, you may want to try changing the appearance of the text or the stimuli.
The first lines of code import the necessary libraries. We need lots of the PsychoPy modules for a full experiment, as
well as numpy (which handles various numerical/mathematical functions):
1 """measure your JND in orientation using a staircase method"""
2 from psychopy import core, visual, gui, data, event
3 from psychopy.tools.filetools import fromFile, toFile
4 import numpy, random
Also note that there are two ways to insert comments in Python (and you should do this often!). Using triple quotes, as in """Here's my comment""", allows you to write a comment that can span several lines. Often you need that at the start of your script to leave yourself a note about the implementation and history of what you've written. For single-line comments, as you'll see below, you can use a simple # to indicate that the rest of the line is a comment.
The try:...except:... lines allow us to try to load a parameter file from a previous run of the experiment. If that fails (e.g. because the experiment has never been run) then we create a default set of parameters. These are easy to store in a Python dictionary that we'll call expInfo:
6 try: # try to get a previous parameters file
7 expInfo = fromFile('lastParams.pickle')
8 except: # if not there then use a default set
9 expInfo = {'observer':'jwp', 'refOrientation':0}
10 expInfo['dateStr'] = data.getDateStr() # add the current time
The last line adds the current date to the information, whether we loaded from a previous run or created default values.
So having loaded those parameters, let's allow the user to change them in a dialogue box (which we'll call dlg). This is the simplest form of dialogue, created directly from the dictionary above. The dialogue will be presented immediately to the user and the script will wait until they hit OK or Cancel.
If they hit OK then dlg.OK=True, in which case we’ll use the updated values and save them straight to a parameters
file (the one we try to load above).
If they hit Cancel then we’ll simply quit the script and not save the values.
11 # present a dialogue to change params
12 dlg = gui.DlgFromDict(expInfo, title='simple JND Exp', fixed=['dateStr'])
13 if dlg.OK:
14     toFile('lastParams.pickle', expInfo)  # save params to file for next time
15 else:
16     core.quit()  # the user hit cancel so exit
We’ll create a file to which we can output some data as text during each trial (as well as outputting a binary file at the
end of the experiment). PsychoPy actually has supporting functions to do this automatically, but here we’re showing
you the manual way to do it.
We'll create a filename from the subject+date+'.csv' (note how easy it is to concatenate strings in Python just by 'adding' them). csv files can be opened in most spreadsheet packages. Having opened a text file for writing, the last line shows how easy it is to send text to this target document.
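The lines that create the file were lost in extraction; a sketch (variable names assumed):

# make a text file to save data
fileName = expInfo['observer'] + expInfo['dateStr']
dataFile = open(fileName + '.csv', 'w')  # a simple text file with 'comma-separated-values'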
21 dataFile.write('targetSide,oriIncrement,correct\n')
PsychoPy allows us to set up an object to handle the presentation of stimuli in a staircase procedure, the
StairHandler. This will define the increment of the orientation (i.e. how far it is from the reference orienta-
tion). The staircase can be configured in many ways, but we’ll set it up to begin with an increment of 20deg (very
detectable) and home in on the 80% threshold value. We’ll step up our increment every time the subject gets a wrong
answer and step down if they get three right answers in a row. The step size will also decrease after every 2 reversals,
starting with an 8dB step (large) and going down to 1dB steps (smallish). We’ll finish after 50 trials.
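A sketch of a StairHandler configured as described (argument names from psychopy.data):

staircase = data.StairHandler(startVal=20.0,
                              stepType='db', stepSizes=[8, 4, 4, 2],
                              nUp=1, nDown=3,  # will home in on the 80% threshold
                              nTrials=50)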
Now we need to create a window, some stimuli and timers. We need a psychopy.visual.Window in which to draw our stimuli, a fixation point and two psychopy.visual.GratingStim stimuli (one for the target probe and one as the foil).
We can have as many timers as we like and reset them at any time during the experiment, but I generally use one to
measure the time since the experiment started and another that I reset at the beginning of each trial.
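A sketch of that setup (parameter values assumed):

# create window and stimuli
win = visual.Window([800, 600], allowGUI=True, monitor='testMonitor', units='deg')
foil = visual.GratingStim(win, sf=1, size=4, mask='gauss', ori=expInfo['refOrientation'])
target = visual.GratingStim(win, sf=1, size=4, mask='gauss', ori=expInfo['refOrientation'])
fixation = visual.GratingStim(win, color=-1, colorSpace='rgb', tex=None, mask='circle', size=0.2)

# and some handy clocks to keep track of time
globalClock = core.Clock()
trialClock = core.Clock()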
Once the stimuli are created we should give the subject a message asking if they're ready. The next two lines create a pair of messages, then draw them onto the screen and then update the screen to show what we've drawn. Finally we issue the command event.waitKeys() which will wait for a keypress before continuing.
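A sketch of those two messages (wording assumed):

message1 = visual.TextStim(win, pos=[0, +3], text='Hit a key when ready.')
message2 = visual.TextStim(win, pos=[0, -3],
    text='Then press left or right to identify the %.1f deg probe.' % expInfo['refOrientation'])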
46 message1.draw()
47 message2.draw()
48 fixation.draw()
49 win.flip()#to show our newly drawn 'stimuli'
50 #pause until there's a keypress
51 event.waitKeys()
OK, so we have everything that we need to run the experiment. The following uses a for-loop that will iterate over
trials in the experiment. With each pass through the loop the staircase object will provide the new value for the
intensity (which we will call thisIncrement). We will randomly choose a side to present the target stimulus using
numpy.random.random(), setting the position of the target to be there and the foil to be on the other side of the
fixation point.
Then set the orientation of the foil to be the reference orientation plus thisIncrement, draw all the stimuli (in-
cluding the fixation point) and update the window.
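A sketch of how the loop might begin (names follow the surrounding excerpts):

for thisIncrement in staircase:  # will continue until the staircase terminates
    # randomly choose a side for the target (+1 = right, -1 = left)
    targetSide = 1 if numpy.random.random() > 0.5 else -1
    target.setPos([5 * targetSide, 0])
    foil.setPos([-5 * targetSide, 0])  # the foil goes on the other side

    # set the orientation of the foil
    foil.setOri(expInfo['refOrientation'] + thisIncrement)

    # draw all stimuli and update the window
    foil.draw()
    target.draw()
    fixation.draw()
    win.flip()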
Wait for presentation time of 500ms and then blank the screen (by updating the screen after drawing just the fixation
point).
68 # wait 500ms; but use a loop of x frames for more accurate timing
69 core.wait(0.5)
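Then the blanking step (a sketch):

# blank the screen, leaving just the fixation point
fixation.draw()
win.flip()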
(This is not the most precise way to time your stimuli - you'll probably overshoot by one frame - but it's easy to understand. PsychoPy allows you to present a stimulus for a certain number of screen refreshes instead, which is better for short stimuli.)
Still within the for-loop (note the level of indentation is the same) we need to get the response from the subject. The method works by starting off assuming that there hasn't yet been a response and then waiting for a key press. For each key pressed we check if the answer was correct or incorrect and assign the response appropriately, which ends the trial. We always have to clear the event buffer if we're checking for key presses like this.
75 # get response
76 thisResp = None
77 while thisResp == None:
78     allKeys = event.waitKeys()
79     for thisKey in allKeys:
80         if thisKey == 'left':
81             if targetSide == -1: thisResp = 1  # correct
82             else: thisResp = -1  # incorrect
83         elif thisKey == 'right':
84             if targetSide == 1: thisResp = 1  # correct
85             else: thisResp = -1  # incorrect
86         elif thisKey in ['q', 'escape']:
87             core.quit()  # abort the experiment
88     event.clearEvents()  # clear other (e.g. mouse) events - they clog the buffer
Now we must tell the staircase the result of this trial with its addData() method. Then it can work out whether the
next trial is an increment or decrement. Also, on each trial (so still within the for-loop) we may as well save the data
as a line of text in that .csv file we created earlier.
90 # add the data to the staircase so it can calculate the next level
91 staircase.addData(thisResp)
92 dataFile.write('%i,%.3f,%i\n' %(targetSide, thisIncrement, thisResp))
93 core.wait(1)
OK! We’re basically done! We’ve reached the end of the for-loop (which occurred because the staircase terminated)
which means the trials are over. The next step is to close the text data file and also save the staircase as a binary file
(by ‘pickling’ the file in Python speak) which maintains a lot more info than we were saving in the text file.
While we're here, it's quite nice to give some immediate feedback to the user. Let's tell them the intensity values at all the reversals and give them the mean of the last 6. This is an easy way to get an estimate of the threshold, but we might be able to do a better job by trying to reconstruct the psychometric function. To give that a try see the staircase analysis script of Tutorial 3.
Having saved the data you can give your participant some feedback and quit!
99 # give some output to user in the command line in the output window
100 print('reversals:')
101 print(staircase.reversalIntensities)
102 approxThreshold = numpy.average(staircase.reversalIntensities[-6:])
103 print('mean of final 6 reversals = %.3f' % (approxThreshold))
104
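The elided lines (105-109) build the on-screen feedback message; a sketch:

# give some on-screen feedback
feedback1 = visual.TextStim(win, pos=[0, +3],
    text='mean of final 6 reversals = %.3f' % approxThreshold)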
110 feedback1.draw()
111 fixation.draw()
112 win.flip()
113 event.waitKeys() # wait for participant to respond
114
115 win.close()
116 core.quit()
You could simply output your data as tab- or comma-separated text files and analyse the data in some spreadsheet
package. But the matplotlib library in Python also allows for very neat and simple creation of publication-quality
plots.
This script shows you how to use a couple of functions from PsychoPy to open some data files (psychopy.gui.fileOpenDlg()) and create a psychometric function out of some staircase data (psychopy.data.functionFromStaircase()).
Matplotlib is then used to plot the data.
Note: Matplotlib and pylab. Matplotlib is a Python library that has similar command syntax to most of the plotting functions in Matlab(tm). It can be imported in different ways; the import pylab line at the beginning of the script is the way to import matplotlib as well as a variety of other scientific tools (that aren't strictly to do with plotting per se).
44 #plot curve
45 pylab.subplot(122)
54 pylab.show()
CHAPTER
SEVEN
In 2016 we wrote a proof-of-principle that PsychoPy could generate studies for online use. In January 2018 we began
a Wellcome Trust grant to develop it fully. This is what we call PsychoPy3 - the 3rd major phase of PsychoPy’s
development.
The key steps to this are basically to:
• export your experiment to JavaScript, ready to run online
• upload it to Pavlovia.org to be launched
• distribute the web address (URL) needed to run the study
Information on how to carry out those steps is below, as well as technical information regarding the precision, how the project actually works, and the status of the work.
To create and log in to your account on Pavlovia, you will need an active internet connection. If you have not created
your account, you can either 1) go to Pavlovia and create your account, or 2) click the login button highlighted in
Figure 1, and create an account through the dialog box. Once you have an account on Pavlovia, check to see that you
are logged in via Builder by clicking button (4) highlighted below, in Figure 1.
Figure 1. PsychoPy 3 Builder icons for building and running online studies
Creating your project repository is your first step to running your experiment from Pavlovia. To create your project,
first make sure that you have an internet connection and are logged in to Pavlovia. Once you are logged in create your
project repository by syncing your project with the server using button (1) in Figure 1.
A dialog box will appear, informing you that your .psyexp file does not belong to an existing project. Click “Create
a project” if you wish to create a project, or click “Cancel” if you wish to return to your experiment in Builder. See
Figure 2.
Figure 2. The dialog that appears when an online project does not exist.
If you clicked the “Create a project” button, another window will appear. This window is designed to collect important
metadata about your project, see Figure 3 below.
Figure 3. Dialog for creating your project on Pavlovia.org
Use this window to add the information needed to store your project on Pavlovia:
Local folder: The (local) project path on your computer. Use the Browse button to find your local directory, if
required.
Description: Describe your experiment – similar to the readme files used for describing PsychoPy experiments.
Tags (comma separated): The tag will be used to filter and search for experiments by key words.
Public: Tick this box if you would like to make your repository public, for anyone to see.
When you have completed all fields in the Project window, click the "Create project on Pavlovia" button to push your experiment up to the online repository. Click "Cancel" if you wish to return to your experiment in Builder.
After you have uploaded your project to Pavlovia via Builder, you can go and have a look at your project online. To view your project, go to www.pavlovia.org. From the Pavlovia home page, you can explore your own existing projects, or other users' public projects that have been made available to all users. To find your study, click the Explore tab on the home page (see Figure 4).
If you wish to run your experiment online, in a web-browser, you have two options. You can run your experiment
directly from pavlovia.org, as described above, or you can run your experiment directly from Builder. (There is also
the option to send your experiment URL – more on that later in Recruitment Pools).
To run your experiment on Pavlovia via Builder, you must first ensure you have a valid internet connection, are logged
in, and have created a repository for your project on Pavlovia. Once you have completed these steps, simply click
button (2) in the Builder frame, as shown in Figure 1 above.
If you wish to search for your own existing projects on Pavlovia, or other users' public projects, you can do this via the Builder interface. To search for a project, click button (3) on the Builder frame in Figure 1. Following this, a search dialog will appear, see Figure 7. The search dialog presents several options that allow users to search, fork and synchronize projects.
Figure 7. The search dialog in Builder
To search for a project (see Fig 7, Box A), type in search terms in the text box and click the “Search” button to find
related projects on Pavlovia. Use the search filters (e.g., “My group”, “Public” etc) above the text box to filter the
search output. The output of your search will be listed in the search panel below the search button, where you can
select your project of interest.
To fork and sync a project is to take your own copy of a project from Pavlovia (fork) and copy a version to your
local desktop or laptop computer (sync). To fork a project, select the local folder to download the project using the
“Browse” button, and then click “Sync” when you are ready - (see Fig 7, Box B). You should now have a local copy
of the project from Pavlovia ready to run in PsychoPy!
Now you can run your synced project online from Pavlovia!
CHAPTER
EIGHT
Contents:
core.checkPygletDuringWait = False
core.wait(sec)
class psychopy.core.Clock
A convenient class to keep track of time in your experiments. You can have as many independent clocks as you
like (e.g. one to time responses, one to keep track of stimuli. . . )
This clock is identical to the MonotonicClock except that it can also be reset to 0 or another value at any
point.
add(t)
Add more time to the clock’s ‘start’ time (t0).
Note that, by adding time to t0, you make the current time appear less. Can have the effect that getTime()
returns a negative number that will gradually count back up to zero.
e.g.:
timer = core.Clock()
timer.add(5)
while timer.getTime() < 0:
    # do something
reset(newT=0.0)
Reset the time on the clock. With no args time will be set to zero. If a float is received this will be the new
time on the clock
class psychopy.core.CountdownTimer(start=0)
Similar to a Clock except that time counts down from the time of last reset
Typical usage:
timer = core.CountdownTimer(5)
while timer.getTime() > 0:  # after 5s will become negative
    # do stuff
getTime()
Returns the current time left on this timer in secs (sub-ms precision)
reset(t=None)
Reset the time on the clock. With no args time will be set to zero. If a float is received this will be the new
time on the clock
class psychopy.core.MonotonicClock(start_time=None)
A convenient class to keep track of time in your experiments using a sub-millisecond timer.
Unlike the Clock this cannot be reset to arbitrary times. For this clock t=0 always represents the time that the
clock was created.
Don't confuse this class with core.monotonicClock, which is an instance of it that got created when psychopy.core was imported. That clock instance is deliberately designed always to return the time since the start of the study.
Version Notes: This class was added in PsychoPy 1.77.00
getLastResetTime()
Returns the current offset being applied to the high resolution timebase used by Clock.
getTime(applyZero=True)
Returns the current time on this clock in secs (sub-ms precision).
If applying zero then this will be the time since the clock was created (typically the beginning of the script).
If not applying zero then it is whatever the underlying clock uses as its base time, but that is system dependent, e.g. it can be time since reboot or time since the Unix Epoch.
fixation.draw()
win.flip()
ISI = StaticPeriod(screenHz=60)
ISI.start(0.5) # start a period of 0.5s
stim.image = 'largeFile.bmp' # could take some time
ISI.complete() # finish the 0.5s, taking into account one 60Hz frame
stim.draw()
win.flip() # the period takes into account the next frame flip
# time should now be at exactly 0.5s later than when ISI.start()
# was called
Parameters
• screenHz – the frame rate of the monitor (leave as None if you don’t want this accounted
for)
• win – if a visual.Window is given then StaticPeriod will also pause/restart frame interval
recording
• name – give this StaticPeriod a name for more informative logging messages
complete()
Completes the period, using up whatever time is remaining with a call to wait()
Returns 1 for success, 0 for fail (the period overran)
start(duration)
Start the period. If this is called a second time, the timer will be reset and starts again
Parameters duration – The duration of the period, in seconds.
8.2.1 Aperture
If shape is a list or numpy array (Nx2) then it will be used directly as the vertices to a ShapeStim.
If shape is a filename then it will be used to load an image as an ImageStim. Note that transparent parts in the image (e.g. in a PNG file) will not be included in the mask shape. The color of the image will be ignored.
8.2.2 BufferImageStim
Attributes
Details
# draw stim list items & capture (slow; see EXP log for times):
screenshot = visual.BufferImageStim(myWin, stim=stimList)
See coder Demos > stimuli > bufferImageStim.py for a demo, with timing stats.
Author
• 2010 Jeremy Gray, with on-going fixes
Parameters
buffer : the screen buffer to capture from, default is ‘back’ (hidden). ‘front’ is the buffer in
view after win.flip()
rect : a list of edges [left, top, right, bottom] defining a screen rectangle which is the area to
capture from the screen, given in norm units. default is fullscreen: [-1, 1, 1, -1]
stim : a list of item(s) to be drawn to the back buffer (in order). The back buffer is first cleared
(without the win being flip()ed), then stim items are drawn, and finally the buffer (or part of
it) is captured. Each item needs to have its own .draw() method, and have the same window
as win.
interpolate : whether to use interpolation (default = True, generally good, especially if you
change the orientation)
sqPower2 :
• False (default) = use rect for size if OpenGL = 2.1+
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren't likely to be useful.
clearTextures()
Clear all textures associated with the stimulus.
As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesn’t need
calling explicitly by the user.
color
Color of the stimulus
Value should be one of:
• string: to specify a Colors by name. Any of the standard html/X11 color names
<http://www.w3schools.com/html/html_colornames.asp> can be used.
• Colors by hex value
• numerically: (scalar or triplet) for DKL, RGB or other Color spaces. For these, operations
are supported.
When color is specified using numbers, it is interpreted with respect to the stimulus’ current colorSpace.
If color is given as a single value (scalar) then this will be applied to all 3 channels.
Examples:
# ...for whatever stim you have:
stim.color = 'white'
stim.color = 'RoyalBlue'  # (the case is actually ignored)
stim.color = '#DDA0DD'  # DDA0DD is hexadecimal for plum
stim.color = [1.0, -1.0, -1.0]  # if stim.colorSpace='rgb': a red color in rgb space
stim.color = [0.0, 45.0, 1.0]  # if stim.colorSpace='dkl': DKL space with elev=0, azimuth=45
stim.color = [0, 0, 255]  # if stim.colorSpace='rgb255': a blue stimulus using rgb255 space
stim.color = 255  # interpreted as (255, 255, 255), which is white in rgb255
Operations work as normal for all numeric colorSpaces (e.g. ‘rgb’, ‘hsv’ and ‘rgb255’) but not for strings,
like named and hex. For example, assuming that colorSpace=’rgb’:
You can use setColor if you want to set color and colorSpace in one line. These two are equivalent:
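A sketch of such operations, followed by the setColor equivalence:

stim.color += [1, 1, 1]  # increment all guns by 1 value
stim.color *= -1  # multiply the color by -1 (inverts the contrast in this space)
stim.color *= [0.5, 0, 1]  # decrease red, remove green, keep blue

# setting color and colorSpace in one line...
stim.setColor((0, 128, 255), 'rgb255')
# ...is equivalent to the two separate assignments:
stim.colorSpace = 'rgb255'
stim.color = (0, 128, 255)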
colorSpace
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used
(defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to
specify colorSpace before setting the color. Example:
# An almost-black text
stim.colorSpace = 'rgb255'
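For instance, if stim.color was (0, 1, 0) in 'rgb' space, switching to 'rgb255' makes the same numbers almost black; re-specifying the color in the new space restores the intent (a sketch):

stim.color = (128, 255, 128)  # a light green, now expressed in rgb255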
Setting contrast outside the range -1 to 1 is permitted, but may produce strange results if color values exceed the monitor limits.
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None)
Draws the BufferImage on the screen, similar to ImageStim .draw(). Allows dynamic position,
size, rotation, mirroring, and opacity. Limitations / bugs: not sure what happens with shaders and
self._updateList()
flipHoriz
If set to True then the image will be flipped horizontally (left-to-right). Note that this is relative to the
original image, not relative to the current state.
flipVert
If set to True then the image will be flipped vertically (top-to-bottom). Note that this is relative to the original image, not relative to the current state.
image
The image file to be presented (most formats supported).
interpolate
Whether to interpolate (linearly) the texture in the stimulus
If set to False then nearest neighbour will be used when needed, otherwise some form of interpolation will
be used.
mask
The alpha mask that can be used to control the outer shape of the stimulus
• None, ‘circle’, ‘gauss’, ‘raisedCos’
• or the name of an image file (most formats supported)
• or a numpy array (1xN or NxN) ranging -1:1
maskParams
Various types of input. Default to None.
This is used to pass additional parameters to the mask if those are needed.
• For ‘gauss’ mask, pass dict {‘sd’: 5} to control standard deviation.
• For the ‘raisedCos’ mask, pass a dict: {‘fringeWidth’:0.2}, where ‘fringeWidth’ is a parameter
(float, 0-1), determining the proportion of the patch that will be blurred by the raised cosine
edge.
name
String or None. The name of the object to be using during logged messages about this stim. If you have
multiple stimuli in your experiment this really helps to make sense of log files!
If name = None your stimulus will be called “unnamed <type>”, e.g. visual.TextStim(win) will be called
“unnamed TextStim” in the logs.
opacity
Determines how visible the stimulus is relative to background
The value should be a single float ranging 1.0 (opaque) to 0.0 (transparent). Operations are supported.
Precisely how this is used depends on the Blend Mode.
ori
The orientation of the stimulus (in degrees).
Tip: If you need the position of stim in pixels, you can obtain it like this:
from psychopy.tools.monitorunittools import posToPix
posPix = posToPix(stim)
setAutoDraw(value, log=None)
Sets autoDraw. Usually you can use ‘stim.attribute = value’ syntax instead, but use this method to suppress
the log message.
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setColor(color, colorSpace=None, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message and/or set colorSpace simultaneously.
setContrast(newContrast, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setDKL(newDKL, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setDepth(newDepth, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setFlipHoriz(newVal=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setFlipVert(newVal=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setImage(value, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setLMS(newLMS, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setMask(value, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setOpacity(newOpacity, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setOri(newOri, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setPos(newPos, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setRGB(newRGB, operation='', log=None)
DEPRECATED since v1.60.05: Please use the color attribute
setSize(newSize, operation='', units=None, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setUseShaders(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
size
The size (width, height) of the stimulus in the stimulus units
Value should be x,y-pair, scalar (applies to both dimensions) or None (resets to default). Operations are
supported.
Sizes can be negative (causing a mirror-image reversal) and can extend beyond the window.
Example:
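A sketch:

stim.size = 0.8  # a square, 0.8 of the current units on each side
stim.size += (0.5, -0.5)  # make it wider and shorter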
Tip: you can see the actual pixel range this corresponds to by looking at stim._sizeRendered
texRes
Power-of-two int. Sets the resolution of the mask and texture. texRes is overridden if an array or image is
provided as mask.
Operations supported.
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for
explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change
appearance. Example:
# This stimulus is 20% wide and 50% tall with respect to window
stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast).
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
win
The Window object in which the stimulus will be rendered by default. (required)
Example, drawing same stimulus in two different windows and display simultaneously. Assuming that
you have two windows and a stimulus (win1, win2 and stim):
stim.draw(win1)
stim.draw(win2)
8.2.3 Circle
mro() → list
return a type’s method resolution order
8.2.4 CustomMouse
8.2.5 DotStim
8.2.6 ElementArrayStim
8.2.7 GratingStim
Attributes
GratingStim(win[, tex, mask, units, pos, . . . ]): Stimulus object for drawing arbitrary bitmaps that can repeat (cycle) in either dimension.
GratingStim.win: The Window object in which the stimulus will be rendered by default.
GratingStim.tex: Texture to be used on the stimulus as a grating (aka carrier).
GratingStim.mask: The alpha mask (forming the shape of the image).
GratingStim.units: None, 'norm', 'cm', 'deg', 'degFlat', 'degFlatPos', or 'pix'.
GratingStim.sf: Spatial frequency of the grating texture.
GratingStim.pos: The position of the center of the stimulus in the stimulus units.
GratingStim.ori: The orientation of the stimulus (in degrees).
GratingStim.size: The size (width, height) of the stimulus in the stimulus units.
GratingStim.contrast: A value that is simply multiplied by the color.
GratingStim.color: Color of the stimulus.
GratingStim.colorSpace: The name of the color space currently being used.
GratingStim.opacity: Determines how visible the stimulus is relative to background.
GratingStim.interpolate: Whether to interpolate (linearly) the texture in the stimulus.
GratingStim.texRes: Power-of-two int.
GratingStim.name: String or None.
GratingStim.autoLog: Whether every change in this stimulus should be auto logged.
GratingStim.draw([win]): Draw the stimulus in its relevant window.
GratingStim.autoDraw: Determines whether the stimulus should be automatically drawn on every frame flip.
Details
Examples:
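A couple of minimal sketches (win is an existing psychopy.visual.Window):

myGrat = visual.GratingStim(win, tex='sin', mask='circle')  # circular grating
myGabor = visual.GratingStim(win, tex='sin', mask='gauss')  # gives a 'Gabor'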
A GratingStim can be rotated scaled and shifted in position, its texture can be drifted in X and/or Y and it can
have a spatial frequency in X and/or Y (for an image file that simply draws multiple copies in the patch).
Also since transparency can be controlled two GratingStims can combine e.g. to form a plaid.
Using GratingStim with images from disk (jpg, tif, png, . . . )
Ideally texture images to be rendered should be square with ‘power-of-2’ dimensions e.g. 16 x 16, 128 x 128.
Any image that is not will be upscaled (with linear interpolation) to the nearest such texture by PsychoPy. The
size of the stimulus should be specified in the normal way using the appropriate units (deg, pix, cm, . . . ). Be
sure to get the aspect ratio the same as the image (if you don’t want it stretched!).
_calcPosRendered()
DEPRECATED in 1.80.00. This functionality is now handled by _updateVertices() and verticesPix.
_calcSizeRendered()
DEPRECATED in 1.80.00. This functionality is now handled by _updateVertices() and verticesPix
_createTexture(tex, id, pixFormat, stim, res=128, maskParams=None, forcePOW2=True,
dataType=None, wrapping=True)
Params
id: the texture ID
pixFormat: GL.GL_ALPHA, GL.GL_RGB
useShaders: bool
interpolate: bool (determines whether texture will use GL_LINEAR or GL_NEAREST)
res: the resolution of the texture (unless a bitmap image is used)
dataType: None, GL.GL_UNSIGNED_BYTE, GL_FLOAT. Only affects image files (numpy arrays will be float)
For grating stimuli (anything that needs multiple cycles) forcePOW2 should be set to be True. Otherwise
the wrapping of the texture will not work.
_getDesiredRGB(rgb, colorSpace, contrast)
Convert color to RGB while adding contrast. Requires self.rgb, self.colorSpace and self.contrast
_getPolyAsRendered()
DEPRECATED. Return a list of vertices as rendered.
_selectWindow(win)
Switch drawing to the specified window. Calls the window’s _setCurrent() method which handles the
switch.
_set(attrib, val, op='', log=None)
DEPRECATED since 1.80.04 + 1. Use setAttribute() and val2array() instead.
_updateList()
The user shouldn't need this method since it gets called after every call to .set(). Chooses between using and not using shaders each call.
_updateListNoShaders()
The user shouldn't need this method since it gets called after every call to .set(). Basically it updates the OpenGL representation of your stimulus if some parameter of the stimulus changes. Call it if you change a property manually rather than using the .set() command.
_updateListShaders()
The user shouldn't need this method since it gets called after every call to .set(). Basically it updates the OpenGL representation of your stimulus if some parameter of the stimulus changes. Call it if you change a property manually rather than using the .set() command.
_updateVertices()
Sets Stim.verticesPix and ._borderPix from pos, size, ori, flipVert, flipHoriz
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren't likely to be useful.
blendmode
The OpenGL mode in which the stimulus is drawn
Can be 'avg' or 'add'. Average (avg) places the new stimulus over the old one with a transparency given by its opacity. Opaque stimuli will hide other stimuli; transparent stimuli won't. Add performs the arithmetic sum of the new stimulus and the ones already present.
clearTextures()
Clear all textures associated with the stimulus.
As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesn’t need
calling explicitly by the user.
color
Color of the stimulus
Value should be one of:
• string: to specify a Colors by name. Any of the standard html/X11 color names
<http://www.w3schools.com/html/html_colornames.asp> can be used.
• Colors by hex value
• numerically: (scalar or triplet) for DKL, RGB or other Color spaces. For these, operations
are supported.
When color is specified using numbers, it is interpreted with respect to the stimulus’ current colorSpace.
If color is given as a single value (scalar) then this will be applied to all 3 channels.
Examples:
# ...for whatever stim you have:
stim.color = 'white'
stim.color = 'RoyalBlue'  # (the case is actually ignored)
stim.color = '#DDA0DD'  # DDA0DD is hexadecimal for plum
stim.color = [1.0, -1.0, -1.0]  # if stim.colorSpace='rgb': a red color in rgb space
stim.color = [0.0, 45.0, 1.0]  # if stim.colorSpace='dkl': DKL space with elev=0, azimuth=45
stim.color = [0, 0, 255]  # if stim.colorSpace='rgb255': a blue stimulus using rgb255 space
Operations work as normal for all numeric colorSpaces (e.g. ‘rgb’, ‘hsv’ and ‘rgb255’) but not for strings,
like named and hex. For example, assuming that colorSpace=’rgb’:
You can use setColor if you want to set color and colorSpace in one line. These two are equivalent:
colorSpace
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used
(defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to
specify colorSpace before setting the color. Example:
# An almost-black text
stim.colorSpace = 'rgb255'
contrast
A value that is simply multiplied by the color.
Value should be: a float between -1 (negative) and 1 (unchanged). Operations supported.
Set the contrast of the stimulus, i.e. scales how far the stimulus deviates from the middle grey. You can also use the stimulus opacity to control contrast, but that cannot be negative.
Examples:
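A sketch:

stim.contrast = 1.0  # unchanged contrast
stim.contrast = 0.5  # decrease contrast
stim.contrast = 0.0  # uniform, no contrast
stim.contrast = -0.5  # slightly inverted
stim.contrast = -1.0  # totally inverted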
Setting contrast outside the range -1 to 1 is permitted, but may produce strange results if color values exceed the monitor limits.
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None)
Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want
the stimulus to appear on that frame and then update the screen again.
interpolate
Whether to interpolate (linearly) the texture in the stimulus
If set to False then nearest neighbour will be used when needed, otherwise some form of interpolation will
be used.
mask
The alpha mask (forming the shape of the image)
This can be one of various options:
• ‘circle’, ‘gauss’, ‘raisedCos’, ‘cross’
• None (resets to default)
• the name of an image file (most formats supported)
• a numpy array (1xN or NxN) ranging -1:1
maskParams
Various types of input. Default to None.
This is used to pass additional parameters to the mask if those are needed.
• For ‘gauss’ mask, pass dict {‘sd’: 5} to control standard deviation.
• For the ‘raisedCos’ mask, pass a dict: {‘fringeWidth’:0.2}, where ‘fringeWidth’ is a parameter
(float, 0-1), determining the proportion of the patch that will be blurred by the raised cosine
edge.
name
String or None. The name of the object to be using during logged messages about this stim. If you have
multiple stimuli in your experiment this really helps to make sense of log files!
If name = None your stimulus will be called “unnamed <type>”, e.g. visual.TextStim(win) will be called
“unnamed TextStim” in the logs.
opacity
Determines how visible the stimulus is relative to background
The value should be a single float ranging 1.0 (opaque) to 0.0 (transparent). Operations are supported.
Precisely how this is used depends on the Blend Mode.
ori
The orientation of the stimulus (in degrees).
Should be a single value (scalar). Operations are supported.
Orientation convention is like a clock: 0 is vertical, and positive values rotate clockwise. Beyond 360 and
below zero values wrap appropriately.
overlaps(polygon)
Returns True if this stimulus intersects another one.
If polygon is another stimulus instance, then the vertices and location of that stimulus will be used as the
polygon. Overlap detection is typically very good, but it can fail with very pointy shapes in a crossed-
swords configuration.
Note that, if your stimulus uses a mask (such as a Gaussian blob) then this is not accounted for by the
overlaps method; the extent of the stimulus is determined purely by the size, pos, and orientation settings
(and by the vertices for shape stimuli).
See coder demo, shapeContains.py
phase
Phase of the stimulus in each dimension of the texture.
Should be an x,y-pair or scalar
NB phase has modulus 1 (rather than 360 or 2*pi). This is a little unconventional, but has the nice effect that setting phase=t*n drifts a stimulus at n Hz.
pos
The position of the center of the stimulus in the stimulus units
value should be an x,y-pair. Operations are also supported.
Example:
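A sketch:

stim.pos = (0.5, 0)  # set slightly to the right of center
stim.pos += (0.5, -1)  # increment pos rightwards and downwards
stim.pos *= 0.2  # move stim towards the center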
Tip: If you need the position of stim in pixels, you can obtain it like this:
from psychopy.tools.monitorunittools import posToPix
posPix = posToPix(stim)
setAutoDraw(value, log=None)
Sets autoDraw. Usually you can use ‘stim.attribute = value’ syntax instead, but use this method to suppress
the log message.
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setBlendmode(value, log=None)
DEPRECATED. Use ‘stim.parameter = value’ syntax instead
sf
Spatial frequency of the grating texture
If units == 'norm' then sf units are in cycles per stimulus (and so SF scales with stimulus size).
If texture is an image loaded from a file then sf=None defaults to 1/stimSize to give one cycle of the image.
size
The size (width, height) of the stimulus in the stimulus units
Value should be x,y-pair, scalar (applies to both dimensions) or None (resets to default). Operations are
supported.
Sizes can be negative (causing a mirror-image reversal) and can extend beyond the window.
Example:
Tip: you can see the actual pixel range this corresponds to by looking at stim._sizeRendered
tex
Texture to be used on the stimulus as a grating (aka carrier)
This can be one of various options:
• ‘sin’,’sqr’, ‘saw’, ‘tri’, None (resets to default)
• the name of an image file (most formats supported)
• a numpy array (1xN or NxN) ranging -1:1
If specifying your own texture using an image or numpy array you should ensure that the image has square power-of-two dimensions (e.g. 256 x 256). If not then PsychoPy will upsample your stimulus to the next larger power of two.
texRes
Power-of-two int. Sets the resolution of the mask and texture. texRes is overridden if an array or image is
provided as mask.
Operations supported.
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for
explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change
appearance. Example:
# This stimulus is 20% wide and 50% tall with respect to window
stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast).
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
win
The Window object in which the stimulus will be rendered by default. (required)
Example, drawing same stimulus in two different windows and display simultaneously. Assuming that
you have two windows and a stimulus (win1, win2 and stim):
stim.draw(win1)
stim.draw(win2)
psychopy.visual.helpers.pointInPolygon(x, y, poly)
Determine if a point is inside a polygon; returns True if inside.
(x, y) is the point to test. poly is a list of 3 or more vertices as (x,y) pairs. If given an object, such as a ShapeStim,
will try to use its vertices and position as the polygon.
Same as the .contains() method elsewhere.
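A minimal sketch:

from psychopy.visual.helpers import pointInPolygon
square = [(-1, -1), (-1, 1), (1, 1), (1, -1)]
print(pointInPolygon(0, 0, square))  # True: the origin lies inside the square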
psychopy.visual.helpers.polygonsOverlap(poly1, poly2)
Determine if two polygons intersect; can fail for very pointy polygons.
Accepts two polygons, as lists of vertices (x,y) pairs. If given an object with (vertices + pos), will try to use that as the polygon.
Checks if any vertex of one polygon is inside the other polygon. Same as the .overlaps() method elsewhere.
Notes
We implement special handling for the Line stimulus as it is not a proper polygon. We do not check for class
instances because this would require importing of visual.Line, creating a circular import. Instead, we assume
that a “polygon” with only two vertices is meant to specify a line. Pixels between the endpoints get interpolated
before testing for overlap.
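A minimal sketch:

from psychopy.visual.helpers import polygonsOverlap
tri = [(0, 0), (2, 0), (1, 2)]
square = [(1, 1), (3, 1), (3, 3), (1, 3)]
print(polygonsOverlap(tri, square))  # True: the shapes intersect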
psychopy.visual.helpers.groupFlipVert(flipList, yReflect=0)
Reverses the vertical mirroring of all items in list flipList.
Reverses the .flipVert status, vertical (y) positions, and angular rotation (.ori). Flipping preserves the relations
among the group’s visual elements. The parameter yReflect is the y-value of an imaginary horizontal line
around which to reflect the items; default = 0 (screen center).
Typical usage is to call once prior to any display; call again to un-flip. Can be called with a list of all stim to be
presented in a given routine.
Will flip a) all psychopy.visual.xyzStim that have a setFlipVert method, b) the y values of .vertices, and c) items
in n x 2 lists that are mutable (i.e., list, np.array, no tuples): [[x1, y1], [x2, y2], . . . ]
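A sketch of typical usage (stimList is a list of already-created stimuli):

from psychopy.visual.helpers import groupFlipVert
groupFlipVert(stimList)  # flip everything for this routine
# ...draw the routine...
groupFlipVert(stimList)  # call again to un-flip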
8.2.9 ImageStim
As of PsychoPy version 1.79.00 some of the properties for this stimulus can be set using the syntax:
stim.pos = newPos
stim.setImage(newImage)
Attributes
Details
_updateVertices()
Sets Stim.verticesPix and ._borderPix from pos, size, ori, flipVert, flipHoriz
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren't likely to be useful.
clearTextures()
Clear all textures associated with the stimulus.
As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesn’t need
calling explicitly by the user.
color
Color of the stimulus
Value should be one of:
• string: to specify a Colors by name. Any of the standard html/X11 color names
<http://www.w3schools.com/html/html_colornames.asp> can be used.
• Colors by hex value
• numerically: (scalar or triplet) for DKL, RGB or other Color spaces. For these, operations
are supported.
When color is specified using numbers, it is interpreted with respect to the stimulus’ current colorSpace.
If color is given as a single value (scalar) then this will be applied to all 3 channels.
Examples:
# ...for whatever stim you have:
stim.color = 'white'
stim.color = 'RoyalBlue'  # (the case is actually ignored)
stim.color = '#DDA0DD'  # DDA0DD is hexadecimal for plum
stim.color = [1.0, -1.0, -1.0]  # if stim.colorSpace='rgb': a red color in rgb space
stim.color = [0.0, 45.0, 1.0]  # if stim.colorSpace='dkl': DKL space with elev=0, azimuth=45
stim.color = [0, 0, 255]  # if stim.colorSpace='rgb255': a blue stimulus using rgb255 space
stim.color = 255  # interpreted as (255, 255, 255), which is white in rgb255
Operations work as normal for all numeric colorSpaces (e.g. ‘rgb’, ‘hsv’ and ‘rgb255’) but not for strings,
like named and hex. For example, assuming that colorSpace=’rgb’:
You can use setColor if you want to set color and colorSpace in one line. These two are equivalent:
colorSpace
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used
(defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to
specify colorSpace before setting the color. Example:
# An almost-black text
stim.colorSpace = 'rgb255'
Setting contrast outside the range -1 to 1 is permitted, but may produce strange results if color values exceed the monitor limits.
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None)
Draw.
image
The image file to be presented (most formats supported).
interpolate
Whether to interpolate (linearly) the texture in the stimulus
If set to False then nearest neighbour will be used when needed, otherwise some form of interpolation will
be used.
mask
The alpha mask that can be used to control the outer shape of the stimulus
• None, ‘circle’, ‘gauss’, ‘raisedCos’
• or the name of an image file (most formats supported)
• or a numpy array (1xN or NxN) ranging -1:1
maskParams
Various types of input. Default to None.
This is used to pass additional parameters to the mask if those are needed.
• For ‘gauss’ mask, pass dict {‘sd’: 5} to control standard deviation.
• For the ‘raisedCos’ mask, pass a dict: {‘fringeWidth’:0.2}, where ‘fringeWidth’ is a parameter
(float, 0-1), determining the proportion of the patch that will be blurred by the raised cosine
edge.
name
String or None. The name of the object to be using during logged messages about this stim. If you have
multiple stimuli in your experiment this really helps to make sense of log files!
If name = None your stimulus will be called “unnamed <type>”, e.g. visual.TextStim(win) will be called
“unnamed TextStim” in the logs.
opacity
Determines how visible the stimulus is relative to background
The value should be a single float ranging from 1.0 (opaque) to 0.0 (transparent). Operations are supported.
Precisely how this is used depends on the Blend Mode.
ori
The orientation of the stimulus (in degrees).
Should be a single value (scalar). Operations are supported.
Orientation convention is like a clock: 0 is vertical, and positive values rotate clockwise. Beyond 360 and
below zero values wrap appropriately.
overlaps(polygon)
Returns True if this stimulus intersects another one.
If polygon is another stimulus instance, then the vertices and location of that stimulus will be used as the
polygon. Overlap detection is typically very good, but it can fail with very pointy shapes in a crossed-
swords configuration.
Note that, if your stimulus uses a mask (such as a Gaussian blob) then this is not accounted for by the
overlaps method; the extent of the stimulus is determined purely by the size, pos, and orientation settings
(and by the vertices for shape stimuli).
See coder demo, shapeContains.py
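For example (a sketch; stimA and stimB are assumed to be two existing stimuli in the same window):

    if stimA.overlaps(stimB):
        stimA.opacity = 0.5  # e.g. dim one stimulus while the two intersect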
pos
The position of the center of the stimulus in the stimulus units
value should be an x,y-pair. Operations are also supported.
Example:

    stim.pos = (0.5, 0)  # Set slightly to the right of center
    stim.pos += (0.5, -1)  # Increment pos rightwards and upwards. Is now (1.0, -1.0)
    stim.pos *= 0.2  # Move stim towards the center. Is now (0.2, -0.2)

Tip: If you need the position of stim in pixels, you can obtain it like this:

    from psychopy.tools.monitorunittools import posToPix
    posPix = posToPix(stim)
setAutoDraw(value, log=None)
Sets autoDraw. Usually you can use ‘stim.attribute = value’ syntax instead, but use this method to suppress
the log message.
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setColor(color, colorSpace=None, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message and/or set colorSpace simultaneously.
setContrast(newContrast, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setDKL(newDKL, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setDepth(newDepth, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setImage(value, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
setLMS(newLMS, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setMask(value, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
Tip: you can see the actual pixel range this corresponds to by looking at stim._sizeRendered
texRes
Power-of-two int. Sets the resolution of the mask and texture. texRes is overridden if an array or image is
provided as mask.
Operations supported.
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for
explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change
appearance. Example:
# This stimulus is 20% wide and 50% tall with respect to window
stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast)
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
win
The Window object in which the stimulus will be rendered by default. (required)
Example: drawing the same stimulus in two different windows and displaying them simultaneously. Assuming that you have two windows and a stimulus (win1, win2 and stim):
stim.draw(win1)
stim.draw(win2)
8.2.10 Line
8.2.11 MovieStim
Attributes
MovieStim(win[, filename, units, size, pos, . . . ]) A stimulus class for playing movies (mpeg, avi, etc.) in PsychoPy.
MovieStim.win The Window object in which the stimulus will be rendered by default.
MovieStim.units None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
Details
_calcPosRendered()
DEPRECATED in 1.80.00. This functionality is now handled by _updateVertices() and verticesPix.
_calcSizeRendered()
DEPRECATED in 1.80.00. This functionality is now handled by _updateVertices() and verticesPix
_getPolyAsRendered()
DEPRECATED. Return a list of vertices as rendered.
_selectWindow(win)
Switch drawing to the specified window. Calls the window’s _setCurrent() method which handles the
switch.
_set(attrib, val, op='', log=None)
DEPRECATED since 1.80.04 + 1. Use setAttribute() and val2array() instead.
_updateList()
The user shouldn’t need this method since it gets called after every call to .set() Chooses between using
and not using shaders each call.
_updateVertices()
Sets Stim.verticesPix and ._borderPix from pos, size, ori, flipVert, flipHoriz
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren’t likely to be useful.
contains(x, y=None, units=None)
Returns True if a point x,y is inside the stimulus’ border.
Can accept a variety of input options:
• two separate args, x and y
• one arg (list, tuple or array) containing two vals (x,y)
• an object with a getPos() method that returns x,y, such as a Mouse.
Returns True if the point is within the area defined either by its border attribute (if one defined), or its
vertices attribute if there is no .border. This method handles complex shapes, including concavities and
self-crossings.
Note that, if your stimulus uses a mask (such as a Gaussian) then this is not accounted for by the contains
method; the extent of the stimulus is determined purely by the size, position (pos), and orientation (ori)
settings (and by the vertices for shape stimuli).
See Coder demos: shapeContains.py
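A short sketch of mouse-based use (assuming an existing Window win and stimulus stim):

    from psychopy import event

    mouse = event.Mouse(win=win)
    if stim.contains(mouse):  # a Mouse has getPos(), so it can be passed directly
        print('pointer is inside the stimulus')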
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None)
Draw the current frame to a particular visual.Window.
Draw to the default win for this object if not specified. The current position in the movie will be determined
automatically.
This method should be called on every frame that the movie is meant to appear.
loadMovie(filename, log=None)
Load a movie from file
Parameters
filename: string The name of the file, including path if necessary
Brings up a warning if avbin is not found on the computer. After the file is loaded MovieStim.duration is
updated with the movie duration (in seconds).
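A minimal playback sketch (the file name 'testMovie.mp4' is hypothetical):

    from psychopy import visual
    from psychopy.constants import FINISHED

    win = visual.Window(size=(800, 600))
    mov = visual.MovieStim(win, 'testMovie.mp4')
    while mov.status != FINISHED:
        mov.draw()  # must be called on every frame the movie should appear
        win.flip()
    win.close()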
name
String or None. The name of the object to be used during logged messages about this stim. If you have multiple stimuli in your experiment this really helps to make sense of log files!
If name = None your stimulus will be called “unnamed <type>”, e.g. visual.TextStim(win) will be called
“unnamed TextStim” in the logs.
opacity
Determines how visible the stimulus is relative to background
The value should be a single float ranging from 1.0 (opaque) to 0.0 (transparent). Operations are supported.
Precisely how this is used depends on the Blend Mode.
ori
The orientation of the stimulus (in degrees).
Should be a single value (scalar). Operations are supported.
Orientation convention is like a clock: 0 is vertical, and positive values rotate clockwise. Beyond 360 and
below zero values wrap appropriately.
overlaps(polygon)
Returns True if this stimulus intersects another one.
If polygon is another stimulus instance, then the vertices and location of that stimulus will be used as the
polygon. Overlap detection is typically very good, but it can fail with very pointy shapes in a crossed-
swords configuration.
Note that, if your stimulus uses a mask (such as a Gaussian blob) then this is not accounted for by the
overlaps method; the extent of the stimulus is determined purely by the size, pos, and orientation settings
(and by the vertices for shape stimuli).
See coder demo, shapeContains.py
pause(log=None)
Pause the current point in the movie (sound will stop, current frame will not advance). If play() is called
again both will restart.
play(log=None)
Continue a paused movie from current position.
pos
The position of the center of the stimulus in the stimulus units
value should be an x,y-pair. Operations are also supported.
Example:

    stim.pos = (0.5, 0)  # Set slightly to the right of center
    stim.pos += (0.5, -1)  # Increment pos rightwards and upwards. Is now (1.0, -1.0)
    stim.pos *= 0.2  # Move stim towards the center. Is now (0.2, -0.2)

Tip: If you need the position of stim in pixels, you can obtain it like this:

    from psychopy.tools.monitorunittools import posToPix
    posPix = posToPix(stim)
seek(timestamp, log=None)
Seek to a particular timestamp in the movie.
NB: this does not seem very robust as of version 1.62 and may crash!
setAutoDraw(val, log=None)
Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip
Parameters
• val: True/False True to add the stimulus to the draw list, False to remove it
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setContrast()
Not yet implemented for MovieStim.
setDKL(newDKL, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setDepth(newDepth, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setFlipHoriz(newVal=True, log=None)
If set to True then the movie will be flipped horizontally (left-to-right). Note that this is relative to the
original, not relative to the current state.
setFlipVert(newVal=True, log=None)
If set to True then the movie will be flipped vertically (top-to-bottom). Note that this is relative to the
original, not relative to the current state.
setLMS(newLMS, operation=”)
DEPRECATED since v1.60.05: Please use the color attribute
setMovie(filename, log=None)
See MovieStim.loadMovie (the functions are identical). This form is provided for syntactic consistency with other visual stimuli.
setOpacity(newOpacity, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setOri(newOri, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setPos(newPos, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
Tip: you can see the actual pixel range this corresponds to by looking at stim._sizeRendered
stop(log=None)
Stop the movie at the current point.
The sound will stop and the current frame will not advance. Once stopped the movie cannot be restarted - it must be loaded again. Use pause() if you may need to restart the movie.
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for
explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change
appearance. Example:
# This stimulus is 20% wide and 50% tall with respect to window
stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast)
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
win
The Window object in which the stimulus will be rendered by default. (required)
Example, drawing same stimulus in two different windows and display simultaneously. Assuming that
you have two windows and a stimulus (win1, win2 and stim):
stim.draw(win1)
stim.draw(win2)
8.2.12 NoiseStim
Attributes
Details
noiseClip The values in normally distributed noise are divided by noiseClip to limit excessively high or low values. However, values can still go outside the range -1 to 1, which will throw a soft error message; high values of noiseClip are recommended if using ‘Normal’.
Gabor, Isotropic - Effectively a dense scattering of Gabor elements with random amplitude and fixed orientation for Gabor noise, or random orientation for Isotropic noise.
Parameters:
• noiseBaseSf - centre spatial frequency in the component units.
• noiseBW - spatial frequency bandwidth, full width at half height, in octaves.
• ori - centre orientation for Gabor noise (works as for GratingStim, so twists the final image at render time).
• noiseBWO - orientation bandwidth for Gabor noise, full width at half height, in degrees.
• noiseOri - alternative centre orientation for Gabor noise, which sets the orientation during the image build rather than at render time; useful for setting the orientation of a filter to be applied to some other noise type with a different base orientation.
In practice the desired amplitude spectrum for the noise is built in Fourier space with a random phase spectrum. The DC term is set to zero, i.e. zero mean.
Filtered - A white noise sample that has been filtered with a low, high or bandpass Butterworth filter. The initial sample can be given a spatial frequency bias via the noiseFractalPower parameter (see below). The contrast of the noise falls by half its maximum (3dB) at the cutoff frequencies.
Parameters:
• noiseFilterUpper - upper cutoff frequency; if greater than texRes/2 cycles per image, a low pass filter is used.
• noiseFilterLower - lower cutoff frequency; if zero, a low pass filter is used.
• noiseFilterOrder - the order of the filter, which controls the steepness of the falloff outside the passband; if zero, no filter is applied.
• noiseFractalPower - spectrum = f^noiseFractalPower; determines the spatial frequency bias of the initial noise sample. 0 = flat spectrum, negative = low frequency bias, positive = high frequency bias, -1 = fractal or brownian noise.
• noiseClip - determines clipping values and rescaling factor such that final rms contrast is close to that requested by the contrast parameter while keeping pixel values in the range -1, 1.
White - A shortcut to obtain noise with a flat, unfiltered spectrum.
• noiseClip - determines clipping values and rescaling factor such that final rms contrast is close to that requested by the contrast parameter while keeping pixel values in the range -1, 1.
In practice the desired amplitude spectrum is built in the Fourier domain with a random phase spectrum. The DC term is set to zero, i.e. zero mean. Note that despite the name, the noise contains all grey levels.
Image - A noise sample whose spatial frequency spectrum is taken from the supplied image.
Parameters:
• noiseImage - name of a numpy array or image file from which to take the spectrum; should be the same size as the largest dimension of the noise sample requested.
• imageComponent - ‘Phase’ randomizes the phase spectrum leaving the amplitude spectrum untouched. ‘Amplitude’ randomizes the amplitude spectrum leaving the phase spectrum untouched - retains the spatial structure of the image. ‘Neither’ keeps the image as is - but you can now apply a spatial filter to the image.
• noiseClip - determines clipping values and rescaling factor such that final rms contrast is close to that requested by the contrast parameter while keeping pixel values in the range -1, 1.
In practice the desired amplitude spectrum is taken from the image and paired with a random phase spectrum. The DC term is set to zero, i.e. zero mean.
Filter parameter - If the filter parameter = Butterworth then a spectral filter defined by the filtered noise parameters will be applied to the other noise types. If the filter parameter = Gabor then a spectral filter defined by the Gabor noise parameters will be applied to the other noise types. If the filter parameter = Isotropic then a spectral filter defined by the Isotropic noise parameters will be applied to the other noise types.
Updating noise samples and timing The noise is rebuilt at the next call of the draw function whenever a parameter starting ‘noise’ is notionally changed, even if the value does not actually change every time (e.g. setting a parameter to update every frame will cause a new noise sample on every frame, but see below). A rebuild can also be forced at any time using the buildNoise() function. The updateNoise() function can be used at any time to produce a new random sample of noise without doing a full build, i.e. it is quicker than a full build. Both buildNoise and updateNoise can be slow for large samples. Samples of Binary, Normal or Uniform noise can usually be made at frame rate using updateNoise. Updating or building other noise types at frame rate may result in dropped frames. An alternative is to build a large sample of noise at the start of the routine and place it off the screen, then cut samples out of it at random locations and feed them as a numpy array into the texture of a visible GratingStim.
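For example, Binary noise might be refreshed on every frame like this (a sketch, assuming an open Window win; the parameter values are illustrative):

    from psychopy import visual

    noise = visual.NoiseStim(win, noiseType='Binary', noiseElementSize=4,
                             size=(256, 256), units='pix')
    for frameN in range(60):
        noise.updateNoise()  # new random sample without a full rebuild
        noise.draw()
        win.flip()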
Notes on size If units = pix and noiseType = Binary, Normal or Uniform, a noise sample of the requested size is made. If units = pix and noiseType is Gabor, Isotropic, Filtered, White, Coloured or Image, a square noise sample is made with side length equal to the largest dimension requested. If units is not pix, a square noise sample is made with side length equal to texRes, then rescaled for presentation.
Notes on interpolation For pixel based noise interpolation = nearest is usually best. For other noise types linear
is better if the size of the noise sample does not match the final size of the image well.
Notes on frequency Frequencies for cutoffs etc. are converted between units for you but can be counterintuitive. 1/size is always 1 cycle per image. For the sf (final spatial frequency) parameter itself, 1/size (or None for units pix) will faithfully represent the image without further scaling.
Filter cutoff and Gabor/Isotropic base frequencies should not be too high; you should aim to keep them below 0.5 c/pixel on the screen. The function will produce an error when it can’t draw the stimulus in the buffer, but it may still be wrong when displayed.
Notes on orientation and phase The ori parameter twists the final image, so the samples in noiseType Binary, Normal or Uniform will no longer be aligned to the sides of the monitor if ori is not a multiple of 90. Most other noise types look broadly the same for all values of ori, but the specific sample shown can be made to rotate by changing ori. The dominant orientation for Gabor noise is determined by ori at render time, not before. The phase parameter similarly shifts the sample around within the display window at render time and will not choose new random phases for the noise sample.
mro() → list
Return a type’s method resolution order.
8.2.14 Polygon
mro() → list
Return a type’s method resolution order.
8.2.15 RadialStim
Attributes
RadialStim(win[, tex, mask, units, pos, . . . ]) Stimulus object for drawing radial stimuli.
RadialStim.win The Window object in which the stimulus will be rendered by default.
RadialStim.tex Texture to be used on the stimulus as a grating (aka carrier)
RadialStim.mask The alpha mask that forms the shape of the resulting image.
RadialStim.units None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
RadialStim.pos The position of the center of the stimulus in the stimulus units
RadialStim.ori The orientation of the stimulus (in degrees).
RadialStim.size The size (width, height) of the stimulus in the stimulus units
RadialStim.contrast A value that is simply multiplied by the color
RadialStim.color Color of the stimulus
RadialStim.colorSpace The name of the color space currently being used
RadialStim.opacity Determines how visible the stimulus is relative to background
RadialStim.interpolate Whether to interpolate (linearly) the texture in the stimulus
RadialStim.setAngularCycles(value[, . . . ]) Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
RadialStim.setAngularPhase(value[, . . . ]) Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
RadialStim.setRadialCycles(value[, . . . ]) Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
RadialStim.setRadialPhase(value[, . . . ]) Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
RadialStim.name String or None.
RadialStim.autoLog Whether every change in this stimulus should be auto logged.
RadialStim.draw([win]) Draw the stimulus in its relevant window.
RadialStim.autoDraw Determines whether the stimulus should be automatically drawn on every frame flip.
RadialStim.clearTextures() Clear all textures associated with the stimulus.
Details
_updateEverything()
Internal helper function for angularRes and visibleWedge (and init)
_updateList()
The user shouldn’t need this method since it gets called after every call to .set() Chooses between using
and not using shaders each call.
_updateListNoShaders()
The user shouldn’t need this method since it gets called after every call to .set() Basically it updates the
OpenGL representation of your stimulus if some parameter of the stimulus changes. Call it if you change
a property manually rather than using the .set() command
_updateListShaders()
The user shouldn’t need this method since it gets called after every call to .set() Basically it updates the
OpenGL representation of your stimulus if some parameter of the stimulus changes. Call it if you change
a property manually rather than using the .set() command
_updateMaskCoords()
calculate mask coords
_updateTextureCoords()
calculate texture coordinates if angularCycles or Phase change
_updateVertices()
Sets Stim.verticesPix and ._borderPix from pos, size, ori, flipVert, flipHoriz
_updateVerticesBase()
Update the base vertices if angular resolution changes.
These will be multiplied by the size and rotation matrix before rendering.
angularCycles
Float (but Int is prettiest). Set the number of cycles going around the stimulus. i.e. it controls the number
of ‘spokes’.
Operations supported.
angularPhase
Float. Set the angular phase (like orientation) of the texture (wraps 0-1).
This is akin to setting the orientation of the texture around the stimulus in radians. If possible, it is more
efficient to rotate the stimulus using its ori setting instead.
Operations supported.
angularRes
The number of triangles used to make the stimulus.
Operations supported.
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren’t likely to be useful.
blendmode
The OpenGL mode in which the stimulus is drawn
Can be ‘avg’ or ‘add’. Average (avg) places the new stimulus over the old one with a transparency given by its opacity. Opaque stimuli will hide other stimuli; transparent stimuli won’t. Add performs the arithmetic sum of the new stimulus and the ones already present.
clearTextures()
Clear all textures associated with the stimulus.
As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesn’t need
calling explicitly by the user.
color
Color of the stimulus
Value should be one of:
• string: to specify a Colors by name. Any of the standard html/X11 color names
<http://www.w3schools.com/html/html_colornames.asp> can be used.
• Colors by hex value
• numerically: (scalar or triplet) for DKL, RGB or other Color spaces. For these, operations
are supported.
When color is specified using numbers, it is interpreted with respect to the stimulus’ current colorSpace.
If color is given as a single value (scalar) then this will be applied to all 3 channels.
Examples:

    # ... for whatever stim you have:
    stim.color = 'white'
    stim.color = 'RoyalBlue'  # (the case is actually ignored)
    stim.color = '#DDA0DD'  # DDA0DD is hexadecimal for plum
    stim.color = [1.0, -1.0, -1.0]  # if stim.colorSpace='rgb': a red color in rgb space
    stim.color = [0.0, 45.0, 1.0]  # if stim.colorSpace='dkl': DKL space with elev=0, azimuth=45
    stim.color = [0, 0, 255]  # if stim.colorSpace='rgb255': a blue stimulus using rgb255 space
    stim.color = 255  # interpreted as (255, 255, 255), which is white in rgb255
Operations work as normal for all numeric colorSpaces (e.g. ‘rgb’, ‘hsv’ and ‘rgb255’) but not for strings,
like named and hex. For example, assuming that colorSpace=’rgb’:
stim.color += [1, 1, 1] # increment all guns by 1 value
stim.color *= -1 # multiply the color by -1 (which in this
# space inverts the contrast)
stim.color *= [0.5, 0, 1] # decrease red, remove green, keep blue
You can use setColor if you want to set color and colorSpace in one line. These two are equivalent:
stim.setColor((0, 128, 255), 'rgb255')
# ... is equivalent to
stim.colorSpace = 'rgb255'
stim.color = (0, 128, 255)
colorSpace
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to specify colorSpace before setting the color. Example:

    # A light green text
    stim = visual.TextStim(win, 'Color me!', color=(0, 1, 0), colorSpace='rgb')
    # An almost-black text
    stim.colorSpace = 'rgb255'

Setting contrast outside the range -1 to 1 is permitted, but may produce strange results if color values exceed the monitor limits.
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None)
Draw the stimulus in its relevant window. You must call this method after every win.flip() if you want the
stimulus to appear on that frame and then update the screen again.
Tip: If you need the position of stim in pixels, you can obtain it like this:

    from psychopy.tools.monitorunittools import posToPix
    posPix = posToPix(stim)
radialCycles
Float (but Int is prettiest). Set the number of texture cycles from centre to periphery, i.e. it controls the
number of ‘rings’.
Operations supported.
radialPhase
Float. Set the radial phase of the texture (wraps 0-1). This is the phase of the texture from the centre to the
perimeter of the stimulus (in radians). Can be used to drift concentric rings out/inwards.
Operations supported.
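A brief sketch combining these attributes into a rotating checkerboard wedge (assuming an open Window win; the parameter values are illustrative):

    from psychopy import visual

    wedge = visual.RadialStim(win, tex='sqrXsqr', size=(6, 6), units='deg',
                              angularCycles=8, radialCycles=4,
                              visibleWedge=(0, 180))
    for frameN in range(120):
        wedge.ori += 1  # rotate by 1 degree per frame
        wedge.draw()
        win.flip()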
setAngularCycles(value, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setAngularPhase(value, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setAutoDraw(value, log=None)
Sets autoDraw. Usually you can use ‘stim.attribute = value’ syntax instead, but use this method to suppress
the log message.
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setBlendmode(value, log=None)
DEPRECATED. Use ‘stim.parameter = value’ syntax instead
setColor(color, colorSpace=None, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message and/or set colorSpace simultaneously.
sf
Spatial frequency of the grating texture
If units == ‘norm’ then sf units are in cycles per stimulus (and so SF scales with stimulus size).
If texture is an image loaded from a file then sf=None defaults to 1/stimSize to give one cycle of the image.
size
The size (width, height) of the stimulus in the stimulus units
Value should be x,y-pair, scalar (applies to both dimensions) or None (resets to default). Operations are
supported.
Sizes can be negative (causing a mirror-image reversal) and can extend beyond the window.
Example:

    stim.size = 0.8  # Set size to (xsize, ysize) = (0.8, 0.8), quadratic stim
    print(stim.size)  # Outputs array([0.8, 0.8])
    stim.size += (0.5, -0.5)  # make wider and flatter: (1.3, 0.3)

Tip: you can see the actual pixel range this corresponds to by looking at stim._sizeRendered
tex
Texture to be used on the stimulus as a grating (aka carrier)
This can be one of various options:
• ‘sin’, ‘sqr’, ‘saw’, ‘tri’, None (resets to default)
• the name of an image file (most formats supported)
• a numpy array (1xN or NxN) ranging -1:1
If specifying your own texture using an image or numpy array you should ensure that the image has square power-of-two dimensions (e.g. 256 x 256). If not then PsychoPy will upsample your stimulus to the next larger power of two.
texRes
Power-of-two int. Sets the resolution of the mask and texture. texRes is overridden if an array or image is
provided as mask.
Operations supported.
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for
explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change
appearance. Example:
# This stimulus is 20% wide and 50% tall with respect to window
stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast)
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
visibleWedge
tuple (start, end) in degrees. Determines visible range.
(0, 360) is full visibility.
Operations supported.
win
The Window object in which the stimulus will be rendered by default. (required)
Example: drawing the same stimulus in two different windows and displaying them simultaneously. Assuming that you have two windows and a stimulus (win1, win2 and stim):
stim.win = win1 # stimulus will be drawn in win1
stim.draw() # stimulus is now drawn to win1
stim.win = win2 # stimulus will be drawn in win2
stim.draw() # it is now drawn in win2
win1.flip(waitBlanking=False) # do not wait for next
# monitor update
win2.flip() # wait for vertical blanking.
Note that this just changes the default window for the stimulus; you can also pass the window directly to draw():

    stim.draw(win1)
    stim.draw(win2)
8.2.16 RatingScale
There are five main elements of a rating scale: the scale (text above the line intended to be a reminder of how
to use the scale), the line (with tick marks), the marker (a moveable visual indicator on the line), the labels (text
below the line that label specific points), and the accept button. The appearance and function of elements can
be customized by the experimenter; it is not possible to orient a rating scale to be vertical. Multiple scales can
be displayed at the same time, and continuous real-time ratings can be obtained from the history.
The Builder RatingScale component gives a restricted set of options, but also allows full control over a Rat-
ingScale via the ‘customize_everything’ field.
A RatingScale instance has no idea what else is on the screen. The experimenter has to draw the item to be rated,
and handle escape to break or quit, if desired. The subject can use the mouse or keys to respond. Direction keys
(left, right) will move the marker in the smallest available increment (e.g., 1/10th of a tick-mark if precision =
10).
Example 1:
A basic 7-point scale:
ratingScale = visual.RatingScale(win)
item = <statement, question, image, movie, ...>
while ratingScale.noResponse:
item.draw()
ratingScale.draw()
win.flip()
rating = ratingScale.getRating()
decisionTime = ratingScale.getRT()
choiceHistory = ratingScale.getHistory()
Example 2:
For fMRI, sometimes only a keyboard can be used. If your response box sends keys 1-4, you could
specify left, right, and accept keys, and not need a mouse:
ratingScale = visual.RatingScale(
win, low=1, high=5, markerStart=4,
leftKeys='1', rightKeys = '2', acceptKeys='4')
Example 3:
Categorical ratings can be obtained using choices:
ratingScale = visual.RatingScale(
win, choices=['agree', 'disagree'],
markerStart=0.5, singleClick=True)
For other examples see Coder Demos -> stimuli -> ratingScale.py.
Authors
• 2010 Jeremy Gray: original code and on-going updates
• 2012 Henrik Singmann: tickMarks, labels, ticksAboveLine
• 2014 Jeremy Gray: multiple API changes (v1.80.00)
Parameters
win : A Window object (required).
choices : A list of items which the subject can choose among. choices takes precedence over
low, high, precision, scale, labels, and tickMarks.
low : Lowest numeric rating (integer), default = 1.
acceptPreText : The text to display before any value has been selected.
acceptText : The text to display in the ‘accept’ button after a value has been selected.
acceptSize : The width of the accept box relative to the default (e.g., 2 is twice as wide).
acceptKeys : A list of keys that are used to accept the current response; default = ‘return’.
leftKeys : A list of keys that each mean “move leftwards”; default = ‘left’.
rightKeys : A list of keys that each mean “move rightwards”; default = ‘right’.
respKeys : A list of keys to use for selecting choices, in the desired order. The first item will
be the left-most choice, the second item will be the next choice, and so on.
skipKeys : List of keys the subject can use to skip a response, default = ‘tab’. To require a
response to every item, set skipKeys=None.
lineColor : The RGB color to use for the scale line, default = ‘White’.
mouseOnly : Require the subject to use the mouse (any keyboard input is ignored), default =
False. Can be used to avoid competing with other objects for keyboard input.
noMouse: Require the subject to use keys to respond; disable and hide the mouse. markerStart
will default to the left end.
minTime : Seconds that must elapse before a response can be accepted, default = 0.4.
maxTime : Seconds after which a response cannot be accepted. If maxTime <= minTime,
there’s no time limit. Default = 0.0 (no time limit).
disappear : Whether the rating scale should vanish after a value is accepted. Can be useful
when showing multiple scales.
flipVert : Whether to mirror-reverse the rating scale in the vertical direction.
8.2.17 Rect
8.2.18 Rift
Attributes
Details
_prepareMonoFrame(clear=True)
Prepare a frame for monoscopic rendering. This is called automatically after ‘startHmdFrame’ if mono-
scopic rendering is enabled.
_resolveMSAA()
Resolve multisample anti-aliasing (MSAA). If MSAA is enabled, drawing operations are diverted to a
special multisample render buffer. Pixel data must be ‘resolved’ by blitting it to the swap chain texture. If
not, the texture will be blank.
NOTE: You cannot perform operations on the default FBO (at frameBuffer) when MSAA is enabled. Any
changes will be over-written when ‘flip’ is called.
Returns
Return type None
_setupFrameBuffer()
Override the default framebuffer init code in window.Window to use the HMD swap chain. The HMD’s
swap texture and render buffer are configured here.
If multisample anti-aliasing (MSAA) is enabled, a secondary render buffer is created. Rendering is diverted
to the multi-sample buffer when drawing, which is then resolved into the HMD’s swap chain texture prior
to committing it to the chain. Consequently, you cannot pass the texture attached to the FBO specified by
frameBuffer until the MSAA buffer is resolved. Doing so will result in a blank texture.
Returns
Return type None
_startHmdFrame()
Prepare to render an HMD frame. This must be called every frame before flipping or setting the view
buffer.
This function will wait until the HMD is ready to begin rendering before continuing. The current frame texture from the swap chain is pulled from the SDK and made available for binding.
Returns
Return type None
_startOfFlip()
Custom _startOfFlip for HMD rendering. This finalizes the HMD texture before diverting drawing oper-
ations back to the on-screen window. This allows ‘flip()’ to swap the on-screen and HMD buffers when
called. This function always returns True.
Returns
Return type True
_updatePerformanceStats()
Run profiling routines. This just reports if the application drops a frame. Nothing too fancy yet.
_updateProjectionMatrix()
Update or re-calculate projection matrices based on the current render descriptor configuration.
Returns
Return type None
_updateTrackingState()
Update the tracking state and calculate new eye poses.
The absolute display time is updated when called and used when computing new head, eye and hand poses.
Returns
Return type None
Examples

    # check if the 'Enter' button on the Oculus remote was released
    isPressed = getButtons(['Enter'], 'remote', 'falling')
getConectedControllers()
Get a list of connected input devices (controllers) managed by the LibOVR runtime. Valid names are
‘xbox’, ‘remote’, ‘left_touch’, ‘right_touch’ and ‘touch’.
Returns List of connected controller names.
Return type list
getHandTriggerValues(controller=’xbox’, deadzone=False)
Get the values of the hand triggers representing the amount they are being displaced.
Parameters
• controller (str) – Name of the controller to get hand trigger values.
• deadzone (bool) – Apply the deadzone to hand trigger values.
Returns Left and right hand trigger values.
Return type tuple
getIndexTriggerValues(controller=’xbox’, deadzone=False)
Get the values of the index triggers representing the amount they are being displaced.
Parameters
• controller (str) – Name of the controller to get index trigger values.
• deadzone (bool) – Apply the deadzone to index trigger values.
Returns Left and right index trigger values.
Return type tuple
getThumbstickValues(controller=’xbox’, deadzone=False)
Get a list of tuples containing the displacement values (with deadzone) for each thumbstick on a specified
controller.
Axis displacements are represented in each tuple by floats ranging from -1.0 (full left/down) to 1.0 (full right/up). The SDK library pre-filters stick input to apply a dead-zone where 0.0 will be returned if the sticks return a displacement within -0.2746 to 0.2746. Index 0 of the returned tuple contains the X,Y displacement values of the left thumbstick, and the right thumbstick values are at index 1.
Possible values for ‘controller’ are ‘xbox’ and ‘touch’; the only devices with thumbsticks the SDK man-
ages.
Parameters
• controller (str) – Name of the controller to get thumbstick values.
• deadzone (bool) – Apply the deadzone to thumbstick values.
Returns Left and right, X and Y thumbstick values.
Return type tuple
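For example (a sketch, assuming hmd is an open Rift instance):

    # read both thumbsticks on the Xbox controller with the deadzone applied
    leftStick, rightStick = hmd.getThumbstickValues('xbox', deadzone=True)
    x, y = leftStick  # floats in the range -1.0 to 1.0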
getTouches(touchNames, edgeTrigger=’continuous’)
Returns True if any buttons are touched using sensors. This feature is used to estimate finger poses and
can be used to read gestures. An example of a possible use case is a pointing task, where responses are
only valid if the user’s index finger is extended away from the index trigger button.
Currently, this feature is only available with the Oculus Touch controllers.
Returns
Return type None
getTrackingOriginType()
Get the current tracking origin type.
Returns
Return type str
hasInputFocus
Check if the application currently has input focus.
Returns
Return type bool
headLocked
Enable/disable head locking.
isHmdMounted
Check if the HMD is mounted on the user’s head.
Returns True if the HMD is being worn, otherwise False.
Return type bool
isHmdPresent
Check if the HMD is present.
Returns True if the HMD is present, otherwise False.
Return type bool
isIndexPointing(hand=’right’)
Check if the user is doing a pointing gesture with the given hand, or if the index finger is not touching the
controller. Only applicable when using Oculus Touch controllers.
Returns
Return type None
isThumbUp(hand=’right’)
Check if the user’s thumb is pointing upwards with a given hand, or if not touching the controller. Only
applicable when using Oculus Touch controllers.
Returns
Return type None
isVisible
Check if the app has focus in the HMD and is visible to the viewer.
Returns True if app has focus and is visible in the HMD, otherwise False.
Return type bool
manufacturer
Get the connected HMD’s manufacturer.
Returns UTF-8 encoded string containing the manufacturer name.
Return type str
multiplyProjectionMatrixGL()
Multiply the current projection matrix obtained from the SDK using glMultMatrixf(). The matrix used
depends on the current eye buffer set by ‘setBuffer()’.
Returns
Parameters maxRange – The maximum range of the ray. Ray testing will fail automatically if
the target is out of range. The ray has infinite length if None is specified.
Returns True if the ray intersects anywhere on the bounding sphere, False in every other condi-
tion.
Return type bool
Examples
    # raycast from the head pose to a target
    headPose = hmd.headPose
    targetPos = rift.math.ovrVector3f(0.0, 0.0, -5.0)  # 5 meters front
    isLooking = hmd.raycast(headPose, targetPos)

    # now with touch controller positions
    rightHandPose = hmd.getHandPose(1)  # 1 = right hand
    fingerLength = 0.10  # 10 cm
    pointing = hmd.raycast(rightHandPose, targetPos, maxRange=fingerLength)
recenterTrackingOrigin()
Recenter the tracking origin.
Returns
Return type None
resolution
Get the HMD’s raster display size.
Returns Width and height in pixels.
Return type tuple (int, int)
serialNumber
Get the connected HMD’s unique serial number. Use this to identify a particular unit if you own many.
Returns UTF-8 encoded string containing the device’s serial number.
Return type str
setBuffer(buffer, clear=True)
Set the active stereo draw buffer.
Warning! The window.Window.size property will return the buffer’s dimensions in pixels instead of the
window’s when setBuffer is set to ‘left’ or ‘right’.
Parameters
• buffer (str) – View buffer to divert successive drawing operations to, can be either
‘left’ or ‘right’.
• clear (boolean) – Clear the color, stencil and depth buffer.
Returns
Return type None
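A per-eye rendering sketch (assuming hmd is an open Rift window and stim an existing stimulus):

    for eye in ('left', 'right'):
        hmd.setBuffer(eye)  # divert drawing to this eye's buffer
        hmd.setRiftView()   # apply the HMD's projection (legacy OpenGL mode only)
        stim.draw()
    hmd.flip()  # submit both eye buffers to the HMD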
setDefaultView(clearDepth=True)
Return to default projection. Call this before drawing PsychoPy’s 2D stimuli after a stereo projection
change.
Note: This only has an effect if using Rift in legacy immediate-mode OpenGL by setting Rift.legacy_opengl=True.
Parameters clearDepth (boolean) – Clear the depth buffer after configuring the view parameters.
Returns
Return type None
setHudMode(mode=’Off’)
setRiftView(clearDepth=True)
Set head-mounted display view. Gets the projection and view matrices from the HMD and applies them.
Note: This only has an effect if using Rift in legacy immediate-mode OpenGL by setting Rift.legacy_opengl=True.
Parameters clearDepth (boolean) – Clear the depth buffer after configuring the view parameters.
Returns
8.2.19 EnvelopeGrating
Attributes
Details
8.2.20 ShapeStim
Attributes
ShapeStim(win[, units, lineWidth, . . . ]) A class for arbitrary shapes defined as lists of vertices (x,y).
ShapeStim.win The Window object in which the stimulus will be rendered by default.
ShapeStim.units None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
ShapeStim.vertices A list of lists or a numpy array (Nx2) specifying xy positions of each vertex, relative to the center of the field.
ShapeStim.closeShape True or False: should the last vertex be automatically connected to the first?
ShapeStim.pos The position of the center of the stimulus in the stimulus units
ShapeStim.ori The orientation of the stimulus (in degrees).
ShapeStim.size Int/Float or x,y-pair.
ShapeStim.contrast A value that is simply multiplied by the color
ShapeStim.lineColor Sets the color of the shape lines.
ShapeStim.lineColorSpace Sets color space for line color.
Details
_selectWindow(win)
Switch drawing to the specified window. Calls the window’s _setCurrent() method which handles the
switch.
_set(attrib, val, op='', log=None)
DEPRECATED since 1.80.04 + 1. Use setAttribute() and val2array() instead.
_tesselate(newVertices)
Set the .vertices and .border to new values, invoking tessellation.
_updateList()
The user shouldn’t need this method since it gets called after every call to .set() Chooses between using
and not using shaders each call.
_updateVertices()
Sets Stim.verticesPix and ._borderPix from pos, size, ori, flipVert, flipHoriz
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren’t likely to be useful.
closeShape
True or False Should the last vertex be automatically connected to the first?
If you’re using Polygon, Circle or Rect, closeShape=True is assumed and shouldn’t be changed.
color
colorSpace
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to specify colorSpace before setting the color. Example:

    # A light green text
    stim = visual.TextStim(win, 'Color me!', color=(0, 1, 0), colorSpace='rgb')
    # An almost-black text
    stim.colorSpace = 'rgb255'

Setting contrast outside the range -1 to 1 is permitted, but may produce strange results if color values exceed the monitor limits.
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None, keepMatrix=False)
Draw the stimulus in the relevant window. You must call this method after every win.flip() if you want the
stimulus to appear on that frame and then update the screen again.
fillColor
Sets the color of the shape fill.
See psychopy.visual.GratingStim.color() for further details of how to use colors.
Note that shapes where some vertices point inwards will usually not ‘fill’ correctly.
fillColorSpace
Sets color space for fill color. See documentation for colorSpace.
interpolate
True or False If True the edge of the line will be antialiased.
lineColor
Sets the color of the shape lines.
See psychopy.visual.GratingStim.color() for further details of how to use colors.
lineColorSpace
Sets color space for line color. See documentation for colorSpace.
lineWidth
int or float specifying the line width in pixels
Operations supported.
name
String or None. The name of the object to be used during logged messages about this stim. If you have multiple stimuli in your experiment this really helps to make sense of log files!
If name = None your stimulus will be called “unnamed <type>”, e.g. visual.TextStim(win) will be called
“unnamed TextStim” in the logs.
opacity
Determines how visible the stimulus is relative to background
The value should be a single float ranging from 1.0 (opaque) to 0.0 (transparent). Operations are supported.
Precisely how this is used depends on the Blend Mode.
ori
The orientation of the stimulus (in degrees).
Should be a single value (scalar). Operations are supported.
Orientation convention is like a clock: 0 is vertical, and positive values rotate clockwise. Beyond 360 and
below zero values wrap appropriately.
overlaps(polygon)
Returns True if this stimulus intersects another one.
If polygon is another stimulus instance, then the vertices and location of that stimulus will be used as the
polygon. Overlap detection is typically very good, but it can fail with very pointy shapes in a crossed-
swords configuration.
Note that, if your stimulus uses a mask (such as a Gaussian blob) then this is not accounted for by the
overlaps method; the extent of the stimulus is determined purely by the size, pos, and orientation settings
(and by the vertices for shape stimuli).
See coder demo, shapeContains.py
pos
The position of the center of the stimulus in the stimulus units
value should be an x,y-pair. Operations are also supported.
Example:
Tip: If you need the position of stim in pixels, you can obtain it like this:
from psychopy.tools.monitorunittools import posToPix posPix = posToPix(stim)
setAutoDraw(value, log=None)
Sets autoDraw. Usually you can use ‘stim.attribute = value’ syntax instead, but use this method to suppress
the log message.
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setColor(color, colorSpace=None, operation='', log=None)
Sets both the line and fill to be the same color
setContrast(newContrast, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setDKL(newDKL, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setDepth(newDepth, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setFillColor(color, colorSpace=None, operation='', log=None)
Sets the color of the shape fill.
See psychopy.visual.GratingStim.color() for further details.
Note that shapes where some vertices point inwards will usually not ‘fill’ correctly.
setFillRGB(value, operation='')
DEPRECATED since v1.60.05: Please use fillColor()
setLMS(newLMS, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setLineColor(color, colorSpace=None, operation='', log=None)
Sets the color of the shape edge.
See psychopy.visual.GratingStim.color() for further details.
setLineRGB(value, operation='')
DEPRECATED since v1.60.05: Please use lineColor()
setLineWidth(value, operation='', log=None)
setOpacity(newOpacity, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setOri(newOri, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setPos(newPos, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message.
setRGB(newRGB, operation='', log=None)
DEPRECATED since v1.60.05: Please use the color attribute
setSize(value, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
setUseShaders(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the log message
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change appearance. Example:

    # This stimulus is 20% wide and 50% tall with respect to window
    stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast)
vertices
A list of lists or a numpy array (Nx2) specifying xy positions of each vertex, relative to the center of the
field.
Assigning to vertices can be slow if there are many vertices.
Operations supported with .setVertices().
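For example (a sketch, assuming an open Window win):

    from psychopy import visual

    triangle = visual.ShapeStim(win,
                                vertices=[(-0.3, -0.3), (0.3, -0.3), (0.0, 0.4)],
                                fillColor='red', lineColor='white')
    triangle.draw()
    win.flip()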
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
win
The Window object in which the stimulus will be rendered by default. (required)
Example: drawing the same stimulus in two different windows and displaying them simultaneously. Assuming that you have two windows and a stimulus (win1, win2 and stim):
stim.draw(win1)
stim.draw(win2)
8.2.21 SimpleImageStim
8.2.22 Slider
Attributes
Slider(win[, ticks, labels, pos, size, . . . ]) A class for obtaining ratings, e.g., on a 1-to-7 or categorical scale.
Slider.getRating() Get the current value of rating (or None if no response yet)
Slider.getRT() Get the RT for the most recent rating (or None if no response yet)
Slider.markerPos The position on the scale where the marker should be.
Slider.setReadOnly([value, log]) When the rating scale is read only, no responses can be made and the scale contrast is reduced
Slider.contrast Set all elements of the Slider (labels, ticks, line) to a contrast
Slider.style Sets some predefined styles or use these to create your own.
Slider.getHistory() Return a list of the subject’s history as (rating, time) tuples.
Slider.getMouseResponses() Instructs the rating scale to check for valid mouse responses.
Slider.reset() Resets the slider to its starting state (so that it can be restarted on each trial with a new stimulus)
Details
_granularRating(rating)
Handle granularity for the rating
_setLabelLocs()
Calculates the locations of the line, tickLines and labels from the rating info
_setTickLocs()
Calculates the locations of the line, tickLines and labels from the rating info
color
Color of the line/ticks/labels according to the color space.
contrast
Set all elements of the Slider (labels, ticks, line) to a contrast
Parameters contrast –
draw()
Draw the Slider, with all its constituent elements on this frame
getHistory()
Return a list of the subject’s history as (rating, time) tuples.
The history can be retrieved at any time, allowing for continuous ratings to be obtained in real-time. Both
numerical and categorical choices are stored automatically in the history.
getMouseResponses()
Instructs the rating scale to check for valid mouse responses.
This is usually done during the draw() method but can be done by the user as well at any point in time.
The rating will be returned but will ALSO automatically be set as the current rating response.
While the mouse button is down we alter self.markerPos, but a value is not set for self.rating until the button comes up.
Returns
Return type A rating value or None
getRT()
Get the RT for most recent rating (or None if no response yet)
getRating()
Get the current value of rating (or None if no response yet)
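A minimal response loop (a sketch, assuming an open Window win; the tick and label values are illustrative):

    from psychopy import visual

    slider = visual.Slider(win, ticks=(1, 2, 3, 4, 5),
                           labels=('bad', '', '', '', 'good'))
    while slider.getRating() is None:  # loop until a rating has been made
        slider.draw()
        win.flip()
    print(slider.getRating(), slider.getRT())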
horiz
(readonly) determines from self.size whether the scale is horizontal
knownStyles = ['slider', 'rating', 'radio', 'labels45', 'whiteOnBlack', 'triangleMarker']
markerPos
The position on the scale where the marker should be. Note that this does not alter the value of the reported
rating, only its visible display. Also note that this position is in scale units, not in coordinates
pos
Set position of slider
Parameters value (tuple, list) – The new position of slider
rating
The most recent rating from the participant or None. Note that the position of the marker can be set using
current without looking like a change in the marker position
recordRating(rating, rt=None, log=None)
Sets the current rating value
reset()
Resets the slider to its starting state (so that it can be restarted on each trial with a new stimulus)
setReadOnly(value=True, log=None)
When the rating scale is read only no responses can be made and the scale contrast is reduced
Parameters
• value (bool (True)) – The value to which we should set the readOnly flag
• log (bool or None) – Force the autologging to occur or leave as default
size
The size for the scale defines the area taken up by the line and the ticks.
style
Sets some predefined styles or use these to create your own.
If you fancy creating and including your own styles that would be great!
Parameters style (list of strings) – Known styles currently include:
• ‘rating’: the marker is a circle
• ‘triangleMarker’: the marker is a triangle
• ‘slider’: looks more like an application slider control
• ‘whiteOnBlack’: a sort of color-inverse rating scale
• ‘labels45’: the text is rotated by 45 degrees
Styles can be combined in a list, e.g. [‘whiteOnBlack’, ‘labels45’]
8.2.23 TextBox
Attributes
TextBox([window, text, font_name, bold, . . . ]) Similar to the visual.TextStim component, TextBox can
be used to display text within a psychopy window.
Note: The following set______() attributes all have equivalent get______() attributes:
Helper functions:
getFontManager()
FontManager provides a simple API for finding and loading font files (.ttf) via the FreeType lib
The FontManager finds supported font files on the computer and initially creates a dictionary containing
the information about available fonts. This can be used to quickly determine what font family names are
available on the computer and what styles (bold, italic) are supported for each family.
This font information can then be used to create the resources necessary to display text using a given font
family, style, size, color, and dpi.
The FontManager is currently used by the psychopy.visual.TextBox stim type. A user script can access
the FontManager via:
font_mngr = visual.textbox.getFontManager()
Once a font of a given size and dpi has been created, it is cached by the FontManager and can be used by all TextBox instances created within the experiment.
Details
• Some keyword arguments supported by other stimulus types in general, or by TextStim itself, are not supported by TextBox. See the TextBox class definition for the arguments that are supported.
• When a new font, style, and size are used it takes about 1 second to load and process the font. This is a
one time delay for a given font name, style, and size. After first being loaded, the same font style can be
used or re-applied to multiple TextBox components with no significant delay.
• Auto logging or auto drawing is not currently supported.
[A performance comparison table (TextStim vs. TextBox draw times, in msec.usec format) appeared here. It was produced with the textstim_vs_textbox.py demo script provided with the PsychoPy distribution, using a 120-character string averaging 24 words; results depend on text length, video card, and OS. Test machine: Windows 7 64-bit, PsychoPy 1.79, i7 3.4 GHz CPU, 8 GB RAM, NVIDIA 480 GTX 2 GB.]
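As an illustration, a minimal sketch of creating and drawing a TextBox; the argument names follow the signature summary above, but the exact supported set should be checked against the class definition:

from psychopy import visual

win = visual.Window([800, 600])
textbox = visual.TextBox(window=win,
                         text='Hello TextBox',
                         font_size=21,
                         font_color=[-1, -1, 1],
                         size=(1.8, 0.1),
                         pos=(0.0, 0.25),
                         units='norm')
textbox.draw()  # draw to the back buffer ...
win.flip()  # ... then flip to make it visible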
draw()
Draws the TextBox to the back buffer of the graphics card. Then call win.flip() to display the changes
drawn. If draw() is not called prior to a call to win.flip(), the textBox will not be displayed for that retrace.
getAutoLog()
Indicates whether changes to TextBox attribute values should be logged automatically by PsychoPy. Currently not supported by TextBox.
getBackgroundColor()
Get the color used to fill the rectangular area of the TextBox stim. All other graphical elements of the
TextBox are drawn on top of the background.
getBorderColor()
A border can be drawn around the perimeter of the TextBox. This method returns the color of that border.
getBorderWidth()
Get the stroke width of the optional TextBox area outline. This is always given in pixel units.
getColorSpace()
Returns the psychopy color space used when specifying colors for the TextBox. Supported values are:
• ‘rgb’
• ‘rgb255’
• ‘norm’
• hex (implicit)
• html name (implicit)
See the Color Space section of the PsychoPy docs for details.
getDisplayedText()
Return the text that fits within the TextBox and therefore is actually seen. This is equal to:
text_length = len(self.getText())
cols, rows = self.getTextGridShape()
displayed_text = self.getText()[0:min(text_length, rows * cols)]
getFontColor()
Return the color used when drawing text glyphs.
getGlyphPositionForTextIndex(char_index)
For the provided char_index, which is the index of one character in the current text being dis-
played by the TextBox ( getDisplayedText() ), return the bounding box position, width, and
height for the associated glyph drawn to the screen. This factors in the glyphs position within the
textgrid cell it is being drawn in, so the returned bounding box is for the actual glyph itself, not
the textgrid cell. For textgrid cell placement information, see the getTextGridCellPlacement()
method.
The glyph position for the given text index is returned as a tuple (x,y,width,height), where x,y is
the top left hand corner of the bounding box.
Special Cases:
• If the index provided is out of bounds for the currently displayed text, None is returned.
• For u' ' (space) characters, the full textgrid cell bounding box is returned.
• For u'\n' (new line) characters, the textgrid cell bounding box is returned, but with the box width set to 0.
getHorzAlign()
Return how the textbox x position should be interpreted. Valid options are ‘left’, ‘center’, or ‘right’.
getHorzJust()
Return how text should be laid out horizontally when the number of columns in a text grid row is greater than the number needed to display the text for that row.
getInterpolated()
Returns whether interpolation is enabled for the TextBox when it is drawn. When True,
GL_LINE_SMOOTH and GL_POLYGON_SMOOTH are enabled within OpenGL; otherwise they are
disabled.
getLabel()
Return the label / name assigned to the textbox. This does not impact how the stimulus looks when drawn,
and instead is used for internal purposes only.
getLineSpacing()
Return the additional spacing being applied between rows of text. The value is in units specified by the
textbox getUnits() method.
getName()
Same as the getLabel() method.
getOpacity()
Get the default TextBox transparency level used for color related attributes. 0.0 equals fully transparent,
1.0 equals fully opaque.
getPosition()
Return the x,y position of the textbox, in getUnitType() coord space.
getSize()
Return the width,height of the TextBox, using the unit type being used by the stimulus.
getText()
Return the text to display.
getTextGridCellForCharIndex(char_index)
getTextGridCellPlacement()
Returns a 3d numpy array containing position information for each text grid cell in the TextBox. The array
has the shape (num_cols,num_rows,cell_bounds), where num_cols is the number of textgrid columns in
the TextBox. num_rows is the number of textgrid rows in the TextBox. cell_bounds is a 4 element array
containing the (x pos, y pos, width, height) data for the given cell. Position fields are for the top left hand
corner of the cell box. Column and Row indices start at 0.
To get the shape of the textgrid in terms of columns and rows, use:
cell_pos_array = textbox.getTextGridCellPlacement()
col_row_count = cell_pos_array.shape[:2]
To access the position, width, and height for textgrid cell at column 0 and row 0 (so the top left cell in the
textgrid):
cell00 = cell_pos_array[0, 0, :]
For the cell at col 3, row 1 (so the 4th cell on the second row):
cell31 = cell_pos_array[3, 1, :]
getTextGridLineColor()
Return the color used when drawing the outline of the text grid cells. Each letter displayed in a TextBox
populates one of the text cells defined by the shape of the TextBox text grid. Color value must be valid for
the color space being used by the TextBox. Both three- and four-element color values are valid. Three-element colors use the TextBox getOpacity() value to determine the alpha channel for the color. Four-element colors use the value of the fourth element to set the alpha value for the color.
setHorzAlign(v)
Specify how the horizontal (x) component of the TextBox position is to be interpreted. left = x position is the left edge, right = x position is the right edge, and center = the x position is used to center the stim horizontally.
setHorzJust(v)
Specify how text within the TextBox should be aligned horizontally. For example, if a text grid has 10 columns, and the text being displayed is 6 characters in length, the horizontal justification determines whether the text should be drawn starting at the left of the text columns (‘left’), centered on the columns (‘center’, in this example there would be two empty text cells to the left and right of the text), or drawn such that the last letter of text falls in the last column of the text row (‘right’).
setInterpolated(interpolate)
Specify whether interpolation should be enabled for the TextBox when it is drawn. When interpolate ==
True, GL_LINE_SMOOTH and GL_POLYGON_SMOOTH are enabled within OpenGL. When interpo-
late is set to False, GL_POLYGON_SMOOTH and GL_LINE_SMOOTH are disabled.
setOpacity(o)
Sets the TextBox transparency level to use for color related attributes of the Textbox. 0.0 equals fully
transparent, 1.0 equals fully opaque.
If opacity is set to None, it is assumed to have a default value of 1.0.
When a color is defined with a 4th element in the colors element list, then this opacity value is ignored and
the alpha value provided in the color itself is used for that TextGrid element instead.
setPosition(pos)
Set the (x,y) position of the TextBox on the Monitor. The position must be given using the unit coord type
used by the stim.
The TextBox position is interpreted differently depending on the Horizontal and Vertical Alignment set-
tings of the stim. See getHorzAlignment() and getVertAlignment() for more information.
For example, if the TextBox alignment is specified as left, top, then the position specifies the top left hand
corner of where the stim will be drawn. An alignment of bottom,right indicates that the position value will
define where the bottom right corner of the TextBox will be drawn. A horz., vert. alignment of center,
center will place the center of the TextBox at pos.
setText(text_source)
Set the text to be displayed within the Textbox.
Note that once a TextBox has been created, the number of character rows and columns is static. To change
the size of a TextBox, a new TextBox stim must be created to replace the current Textbox stim. Therefore
ensure that the textbox is large enough to display the largest length string to be presented in the TextBox.
Characters that do not fit within the TextBox will not be displayed.
setTextGridLineColor(c)
Set the color used when drawing text grid lines. These are lines that can be drawn which mark the bounding
box for each character within the TextBox text grid. Color value must be valid for the color space being
used by the TextBox.
Provide a value of None to disable drawing of textgrid lines.
setTextGridLineWidth(c)
Set the stroke width (in pixels) to use for the text grid character bounding boxes. Border values must be
within the range of stroke widths supported by the OpenGL driver used by the computer graphics card.
Setting the width outside the valid range will result in the stroke width being clamped to the nearest end of
the valid range.
Use the TextBox.getGLineRanges() to access a dict containing some OpenGL parameters which provide
the minimum, maximum, and resolution of valid line widths.
setVertAlign(v)
Specify how the vertical (y) component of the TextBox position is to be interpreted. top = y position is the top edge, bottom = y position is the bottom edge, and center = the y position is used to center the stim vertically.
setVertJust(v)
Specify how text within the TextBox should be aligned vertically. For example, if a text grid has 3 rows for text, and the text being displayed all fits on one row, the vertical justification determines whether the text should be drawn on the top row of the text grid (‘top’), centered on the rows (‘center’, in this example there would be one empty row above and below the row used to draw the text), or drawn on the last row of the text grid (‘bottom’).
8.2.24 TextStim
Parameters
_calcPosRendered()
DEPRECATED in 1.80.00. This functionality is now handled by _updateVertices() and verticesPix.
_calcSizeRendered()
DEPRECATED in 1.80.00. This functionality is now handled by _updateVertices() and verticesPix
_getDesiredRGB(rgb, colorSpace, contrast)
Convert color to RGB while adding contrast. Requires self.rgb, self.colorSpace and self.contrast
_getPolyAsRendered()
DEPRECATED. Return a list of vertices as rendered.
_selectWindow(win)
Switch drawing to the specified window. Calls the window’s _setCurrent() method which handles the
switch.
_set(attrib, val, op='', log=None)
DEPRECATED since 1.80.04 + 1. Use setAttribute() and val2array() instead.
_setTextNoShaders(value=None)
Set the text to be rendered using the current font
_setTextShaders(value=None)
Set the text to be rendered using the current font
_updateList()
The user shouldn’t need this method since it gets called after every call to .set() Chooses between using
and not using shaders each call.
_updateListNoShaders()
The user shouldn’t need this method since it gets called after every call to .set() Basically it updates the
OpenGL representation of your stimulus if some parameter of the stimulus changes. Call it if you change
a property manually rather than using the .set() command
_updateListShaders()
Only used with pygame text - pyglet handles all from the draw()
_updateVertices()
Sets Stim.verticesPix and ._borderPix from pos, size, ori, flipVert, flipHoriz
alignHoriz
The horizontal alignment (‘left’, ‘right’ or ‘center’)
alignVert
The vertical alignment (‘top’, ‘bottom’ or ‘center’) Note that this will not necessarily center the particular
characters you choose to draw. Instead, it likely centers the invisible bounding box (which is often spanned
by the top of a ‘T’ to the bottom of a ‘y’) at the pos.
antialias
Allow antialiasing the text (True or False). Note: setting this forces the text to be re-rendered, which is slow.
autoDraw
Determines whether the stimulus should be automatically drawn on every frame flip.
Value should be: True or False. You do NOT need to set this on every frame flip!
autoLog
Whether every change in this stimulus should be auto logged.
Value should be: True or False. Set to False if your stimulus is updating frequently (e.g. updating its position every frame) and you want to avoid swamping the log file with messages that aren't likely to be useful.
bold
Make the text bold (True, False) (better to use a bold font name).
boundingBox
(read only) attribute representing the bounding box of the text (w,h). This differs from width in that the
width represents the width of the margins, which might differ from the width of the text within them.
NOTE: currently always returns the size in pixels (this will change to return in stimulus units)
color
Color of the stimulus
Value should be one of:
• string: to specify a color by name. Any of the standard html/X11 color names
<http://www.w3schools.com/html/html_colornames.asp> can be used.
• Colors by hex value
• numerically: (scalar or triplet) for DKL, RGB or other Color spaces. For these, operations
are supported.
When color is specified using numbers, it is interpreted with respect to the stimulus’ current colorSpace.
If color is given as a single value (scalar) then this will be applied to all 3 channels.
Examples:
# ... for whatever stim you have:
stim.color = 'white'
stim.color = 'RoyalBlue'  # (the case is actually ignored)
stim.color = '#DDA0DD'  # DDA0DD is hexadecimal for plum
stim.color = [1.0, -1.0, -1.0]  # if stim.colorSpace='rgb': a red color in rgb space
stim.color = [0.0, 45.0, 1.0]  # if stim.colorSpace='dkl': DKL space with elev=0, azimuth=45
stim.color = [0, 0, 255]  # if stim.colorSpace='rgb255': a blue stimulus using rgb255 space
stim.color = 255  # interpreted as (255, 255, 255), which is white in rgb255
Operations work as normal for all numeric colorSpaces (e.g. ‘rgb’, ‘hsv’ and ‘rgb255’) but not for strings,
like named and hex. For example, assuming that colorSpace=’rgb’:
You can use setColor if you want to set color and colorSpace in one line. These two are equivalent:
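A sketch of both points (values are illustrative), assuming stim.colorSpace starts as 'rgb':

stim.color += [1, 1, 1]  # increment all three rgb channels
stim.color *= -1  # invert the color
stim.setColor([0, 0, 255], 'rgb255')  # set color and colorSpace in one line...
stim.colorSpace = 'rgb255'  # ...which is equivalent to these
stim.color = [0, 0, 255]  # two separate assignments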
colorSpace
The name of the color space currently being used
Value should be: a string or None
For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used
(defined during initialisation).
Please note that changing colorSpace does not change stimulus parameters. Thus you usually want to
specify colorSpace before setting the color. Example:
# An almost-black text
stim.colorSpace = 'rgb255'
contrast
Setting contrast outside the range -1 to 1 is permitted, but may produce strange results if color values exceed the monitor limits.
depth
DEPRECATED. Depth is now controlled simply by drawing order.
draw(win=None)
Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want
the stimulus to appear on that frame and then update the screen again.
If win is specified then override the normal window of this stimulus.
flipHoriz
If set to True then the text will be flipped left-to-right. The flip is relative to the original, not relative to the
current state.
flipVert
If set to True then the text will be flipped top-to-bottom. The flip is relative to the original, not relative to
the current state.
font
String. Set the font to be used for text rendering. font should be a string specifying the name of the font
(in system resources).
fontFiles
A list of additional font files, to use if the font is not in the standard system location (include the full path).
Note: fonts are added every time this value is set; previously added fonts are not deleted.
E.g.:
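For instance (the file and family names are placeholders):

stim.fontFiles = ['SpecialFont.ttf']  # if the file is in the working directory
stim.font = 'SpecialFont'  # then select the family by name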
height
The height of the letters (Float/int or None = set default).
Height includes the entire box that surrounds the letters in the font. The width of the letters is then defined
by the font.
Operations supported.
italic
True/False. Make the text italic (better to use an italic font name).
name
String or None. The name of the object to be used in logged messages about this stim. If you have multiple stimuli in your experiment this really helps to make sense of log files!
If name = None your stimulus will be called “unnamed <type>”, e.g. visual.TextStim(win) will be called
“unnamed TextStim” in the logs.
opacity
Determines how visible the stimulus is relative to background
The value should be a single float ranging 1.0 (opaque) to 0.0 (transparent). Operations are supported.
Precisely how this is used depends on the Blend Mode.
ori
The orientation of the stimulus (in degrees).
Should be a single value (scalar). Operations are supported.
Orientation convention is like a clock: 0 is vertical, and positive values rotate clockwise. Beyond 360 and
below zero values wrap appropriately.
overlaps(polygon)
Returns True if this stimulus intersects another one.
If polygon is another stimulus instance, then the vertices and location of that stimulus will be used as the
polygon. Overlap detection is typically very good, but it can fail with very pointy shapes in a crossed-
swords configuration.
Note that, if your stimulus uses a mask (such as a Gaussian blob) then this is not accounted for by the
overlaps method; the extent of the stimulus is determined purely by the size, pos, and orientation settings
(and by the vertices for shape stimuli).
See coder demo, shapeContains.py
pos
The position of the center of the stimulus in the stimulus units
value should be an x,y-pair. Operations are also supported.
Example:
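A sketch of typical operations (values are illustrative):

stim.pos = (0.5, 0)  # set slightly to the right of center
stim.pos += (0.5, -1)  # increment pos rightwards and downwards; now (1.0, -1.0)
stim.pos *= 0.2  # move stim towards the center; now (0.2, -0.2)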
Tip: If you need the position of stim in pixels, you can obtain it like this:
from psychopy.tools.monitorunittools import posToPix
posPix = posToPix(stim)
posPix
This determines the coordinates in pixels of the position for the current stimulus, accounting for pos and
units. This property should automatically update if pos is changed
setAutoDraw(value, log=None)
Sets autoDraw. Usually you can use ‘stim.attribute = value’ syntax instead, but use this method to suppress
the log message.
setAutoLog(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setColor(color, colorSpace=None, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message and/or set colorSpace simultaneously.
setContrast(newContrast, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setDKL(newDKL, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setDepth(newDepth, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setFlip(direction, log=None)
(used by Builder to simplify the dialog)
setFlipHoriz(newVal=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setFlipVert(newVal=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setFont(font, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setHeight(height, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setLMS(newLMS, operation='')
DEPRECATED since v1.60.05: Please use the color attribute
setOpacity(newOpacity, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setOri(newOri, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setPos(newPos, operation='', log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setRGB(newRGB, operation='', log=None)
DEPRECATED since v1.60.05: Please use the color attribute
setSize(newSize, operation='', units=None, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
setText(text=None, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message.
setUseShaders(value=True, log=None)
Usually you can use ‘stim.attribute = value’ syntax instead, but use this method if you need to suppress the
log message
size
The size (width, height) of the stimulus in the stimulus units
Value should be x,y-pair, scalar (applies to both dimensions) or None (resets to default). Operations are
supported.
Sizes can be negative (causing a mirror-image reversal) and can extend beyond the window.
Example:
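A sketch of typical operations (values are illustrative):

stim.size = 0.8  # set size to 0.8 in the current units, i.e. (0.8, 0.8)
stim.size += (0.5, -0.5)  # make wider and flatter; now (1.3, 0.3)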
Tip: you can see the actual pixel range this corresponds to by looking at stim._sizeRendered
text
The text to be rendered. Use \n to make new lines.
Issues: May be slow, and pyglet has a memory leak when setting text. For these reasons, this function checks whether the text has changed and only updates it if so. Scripts can therefore safely set the text on every frame, with no need to check whether it has actually altered.
units
None, ‘norm’, ‘cm’, ‘deg’, ‘degFlat’, ‘degFlatPos’, or ‘pix’
If None then the current units of the Window will be used. See Units for the window and stimuli for
explanation of other options.
Note that when you change units, you don’t change the stimulus parameters and it is likely to change
appearance. Example:
# This stimulus is 20% wide and 50% tall with respect to window
stim = visual.PatchStim(win, units='norm', size=(0.2, 0.5))
useShaders
Should shaders be used to render the stimulus (typically leave as True)
If the system supports the use of OpenGL shader language then leaving this set to True is highly recommended. If shaders cannot be used then various operations will be slower (notably, changes to stimulus color or contrast).
verticesPix
This determines the coordinates of the vertices for the current stimulus in pixels, accounting for size, ori,
pos and units
win
The Window object in which the stimulus will be rendered by default. (required)
Example: drawing the same stimulus in two different windows and displaying them simultaneously. Assuming that you have two windows and a stimulus (win1, win2 and stim):
stim.draw(win1)
stim.draw(win2)
wrapWidth
Int/float or None (set default). The width the text should run before wrapping.
Operations supported.
8.2.25 Window
Notes
• Some parameters (e.g. units) can now be given default values in the user/site preferences and these will be
used if None is given here. If you do specify a value here it will take precedence over preferences.
size
array-like(float) – Dimensions of the window’s drawing area/buffer in pixels [w, h].
monitorFramePeriod
float – Refresh rate of the display if checkTiming=True on window instantiation.
applyEyeTransform(clearDepth=True)
Apply the current view and projection matrices.
Matrices specified by attributes viewMatrix and projectionMatrix are applied using ‘immediate
mode’ OpenGL functions. Subsequent drawing operations will be affected until flip() is called.
All transformations in GL_PROJECTION and GL_MODELVIEW matrix stacks will be cleared (set to iden-
tity) prior to applying.
Parameters clearDepth (bool) – Clear the depth buffer. This may be required prior to
rendering 3D objects.
blendMode
Blend mode to use.
callOnFlip(function, *args, **kwargs)
Call a function immediately after the next flip() command.
The first argument should be the function to call, the following args should be used exactly as you would
for your normal call to the function (can use ordered arguments or keyword arguments as normal).
e.g. If you have a function that you would normally call like this:
then you could call callOnFlip() to have the function call synchronized with the frame flip like this:
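A sketch (pingMyDevice and its arguments are hypothetical):

pingMyDevice(portToPing, channel=2, level=0)  # the ordinary, immediate call
win.callOnFlip(pingMyDevice, portToPing, channel=2, level=0)  # deferred to the next flip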
clearBuffer()
Clear the back buffer (to which you are currently drawing) without flipping the window. Useful if you
want to generate movie sequences from the back buffer without actually taking the time to flip the window.
close()
Close the window (and reset the Bits++ if necessary).
color
Set the color of the window.
This command sets the color that the blank screen will have on the next clear operation. As a result it
effectively takes TWO flip() operations to become visible (the first uses the color to create the new
screen, the second presents that screen to the viewer). For this reason, if you want to change the background
color of the window “on the fly”, it might be a better idea to draw a Rect that fills the whole window with
the desired Rect.fillColor attribute. That’ll show up on first flip.
See other stimuli (e.g. GratingStim.color) for more info on the color attribute which essentially
works the same on all PsychoPy stimuli.
See Color spaces for further information about the ways to specify colors and their various implications.
colorSpace
Documentation for colorSpace is in the stimuli.
e.g. GratingStim.colorSpace
Usually used in conjunction with color like this:
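A sketch (values are illustrative):

win.colorSpace = 'rgb255'  # select the color space first
win.color = [0, 0, 255]  # then specify the color in that space (blue)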
See Color spaces for further information about the ways to specify colors and their various implications.
farClip
Distance to the far clipping plane in meters.
flip(clearBuffer=True)
Flip the front and back buffers after drawing everything for your frame. (This replaces the update()
method, better reflecting what is happening underneath).
Parameters clearBuffer (bool, optional) – Clear the draw buffer after flipping. De-
fault is True.
Returns Wall-clock time in seconds the flip completed. Returns None if waitBlanking is
False.
Return type float or None
Notes
• The time returned when waitBlanking is True corresponds to when the graphics driver releases
the draw buffer to accept draw commands again. This time is usually close to the vertical sync signal
of the display.
Examples
win.flip(clearBuffer=True)  # flips and clears the back buffer (the default)
win.flip(clearBuffer=False)  # flips but keeps the back buffer contents
fps()
Report the frames per second since the last call to this function (or since the window was created if this is
first call)
fullscr
Set whether fullscreen mode is True or False (not all backends can toggle an open window).
gamma
Set the monitor gamma for linearization.
Warning: Don’t use this if using a Bits++ or Bits#, as it overrides monitor settings.
gammaRamp
Sets the hardware CLUT using a specified 3xN array of floats ranging between 0.0 and 1.0.
Array must have a number of rows equal to 2 ^ max(bpc).
getActualFrameRate(nIdentical=10, nMaxFrames=100, nWarmUpFrames=10, threshold=1)
Measures the actual frames-per-second (FPS) for the screen.
This is done by waiting (for a max of nMaxFrames) until nIdentical frames in a row have identical frame
times (std dev below threshold ms).
Parameters
• nIdentical (int, optional) – The number of consecutive frames that will be
evaluated. Higher –> greater precision. Lower –> faster.
• nMaxFrames (int, optional) – The maximum number of frames to wait for a
matching set of nIdentical.
• nWarmUpFrames (int, optional) – The number of frames to display before starting the test (this is in place to allow the system to settle after opening the Window for the first time).
• threshold (int, optional) – The threshold for the std deviation (in ms) before the set are considered a match.
Returns Frame rate in frames per second (FPS). If there is no such sequence of identical frames a warning is logged and None will be returned.
Return type float or None
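A typical usage sketch, falling back to an assumed rate if measurement fails:

rate = win.getActualFrameRate()
if rate is None:
    rate = 60.0  # no stable estimate; assume a common refresh rate
frameDur = 1.0 / rate  # duration of one frame in seconds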
getFutureFlipTime(targetTime=0, clock=None)
The expected time of the next screen refresh. This is currently calculated as win._lastFrameTime + re-
freshInterval
Parameters
• targetTime (float) – The delay from now for which you want the flip time. 0 will give the time of the next flip, the earliest we can achieve; 0.15 will give the scheduled flip time that gets as close to 150 ms from now as possible.
• clock (None, 'ptb' or any Clock object) – If 'ptb' then the time returned is compatible with ptb.GetSecs().
getMovieFrame(buffer=’front’)
Capture the current Window as an image.
Saves to stack for saveMovieFrames(). As of v1.81.00 this also returns the frame as a PIL image
This can be done at any time (usually after a flip() command).
Frames are stored in memory until a saveMovieFrames() command is issued. You can issue
getMovieFrame() as often as you like and then save them all in one go when finished.
The back buffer will return the frame that hasn’t yet been ‘flipped’ to be visible on screen but has the
advantage that the mouse and any other overlapping windows won’t get in the way.
The default, the front buffer, is intended to be captured immediately after a flip() and gives a complete copy of the screen at the window's coordinates.
Parameters buffer (str, optional) – Buffer to capture.
Returns Buffer pixel contents as a PIL/Pillow image object.
Return type Image
getMsPerFrame(nFrames=60, showVisual=False, msg='', msDelay=0.0)
Assesses the monitor refresh rate (average, median, SD) under current conditions, over at least 60 frames.
Records time for each refresh (frame) for n frames (at least 60), while displaying an optional visual. The
visual is just eye-candy to show that something is happening when assessing many frames. You can also
give it text to display instead of a visual, e.g., msg='(testing refresh rate...)'; setting msg
implies showVisual == False.
To simulate refresh rate under cpu load, you can specify a time to wait within the loop prior to doing the
flip(). If 0 < msDelay < 100, wait for that long in ms.
Returns timing stats (in ms) of:
• average time per frame, for all frames
• standard deviation of all frames
• median, as the average of 12 frame times around the median (~monitor refresh rate)
Author
• 2010 written by Jeremy Gray
mouseVisible
Sets the visibility of the mouse cursor. E.g.:
win.mouseVisible = False  # hide the cursor
win.mouseVisible = True  # show the cursor
nearClip
Distance to the near clipping plane in meters.
projectionMatrix
Projection matrix defined as a 4x4 numpy array.
recordFrameIntervals
Record time elapsed per frame.
Provides accurate measures of frame intervals to determine whether frames are being dropped. The in-
tervals are the times between calls to flip(). Set to True only during the time-critical parts of the
script. Set this to False while the screen is not being updated, i.e., during any slow, non-frame-time-
critical sections of your code, including inter-trial-intervals, event.waitKeys(), core.wait(), or
image.setImage().
Examples
win.recordFrameIntervals = True
win.saveFrameIntervals()
resetEyeTransform(clearDepth=True)
Restore the default projection and view settings to PsychoPy defaults. Call this prior to drawing 2D stimuli
objects (i.e. GratingStim, ImageStim, Rect, etc.) if any eye transformations were applied for the stimuli to
be drawn correctly.
Notes
• Calling flip() automatically resets the view and projection to defaults. So you don’t need to call
this unless you are mixing views.
saveFrameIntervals(fileName=None, clear=True)
Save recorded screen frame intervals to disk, as comma-separated values.
Parameters
• fileName (None or str) – None or the filename (including path if necessary) in which to
store the data. If None then ‘lastFrameIntervals.log’ will be used.
• clear (bool) – Clear the buffer in which frame intervals were stored, after saving. Default is True.
saveMovieFrames(fileName, codec=’libx264’, fps=30, clearFrames=True)
Writes any captured frames to disk.
Will write any format that is understood by PIL (tif, jpg, png, . . . )
Parameters
• filename (str) – Name of file, including path. The extension at the end of the file
determines the type of file(s) created. If an image type (e.g. .png) is given, then multiple
static frames are created. If it is .gif then an animated GIF image is created (although you
will get higher quality GIF by saving PNG files and then combining them in dedicated
image manipulation software, such as GIMP). On Windows and Linux .mpeg files can
be created if pymedia is installed. On macOS .mov files can be created if the pyobjc-
frameworks-QTKit is installed. Unfortunately the libs used for movie generation can be
flaky and poor quality. As for animated GIFs, better results can be achieved by saving as
individual .png frames and then combining them into a movie using software like ffmpeg.
• codec (str, optional) – The codec to be used by moviepy for mp4/mpg/mov files.
If None then the default will depend on file extension. Can be one of libx264, mpeg4
for mp4/mov files. Can be rawvideo, png for avi files (not recommended). Can be
libvorbis for ogv files. Default is libx264.
• fps (int, optional) – The frame rate to be used throughout the movie. Only for quicktime (.mov) movies. Default is 30.
• clearFrames (bool, optional) – Set this to False if you want the frames to be
kept for additional calls to saveMovieFrames. Default is True.
Examples
myWin.saveMovieFrames('frame.tif')
setBuffer(buffer, clear=True)
Choose which buffer to draw to (‘left’ or ‘right’).
Requires the Window to be initialised with stereo=True and requires a graphics card that supports quad buffering (e.g. nVidia Quadro series).
PsychoPy always draws to the back buffers, so ‘left’ will use GL_BACK_LEFT. The window then needs to be flipped once both eyes' buffers have been rendered.
Parameters
• buffer (str) – Buffer to draw to. Can either be ‘left’ or ‘right’.
• clear (bool, optional) – Clear the buffer before drawing. Default is True.
Examples
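A sketch of stereo rendering (leftStim and rightStim are hypothetical stimuli; the window must have been opened with stereo=True):

win = visual.Window([800, 600], stereo=True)
while True:
    win.setBuffer('left', clear=True)  # draw the left eye's image
    leftStim.draw()
    win.setBuffer('right', clear=True)  # draw the right eye's image
    rightStim.draw()
    win.flip()  # present both buffers together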
setMouseType(name=’arrow’)
Change the appearance of the cursor for this window. Cursor types provide contextual hints about how to
interact with on-screen objects.
The graphics used are ‘standard cursors’ provided by the operating system. They may vary in appearance and hot spot location across platforms. The following names are valid on most platforms:
• arrow : Default pointer.
• ibeam : Indicates text can be edited.
• crosshair : Crosshair with hot-spot at center.
• hand : A pointing hand.
• hresize : Double arrows pointing horizontally.
• vresize : Double arrows pointing vertically.
Parameters name (str) – Type of standard cursor to use (see above). Default is arrow.
Notes
• On Windows the crosshair option is negated with the background color. It will not be visible
when placed over 50% grey fields.
setPerspectiveView(applyTransform=True, **kwargs)
Set the projection and view matrix to render with perspective.
Matrices are computed using values specified in the monitor configuration with the scene origin on the
screen plane. Calculations assume units are in meters.
Note that the values of projectionMatrix and viewMatrix will be replaced when calling this
function.
Parameters
• applyTransform (bool) – Apply transformations after computing them in immediate
mode. Same as calling applyEyeTransform() afterwards.
• **kwargs – Additional arguments for applyEyeTransform().
timeOnFlip(obj, attrib)
Retrieves the time on the next flip and assigns it to the attrib for this obj.
Parameters
• obj (dict or object) – A mutable object (usually a dict or class instance).
• attrib (str) – Key or attribute of obj to assign the flip time to.
Examples
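For example, storing the flip time into a dict (myTimingDict and the key name are illustrative):

myTimingDict = {}
win.timeOnFlip(myTimingDict, 'stimOnset')  # queued for the next flip
stim.draw()
win.flip()  # myTimingDict['stimOnset'] now holds the flip time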
units
None, ‘height’ (of the window), ‘norm’, ‘deg’, ‘cm’, ‘pix’ Defines the default units of stimuli initialized in
the window. I.e. if you change units, already initialized stimuli won’t change their units.
Can be overridden by each stimulus, if units is specified on initialization.
See Units for the window and stimuli for explanation of options.
viewMatrix
View matrix defined as a 4x4 numpy array.
viewPos
The origin of the window onto which stimulus-objects are drawn.
The value should be given in the units defined for the window. NB: Never change a single component (x
or y) of the origin, instead replace the viewPos-attribute in one shot, e.g.:
win.viewPos = [new_xval, new_yval] # This is the way to do it
win.viewPos[0] = new_xval # DO NOT DO THIS! Errors will result.
waitBlanking
After a call to flip() should we wait for the blank before the script continues.
ProjectorFramePacker
class psychopy.visual.windowframepack.ProjectorFramePacker(win)
Class which packs 3 monochrome images per RGB frame.
Allowing 180Hz stimuli with DLP projectors such as TI LightCrafter 4500.
The class overrides methods of the visual.Window class to pack a monochrome image into each RGB channel. PsychoPy runs at 180 Hz while the display device runs at 60 Hz; the output projector then produces images at 180 Hz.
Frame packing can work with any projector which can operate in ‘structured light mode’ where each RGB
channel is presented sequentially as a monochrome image. Most home and office projectors cannot operate in
this mode, but projectors designed for machine vision applications typically will offer this feature.
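A minimal usage sketch (assuming the window was created with useFBO=True, which frame packing requires):

from psychopy import visual
from psychopy.visual.windowframepack import ProjectorFramePacker

win = visual.Window([800, 600], useFBO=True)
framePacker = ProjectorFramePacker(win)  # subsequent flips pack 3 frames per RGB image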
endOfFlip(clearBuffer)
Mask RGB cyclically after each flip. We ignore clearBuffer and just auto-clear after each hardware flip.
startOfFlip()
Return True if all channels of the RGB frame have been filled with monochrome images, and the associated
window should perform a hardware flip
Warper
warpfile [None or filename] A warp definition file compatible with Blender and Paul Bourke's warping tools (see http://paulbourke.net/dome/warpingfisheye/).
warpGridsize [300] Defines the resolution of the warp in both X and Y when not using a warpfile. Typical values would be 64-300, trading off tolerance for jaggies against speed.
eyepoint [[0.5, 0.5], center of the screen] Position of the eye in X and Y as a fraction of the normalized screen width and height. [0, 0] is the bottom left of the screen; [1, 1] is the top right.
flipHorizontal: True or False Flip the entire output horizontally. Useful for back-projection scenarios.
flipVertical: True or False Flip the entire output vertically. Useful if the projector is mounted upside down.
Notes
1. The eye distance from the screen is initialized from the monitor definition.
2. The eye distance can be altered dynamically by changing ‘warper.dist_cm’ and then
calling changeProjection().
Example usage to create a spherical projection:
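A sketch under those assumptions (the Warper class lives in psychopy.visual.windowwarp and requires a window created with useFBO=True):

from psychopy import visual
from psychopy.visual.windowwarp import Warper

win = visual.Window([800, 600], fullscr=True, useFBO=True)
warper = Warper(win,
                warp='spherical',
                warpfile='',
                warpGridsize=128,
                eyepoint=[0.5, 0.5],
                flipHorizontal=False,
                flipVertical=False)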
psychopy.clock.getTime()
Get the current time on the high-resolution clock. On systems where the Psychtoolbox bindings are installed this wraps GetSecs (Copyright (c) 2018 Mario Kleiner, licensed under the MIT license).
psychopy.clock.getAbsTime()
Return unix time (i.e., whole seconds elapsed since Jan 1, 1970).
This uses the same clock-base as the other timing features, like getTime(). The time (in seconds) ignores the
time-zone (like time.time() on linux). To take the timezone into account, use int(time.mktime(time.gmtime())).
Absolute times in seconds are especially useful to add to generated file names for being unique, informative (=
a meaningful time stamp), and because the resulting files will always sort as expected when sorted in chrono-
logical, alphabetical, or numerical order, regardless of locale and so on.
Version Notes: This method was added in PsychoPy 1.77.00
psychopy.clock.wait(secs, hogCPUperiod=0.2)
Wait for a given time period.
If secs=10 and hogCPUperiod=0.2 then for 9.8s python's time.sleep function will be used, which is not especially precise but allows the cpu to perform housekeeping. For the final hogCPUperiod the more precise method of constantly polling the clock is used for greater precision.
If you want to obtain key-presses during the wait, be sure to use pyglet and to hogCPU for the entire time, and
then call psychopy.event.getKeys() after calling wait()
If you want to suppress checking for pyglet events during the wait, do this once:
core.checkPygletDuringWait = False
core.wait(sec)
class psychopy.clock.Clock
A convenient class to keep track of time in your experiments, which can be reset at any point.
add(t)
Add more time to the clock's 'start' time, making the current time appear smaller (getTime() can then return a negative number that counts back up to zero). Typical usage:
timer = core.Clock()
timer.add(5)
while timer.getTime() < 0:
    # do something
reset(newT=0.0)
Reset the time on the clock. With no args time will be set to zero. If a float is received this will be the new
time on the clock
class psychopy.clock.CountdownTimer(start=0)
Similar to a Clock except that time counts down from the time of last reset
Typical usage:
timer = core.CountdownTimer(5)
while timer.getTime() > 0: # after 5s will become negative
# do stuff
getTime()
Returns the current time left on this timer in secs (sub-ms precision)
reset(t=None)
Reset the time on the clock. With no args time will be set to zero. If a float is received this will be the new
time on the clock
class psychopy.clock.MonotonicClock(start_time=None)
A convenient class to keep track of time in your experiments using a sub-millisecond timer.
Unlike the Clock this cannot be reset to arbitrary times. For this clock t=0 always represents the time that the
clock was created.
Don’t confuse this class with core.monotonicClock which is an instance of it that got created when Psy-
choPy.core was imported. That clock instance is deliberately designed always to return the time since the
start of the study.
Version Notes: This class was added in PsychoPy 1.77.00
getLastResetTime()
Returns the current offset being applied to the high resolution timebase used by Clock.
getTime(applyZero=True)
Returns the current time on this clock in secs (sub-ms precision).
If applying zero then this will be the time since the clock was created (typically the beginning of the script).
If not applying zero then it is whatever the underlying clock uses as its base time but that is system
dependent. e.g. can be time since reboot, time since Unix Epoch etc
class psychopy.clock.StaticPeriod(screenHz=None, win=None, name=’StaticPeriod’)
A class to help insert a timing period that includes code to be run.
Typical usage:
fixation.draw()
win.flip()
ISI = StaticPeriod(screenHz=60)
ISI.start(0.5) # start a period of 0.5s
stim.image = 'largeFile.bmp' # could take some time
ISI.complete() # finish the 0.5s, taking into account one 60Hz frame
stim.draw()
win.flip() # the period takes into account the next frame flip
# time should now be at exactly 0.5s later than when ISI.start()
# was called
Parameters
• screenHz – the frame rate of the monitor (leave as None if you don’t want this accounted
for)
• win – if a visual.Window is given then StaticPeriod will also pause/restart frame interval
recording
• name – give this StaticPeriod a name for more informative logging messages
complete()
Completes the period, using up whatever time is remaining with a call to wait()
Returns 1 for success, 0 for fail (the period overran)
start(duration)
Start the period. If this is called a second time, the timer will be reset and starts again
Parameters duration – The duration of the period, in seconds.
8.4.1 ExperimentHandler
_getAllParamNames()
Returns the attribute names of loop parameters (trialN etc) that the current set of loops contain, ready to
build a wide-format data file.
_getExtraInfo()
Get the names and vals from the extraInfo dict (if it exists)
_getLoopInfo(loop)
Returns the attribute names and values for the current trial of a particular loop. Does not return data inputs
from the subject, only info relating to the trial execution.
abort()
Inform the ExperimentHandler that the run was aborted.
Experiment handler will attempt automatically to save data (even in the event of a crash if possible). So if
you quit your script early you may want to tell the Handler not to save out the data files for this run. This
is the method that allows you to do that.
addData(name, value)
Add the data with a given name to the current experiment.
Typically the user does not need to use this function; if you added your data to the loop and had already
added the loop to the experiment then the loop will automatically inform the experiment that it has received
data.
Multiple data name/value pairs can be added to any given entry of the data file and are considered part of the same entry until the nextEntry() call is made.
e.g.:
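A sketch (exp is an ExperimentHandler; the column names are illustrative):

exp.addData('resp.key', 'left')
exp.addData('resp.rt', 0.812)
exp.nextEntry()  # both values are written as part of the same entry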
addLoop(loopHandler)
Add a loop such as a TrialHandler or StairHandler Data from this loop will be included in the
resulting data files.
close()
getAllEntries()
Fetches a copy of all the entries including a final (orphan) entry if that exists. This allows entries to be
saved even if nextEntry() is not yet called.
Returns copy (not pointer) to entries
loopEnded(loopHandler)
Informs the experiment handler that the loop is finished and not to include its values in further entries of
the experiment.
This method is called by the loop itself if it ends its iterations, so is not typically needed by the user.
nextEntry()
Calling nextEntry indicates to the ExperimentHandler that the current trial has ended and so further ad-
dData() calls correspond to the next trial.
saveAsPickle(fileName, fileCollisionMethod=’rename’)
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
8.4.2 TrialHandler
A class for handling trial sequencing and data storage. After creation you'll find that the handler has the attributes listed under 'Attributes (after creation)' below.
Parameters
trialList: a simple list (or flat array) of dictionaries specifying conditions. This can be im-
ported from an excel/csv file using importConditions()
nReps: number of repeats for all conditions
method: ‘random’, ‘sequential’, or ‘fullRandom’ ‘sequential’ obviously presents the condi-
tions in the order they appear in the list. ‘random’ will result in a shuffle of the conditions
on each repeat, but all conditions occur once before the second repeat etc. ‘fullRandom’
fully randomises the trials across repeats as well, which means you could potentially run all
trials of one condition before any trial of another.
dataTypes: (optional) list of names for data storage. e.g. [‘corr’,’rt’,’resp’]. If not provided
then these will be created as needed during calls to addData()
extraInfo: A dictionary This will be stored alongside the data and usually describes the exper-
iment and subject ID, date etc.
seed: an integer If provided then this fixes the random number generator to use the same pat-
tern of trials, by seeding its startpoint
originPath: a string describing the location of the script / experiment file path. The psydat file format will store a copy of the experiment if possible. If originPath==None is provided here then the TrialHandler will still store a copy of the script where it was created. If originPath==-1 then nothing will be stored.
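A creation sketch (the condition dictionaries and counts are illustrative):

from psychopy import data

conditions = [{'ori': 0, 'sf': 2}, {'ori': 90, 'sf': 2}]
trials = data.TrialHandler(trialList=conditions, nReps=5, method='random')
for thisTrial in trials:  # fetches a condition dict per trial
    print(thisTrial['ori'], thisTrial['sf'])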
Attributes (after creation)
.data - a dictionary (or more strictly, a DataHandler sub- class of a dictionary) of numpy ar-
rays, one for each data type stored
.trialList - the original list of dicts, specifying the conditions
.thisIndex - the index of the current trial in the original conditions list
.nTotal - the total number of trials that will be run
.nRemaining - the total number of trials remaining
.thisN - total trials completed so far
.thisRepN - which repeat you are currently on
.thisTrialN - which trial number within that repeat
.thisTrial - a dictionary giving the parameters of the current trial
.finished - True/False for have we finished yet
.extraInfo - the dictionary of extra info as given at beginning
.origin - the contents of the script or builder experiment that created the handler
_createOutputArray(stimOut, dataOut, delim=None, matrixOnly=False)
Does the leg-work for saveAsText and saveAsExcel. Combines stimOut with ._parseDataOutput()
_createOutputArrayData(dataOut)
This just creates the dataOut part of the output matrix. It is called by _createOutputArray() which creates
the header line and adds the stimOut columns
_createSequence()
Pre-generates the sequence of trial presentations (for non-adaptive methods). This is called automatically
when the TrialHandler is initialised so doesn’t need an explicit call from the user.
The returned sequence has the form indices[stimN][repN]. Example: sequential with 6 trialtypes (rows), 5 reps (cols), returns:
[[0 0 0 0 0]
 [1 1 1 1 1]
 [2 2 2 2 2]
 [3 3 3 3 3]
 [4 4 4 4 4]
 [5 5 5 5 5]]
To add a new type of sequence (as of v1.65.02):
• add the sequence generation code here
• adjust "if self.method in [ ... ]:" in both __init__ and .next()
• adjust allowedVals in experiment.py -> shows up in DlgLoopProperties
Note that users can make any sequence whatsoever outside of PsychoPy and specify sequential order; any order is possible this way.
_makeIndices(inputArray)
Creates an array of tuples the same shape as the input array where each tuple contains the indices to itself
in the array.
Useful for shuffling and then using as a reference.
_terminate()
Remove references to ourself in experiments and terminate the loop
addData(thisType, value, position=None)
Add data for the current trial
getEarlierTrial(n=-1)
Returns the condition information from n trials previously. Useful for comparisons in n-back tasks. Returns
‘None’ if trying to access a trial prior to the first.
getExp()
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
getFutureTrial(n=1)
Returns the condition for n trials into the future, without advancing the trials. A negative n returns a
previous (past) trial. Returns ‘None’ if attempting to go beyond the last trial.
getOriginPathAndFile(originPath=None)
Attempts to determine the path of the script that created this data file and returns both the path to that script
and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath
(fine from a standard python script).
next()
Advances to next trial and returns it. Updates attributes thisTrial, thisTrialN and thisIndex. If the trials have ended this method will raise a StopIteration error. This can be handled with code such as:
trials = data.TrialHandler(.......)
for eachTrial in trials: # automatically stops when done
# do stuff
or:
trials = data.TrialHandler(.......)
while True: # ie forever
try:
thisTrial = trials.next()
except StopIteration: # we got a StopIteration error
break #break out of the forever loop
# do stuff here for the trial
saveAsExcel(fileName, sheetName='rawData', stimOut=None, dataOut=('n', 'all_mean', 'all_std', 'all_raw'), matrixOnly=False, appendFile=True, fileCollisionMethod='rename')
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.
The file extension .xlsx will be added if not given already.
Parameters
fileName: string the name of the file to create or append. Can include relative or absolute
path
sheetName: string the name of the worksheet within the file
stimOut: list of strings the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries as the trialList parameter of the TrialHandler and to give here the names (as strings) of the keys in those dictionaries that you want output
dataOut: list of strings specifying the dataType and the analysis to be performed, in the
form dataType_analysis. The data can be any of the types that you added using trialHan-
dler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library,
including ‘mean’,’std’,’median’,’max’,’min’. e.g. rt_max will give a column of max reac-
tion times across the trials assuming that rt values have been stored. The default values
will output the raw, mean and std of all datatypes found.
appendFile: True or False If False any existing file with this name will be overwritten. If
True then a new worksheet will be appended. If a worksheet already exists with that name
a number will be added to make it unique.
fileCollisionMethod: string Collision method passed to handleFileCollision()
This is ignored if append is True.
saveAsJson(fileName=None, encoding=’utf-8’, fileCollisionMethod=’rename’)
Serialize the object to the JSON format.
Parameters
• fileName (string, or None) – the name of the file to create or append. Can in-
clude a relative or absolute path. If None, will not write to a file, but return an in-memory
JSON object.
• encoding (string, optional) – The encoding to use when writing the file.
• fileCollisionMethod (string) – Collision method passed to
handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before
serializing because loading the created JSON file would sometimes fail otherwise.
saveAsPickle(fileName, fileCollisionMethod=’rename’)
Basically just saves a copy of the handler (with data) to a pickle file.
This can be reloaded if necessary and further analyses carried out.
Parameters fileCollisionMethod: Collision method passed to handleFileCollision()
saveAsText(fileName, stimOut=None, dataOut=(’n’, ’all_mean’, ’all_std’, ’all_raw’), delim=None,
matrixOnly=False, appendFile=True, summarised=True, fileCollisionMethod=’rename’,
encoding=’utf-8-sig’)
Write a text file with the data and various chosen stimulus attributes
Parameters
fileName: will have .tsv appended and can include path info.
stimOut: the stimulus attributes to be output. To use this you need to have used a list of dictionaries as the trialList and to give here the names (as strings) of the dictionary keys that you want output
dataOut: a list of strings specifying the dataType and the analysis to be performed,in the form
dataType_analysis. The data can be any of the types that you added using trialHan-
dler.data.add() and the analysis can be either ‘raw’ or most things in the numpy library, including;
‘mean’,’std’,’median’,’max’,’min’. . . The default values will output the raw, mean and std of all
datatypes found
delim: allows the user to use a delimiter other than tab (“,” is popular with file extension “.csv”)
matrixOnly: outputs the data with no header row or extraInfo attached
appendFile: will add this output to the end of the specified file if it already exists
fileCollisionMethod: Collision method passed to handleFileCollision()
encoding: The encoding to use when saving the file. Defaults to utf-8-sig.
saveAsWideText(fileName, delim=None, matrixOnly=False, appendFile=True, encoding='utf-8-sig', fileCollisionMethod='rename')
Write a text file with the session, stimulus, and data values from each trial in chronological order.
Parameters
fileName: if extension is not specified, ‘.csv’ will be appended if the delimiter is ‘,’, else
‘.tsv’ will be appended. Can include path info.
delim: allows the user to use a delimiter other than the default tab (“,” is popular with file
extension “.csv”)
matrixOnly: outputs the data with no header row.
appendFile: will add this output to the end of the specified file if it already exists.
fileCollisionMethod: Collision method passed to handleFileCollision()
encoding: The encoding to use when saving the file. Defaults to utf-8-sig.
setExp(exp)
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
8.4.3 StairHandler
Notes
The additional keyword arguments **kwargs might for example be passed by the MultiStairHandler, which
expects a label keyword for each staircase. These parameters are to be ignored by the StairHandler.
_intensityDec()
Decrement the current intensity and reset the counter.
_intensityInc()
Increment the current intensity and reset the counter.
_terminate()
Remove references to ourself in experiments and terminate the loop
addData(result, intensity=None)
Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:
.addResponse(result, intensity)
.addOtherData('dataName', value)
addOtherData(dataName, value)
Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the
staircase
addResponse(result, intensity=None)
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial
This is essential to advance the staircase to a new intensity level!
Supplying an intensity value here indicates that you did not use the recommended intensity in your last
trial and the staircase will replace its recorded value with the one you supplied here.
calculateNextIntensity()
Based on current intensity, counter of correct responses, and current direction.
getExp()
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
getOriginPathAndFile(originPath=None)
Attempts to determine the path of the script that created this data file and returns both the path to that script
and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath
(fine from a standard python script).
next()
Advances to next trial and returns it. Updates attributes thisTrial, thisTrialN and thisIndex.
If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code
such as:
staircase = data.StairHandler(.......)
for eachTrial in staircase: # automatically stops when done
# do stuff
or:
staircase = data.StairHandler(.......)
while True: # ie forever
try:
thisTrial = staircase.next()
except StopIteration: # we got a StopIteration error
break # break out of the forever loop
# do stuff here for the trial
saveAsJson(fileName=None, encoding='utf-8', fileCollisionMethod='rename')
Serialize the object to the JSON format.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before
serializing because loading the created JSON file would sometimes fail otherwise.
saveAsPickle(fileName, fileCollisionMethod=’rename’)
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necess and further analyses carried out.
trials._exp = myExperiment
8.4.4 MultiStairHandler
Example usage:
conditions=[
{'label':'low', 'startVal': 0.1, 'ori':45},
{'label':'high','startVal': 0.8, 'ori':45},
{'label':'low', 'startVal': 0.1, 'ori':90},
{'label':'high','startVal': 0.8, 'ori':90},
]
stairs = data.MultiStairHandler(conditions=conditions, nTrials=50)
_startNewPass()
Create a new iteration of the running staircases for this pass.
This is not normally needed by the user - it gets called at __init__ and every time that next() runs out of
trials for this pass.
_terminate()
Remove references to ourself in experiments and terminate the loop
addData(result, intensity=None)
Deprecated 1.79.00: It was ambiguous whether you were adding the response (0 or 1) or some other data
concerning the trial so there is now a pair of explicit methods:
addResponse(corr,intensity) #some data that alters the next trial value
addOtherData(‘RT’, reactionTime) #some other data that won’t control staircase
addOtherData(name, value)
Add some data about the current trial that will not be used to control the staircase(s) such as reaction time
data
addResponse(result, intensity=None)
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial
This is essential to advance the staircase to a new intensity level!
getExp()
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
getOriginPathAndFile(originPath=None)
Attempts to determine the path of the script that created this data file and returns both the path to that script
and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath
(fine from a standard python script).
next()
Advances to next trial and returns it.
This can be handled with code such as:
staircase = data.MultiStairHandler(.......)
for eachTrial in staircase: # automatically stops when done
# do stuff here for the trial
or:
staircase = data.MultiStairHandler(.......)
while True: # ie forever
try:
thisTrial = staircase.next()
except StopIteration: # we got a StopIteration error
break # break out of the forever loop
# do stuff here for the trial
printAsText(delim=’\t’, matrixOnly=False)
Write the data to the standard output stream
Parameters
delim: a string the delimitter to be used (e.g. ‘ ‘ for tab-delimitted, ‘,’ for csv files)
matrixOnly: True/False If True, prevents the output of the extraInfo provided at initialisa-
tion.
saveAsExcel(fileName, matrixOnly=False, appendFile=False, fileCollisionMethod=’rename’)
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet
packages. This format is compatible with versions of Excel (2007 or greater) and and with OpenOffice
(>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText() ) that the data
from each staircase will be save in the same file, with the sheet name coming from the ‘label’ given in the
dictionary of conditions during initialisation of the Handler.
The file extension .xlsx will be added if not given already.
The file will contain a set of values specifying the staircase level (‘intensity’) at each reversal, a list of
reversal indices (trial numbers), the raw staircase/intensity level on every trial and the corresponding re-
sponses of the participant on every trial.
Parameters
fileName: string the name of the file to create or append. Can include relative or absolute
path
matrixOnly: True or False If set to True then only the data itself will be output (no addi-
tional info)
appendFile: True or False If False any existing file with this name will be overwritten. If
True then a new worksheet will be appended. If a worksheet already exists with that name
a number will be added to make it unique.
fileCollisionMethod: string Collision method passed to handleFileCollision()
This is ignored if append is True.
saveAsJson(fileName=None, encoding=’utf-8-sig’, fileCollisionMethod=’rename’)
Serialize the object to the JSON format.
Parameters
• fileName (string, or None) – the name of the file to create or append. Can in-
clude a relative or absolute path. If None, will not write to a file, but return an in-memory
JSON object.
• encoding (string, optional) – The encoding to use when writing the file.
• fileCollisionMethod (string) – Collision method passed to
handleFileCollision(). Can be either of ‘rename’, ‘overwrite’, or ‘fail’.
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before
serializing because loading the created JSON file would sometimes fail otherwise.
saveAsPickle(fileName, fileCollisionMethod=’rename’)
Saves a copy of self (with data) to a pickle file.
This can be reloaded later and further analyses carried out.
Parameters fileCollisionMethod: Collision method passed to handleFileCollision()
saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod=’rename’,
encoding=’utf-8-sig’)
Write out text files with the data.
For MultiStairHandler this will output one file for each staircase that was run, with _label added to the
fileName that you specify above (label comes from the condition dictionary you specified when you created
the Handler).
Parameters
fileName: a string The name of the file, including path if needed. The extension .tsv will
be added if not included.
delim: a string the delimitter to be used (e.g. ‘ ‘ for tab-delimitted, ‘,’ for csv files)
matrixOnly: True/False If True, prevents the output of the extraInfo provided at initialisa-
tion.
fileCollisionMethod: Collision method passed to handleFileCollision()
encoding: The encoding to use when saving a the file. Defaults to utf-8-sig.
setExp(exp)
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
8.4.5 QuestHandler
Threshold ‘t’ is measured on an abstract ‘intensity’ scale, which usually corresponds to log10 contrast.
The Weibull psychometric function:
_e = -10**(beta * (x2 + xThreshold)) p2 = delta * gamma + (1-delta) * (1 - (1 - gamma) * exp(_e))
Example:
# setup display/window
...
# create stimulus
stimulus = visual.RadialStim(win=win, tex='sinXsin', size=1,
pos=[0,0], units='deg')
...
# create staircase object
# trying to find out the point where subject's response is 50 / 50
# if wanted to do a 2AFC then the defaults for pThreshold and gamma
# are good. As start value, we'll use 50% contrast, with SD = 20%
staircase = data.QuestHandler(0.5, 0.2,
pThreshold=0.63, gamma=0.01,
nTrials=20, minVal=0, maxVal=1)
...
while thisContrast in staircase:
# setup stimulus
stimulus.setContrast(thisContrast)
stimulus.draw()
win.flip()
core.wait(0.5)
# get response
...
# inform QUEST of the response, needed to calculate next level
staircase.addResponse(thisResp)
...
# can now access 1 of 3 suggested threshold levels
staircase.mean()
staircase.mode()
staircase.quantile(0.5) # gets the median
stopInterval: None or a number The minimum 5-95% confidence interval required in the
threshold estimate before stopping. If both this and nTrials is specified, whichever happens
first will determine when Quest will stop.
method: ‘quantile’, ‘mean’, ‘mode’ The method used to determine the next threshold to test.
If you want to get a specific threshold level at the end of your staircasing, please use the
quantile, mean, and mode methods directly.
beta: 3.5 or a number Controls the steepness of the psychometric function.
delta: 0.01 or a number The fraction of trials on which the observer presses blindly.
gamma: 0.5 or a number The fraction of trials that will generate response 1 when intensity=-
Inf.
grain: 0.01 or a number The quantization of the internal table.
range: None, or a number The intensity difference between the largest and smallest intensity
that the internal table can store. This interval will be centered on the initial guess tGuess.
QUEST assumes that intensities outside of this range have zero prior probability (i.e., they
are impossible).
extraInfo: A dictionary (typically) that will be stored along with collected data using
saveAsPickle() or saveAsText() methods.
minVal: None, or a number The smallest legal value for the staircase, which can be used to
prevent it reaching impossible contrast values, for instance.
maxVal: None, or a number The largest legal value for the staircase, which can be used to
prevent it reaching impossible contrast values, for instance.
staircase: None or StairHandler Can supply a staircase object with intensities and results.
Might be useful to give the quest algorithm more information if you have it. You can also
call the importData function directly.
Additional keyword arguments will be ignored.
Notes
The additional keyword arguments **kwargs might for example be passed by the MultiStairHandler, which
expects a label keyword for each staircase. These parameters are to be ignored by the StairHandler.
_checkFinished()
checks if we are finished Updates attribute: finished
_intensity()
assigns the next intensity level
_intensityDec()
decrement the current intensity and reset counter
_intensityInc()
increment the current intensity and reset counter
_terminate()
Remove references to ourself in experiments and terminate the loop
addData(result, intensity=None)
Deprecated since 1.79.00: This function name was ambiguous. Please use one of these instead:
.addResponse(result, intensity) .addOtherData(‘dataName’, value’)
addOtherData(dataName, value)
Add additional data to the handler, to be tracked alongside the result data but not affecting the value of the
staircase
addResponse(result, intensity=None)
Add a 1 or 0 to signify a correct / detected or incorrect / missed trial
Supplying an intensity value here indicates that you did not use the recommended intensity in your last
trial and the staircase will replace its recorded value with the one you supplied here.
calculateNextIntensity()
based on current intensity and counter of correct responses
confInterval(getDifference=False)
Return estimate for the 5%–95% confidence interval (CI).
Parameters
getDifference (bool) If True, return the width of the confidence interval (95% - 5% per-
centiles). If False, return an NumPy array with estimates for the 5% and 95% bound-
aries.
Returns scalar or array of length 2.
getExp()
Return the ExperimentHandler that this handler is attached to, if any. Returns None if not attached
getOriginPathAndFile(originPath=None)
Attempts to determine the path of the script that created this data file and returns both the path to that script
and its contents. Useful to store the entire experiment with the data.
If originPath is provided (e.g. from Builder) then this is used otherwise the calling script is the originPath
(fine from a standard python script).
importData(intensities, results)
import some data which wasn’t previously given to the quest algorithm
incTrials(nNewTrials)
increase maximum number of trials Updates attribute: nTrials
mean()
mean of Quest posterior pdf
mode()
mode of Quest posterior pdf
next()
Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN, thisIndex, finished, intensities
If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code
such as:
staircase = data.QuestHandler(.......)
for eachTrial in staircase: # automatically stops when done
# do stuff
or:
staircase = data.QuestHandler(.......)
while True: # i.e. forever
try:
thisTrial = staircase.next()
(continues on next page)
Notes
Currently, a copy of the object is created, and the copy’s .origin attribute is set to an empty string before
serializing because loading the created JSON file would sometimes fail otherwise.
saveAsPickle(fileName, fileCollisionMethod=’rename’)
Basically just saves a copy of self (with data) to a pickle file.
This can be reloaded if necess and further analyses carried out.
Parameters fileCollisionMethod: Collision method passed to handleFileCollision()
saveAsText(fileName, delim=None, matrixOnly=False, fileCollisionMethod=’rename’,
encoding=’utf-8-sig’)
Write a text file with the data
Parameters
fileName: a string The name of the file, including path if needed. The extension .tsv will
be added if not included.
delim: a string the delimitter to be used (e.g. ‘ ‘ for tab-delimitted, ‘,’ for csv files)
matrixOnly: True/False If True, prevents the output of the extraInfo provided at initialisa-
tion.
fileCollisionMethod: Collision method passed to handleFileCollision()
encoding: The encoding to use when saving a the file. Defaults to utf-8-sig.
sd()
standard deviation of Quest posterior pdf
setExp(exp)
Sets the ExperimentHandler that this handler is attached to
Do NOT attempt to set the experiment using:
trials._exp = myExperiment
8.4.6 FitWeibull
x = alpha * (-log((1.0-y)/(1-chance)))**(1.0/beta)
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of
the function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [alpha,
beta])
_doFit()
The Fit class that derives this needs to specify its _evalFunction
eval(xx, params=None)
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
inverse(yy, params=None)
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
8.4.7 FitLogistic
y = chance + (1-chance)/(1+exp((PSE-xx)*JND))
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the
function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [PSE, JND])
_doFit()
The Fit class that derives this needs to specify its _evalFunction
eval(xx, params=None)
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
inverse(yy, params=None)
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
8.4.8 FitNakaRushton
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the
function with fit.inverse(y) or retrieve the parameters from fit.params (a list with [rMin, rMax,
c50, n])
Note that this differs from most of the other functions in not using a value for the expected minimum. Rather, it
fits this as one of the parameters of the model.
_doFit()
The Fit class that derives this needs to specify its _evalFunction
eval(xx, params=None)
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
inverse(yy, params=None)
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
8.4.9 FitCumNormal
y = chance + (1-chance)*((special.erf((xx-xShift)/(sqrt(2)*sd))+1)*0.5)
x = xShift+sqrt(2)*sd*(erfinv(((yy-chance)/(1-chance)-.5)*2))
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function
with fit.inverse(y) or retrieve the parameters from fit.params (a list with [centre, sd] for the Gaussian distribution
forming the cumulative)
NB: Prior to version 1.74 the parameters had different meaning, relating to xShift and slope of the function
(similar to 1/sd). Although that is more in with the parameters for the Weibull fit, for instance, it is less in
keeping with standard expectations of normal (Gaussian distributions) so in version 1.74.00 the parameters
became the [centre,sd] of the normal distribution.
_doFit()
The Fit class that derives this needs to specify its _evalFunction
eval(xx, params=None)
Evaluate xx for the current parameters of the model, or for arbitrary params if these are given.
inverse(yy, params=None)
Evaluate yy for the current parameters of the model, or for arbitrary params if these are given.
8.4.10 importConditions()
8.4.11 functionFromStaircase()
where:
intensities are a list (or array) of intensities to be binned
responses are a list of 0,1 each corresponding to the equivalent intensity value
bins can be an integer (giving that number of bins) or ‘unique’ (each bin is made from aa data for exactly
one intensity value)
intensity a numpy array of intensity values (where each is the center of an intensity bin)
meanCorrect a numpy array of mean % correct in each bin
n a numpy array of number of responses contributing to each mean
8.4.12 bootStraps()
psychopy.data.bootStraps(dat, n=1)
Create a list of n bootstrapped resamples of the data
SLOW IMPLEMENTATION (Python for-loop)
Usage: out = bootStraps(dat, n=1)
Where:
dat an NxM or 1xN array (each row is a different condition, each column is a different trial)
n number of bootstrapped resamples to create
out
• dim[0]=conditions
• dim[1]=trials
• dim[2]=resamples
8.5 Encryption
Some labs may wish to better protect their data from casual inspection or accidental disclosure. This is possible within
PsychoPy using a separate python package, pyFileSec, which grew out of PsychoPy. pyFileSec is distributed with
the StandAlone versions of PsychoPy, or can be installed using pip or easy_install via https://pypi.python.org/pypi/
PyFileSec/
Some elaboration of pyFileSec usage and security strategy can be found here: http://pythonhosted.org/PyFileSec
Basic usage is illustrated in the Coder demo > misc > encrypt_data.py
buttons = mouse.getPressed()
buttons, times = mouse.getPressed(getTime=True)
Typically you want to call mouse.clickReset() at stimulus onset, then after the button is pressed in reaction
to it, the total time elapsed from the last reset to click is in mouseTimes. This is the actual RT, regardless
of when the call to getPressed() was made.
getRel()
Returns the new position of the mouse relative to the last call to getRel or getPos, in the same units as the
Window.
getVisible()
Gets the visibility of the mouse (1 or 0)
getWheelRel()
Returns the travel of the mouse scroll wheel since last call. Returns a numpy.array(x,y) but for most wheels
y is the only value that will change (except Mac mighty mice?)
if mouse.isPressedIn(shape):
if mouse.isPressedIn(shape, buttons=[0]): # left-clicks only
Ideally, shape can be anything that has a .contains() method, like ShapeStim or Polygon. Not tested with
ImageStim.
mouseMoveTime()
mouseMoved(distance=None, reset=False)
Determine whether/how far the mouse has moved.
With no args returns true if mouse has moved at all since last getPos() call, or distance (x,y) can be set to
pos or neg distances from x and y to see if moved either x or y that far from lastPos, or distance can be an
int/float to test if new coordinates are more than that far in a straight line from old coords.
Retrieve time of last movement from self.mouseClock.getTime().
Reset can be to ‘here’ or to screen coords (x,y) which allows measuring distance from there to mouse
when moved. If reset is (x,y) and distance is set, then prevPos is set to (x,y) and distance from (x,y) to here
is checked, mouse.lastPos is set as current (x,y) by getPos(), mouse.prevPos holds lastPos from last time
mouseMoved was called.
setExclusive(exclusivity)
Binds the mouse to the experiment window. Only works in Pyglet.
In multi-monitor settings, or with a window that is not fullscreen, the mouse pointer can drift, and thereby
PsychoPy might not get the events from that window. setExclusive(True) works with Pyglet to bind the
mouse to the experiment window.
Note that binding the mouse pointer to a window will cause the pointer to vanish, and absolute positions
will no longer be meaningful getPos() returns [0, 0] in this case.
setPos(newPos=(0, 0))
Sets the current position of the mouse, in the same units as the Window. (0,0) is the center.
Parameters
newPos [(x,y) or [x,y]] the new position on the screen
setVisible(visible)
Sets the visibility of the mouse to 1 or 0
NB when the mouse is not visible its absolute position is held at (0, 0) to prevent it from going off the
screen and getting lost! You can still use getRel() in that case.
units
The units for this mouse (will match the current units for the Window it lives in)
psychopy.event.clearEvents(eventType=None)
Clears all events currently in the event buffer.
Optional argument, eventType, specifies only certain types to be cleared.
Parameters
eventType [None, ‘mouse’, ‘joystick’, ‘keyboard’] If this is not None then only events of the
given type are cleared
This module has moved to psychopy.visual.filters but you can still (currently) import it as psychopy.filters
8.8.1 DlgFromDict
:param :::
info = {‘Observer’:’jwp’, ‘GratingOri’:45, ‘ExpVersion’: 1.1, ‘Group’: [‘Test’, ‘Control’]}
infoDlg = gui.DlgFromDict(dictionary=info, title=’TestExperiment’, fixed=[‘ExpVersion’])
if infoDlg.OK: print(info)
else: print(‘User Cancelled’)
Parameters
• the code above, the contents of info will be updated to the
values (In) –
show()
Display the dialog.
8.8.2 Dlg
8.8.3 fileOpenDlg()
8.8.4 fileSaveDlg()
PsychoPy can access a wide range of external hardware. For some devices the interface has already been created in the
following sub-packages of PsychoPy. For others you may need to write the code to access the serial port etc. manually.
Contents:
The pyxid package, written by Cedrus, is included in the Standalone PsychoPy distributions. See https://github.com/
cedrus-opensource/pyxid for further info.
Example usage:
import pyxid
while True:
dev.poll_for_response()
if dev.response_queue_size() > 0:
response = dev.get_next_response()
# do something with the response
Useful functions
Device classes
BitsPlusPlus
Control a CRS Bits# device. See typical usage in the class summary (and in the menu demos>hardware>BitsBox of
PsychoPy’s Coder view).
Important: See note on BitsPlusPlusIdentityLUT
Attributes
Details
Parameters
contrast : The contrast to be applied to the LUT. See BitsPlusPlus.setLUT() and
BitsPlusPlus.setContrast() for flexibility on setting just a section of the LUT
to a different value
gamma : The value used to correct the gamma in the LUT
nEntries [256] [DEPRECATED feature]
mode [‘bits++’ (or ‘mono++’ or ‘color++’)] Note that, unlike the Bits#, this only affects the
way the window is rendered, it does not switch the state of the Bits++ device itself (because
unlike the Bits# have no way to communicate with it). The mono++ and color++ are only
supported in PsychoPy 1.82.00 onwards. Even then they suffer from not having gamma
correction applied on Bits++ (unlike Bits# which can apply a gamma table in the device
hardware).
rampType [‘configFile’, None or an integer] if ‘configFile’ then we’ll look for a
valid config in the userPrefs folder if an integer then this will be used during
win.setGamma(rampType=rampType):
frameRate : an estimate the frameRate of the monitor. If None frame rate will be calculated.
_Goggles()
(private) Used to set control the goggles. Should not be needed by user if attached to a psychopy.
visual.Window()
_ResetClock()
(private) Used to reset Bits hardware clock. Should not be needed by user if attached to a psychopy.
visual.Window() since this will automatically draw the reset code as part of the screen refresh.
_drawLUTtoScreen()
(private) Used to set the LUT in ‘bits++’ mode. Should not be needed by user if attached to a psychopy.
visual.Window() since this will automatically draw the LUT as part of the screen refresh.
_drawTrigtoScreen(sendStr=None)
(private) Used to send a trigger pulse. Should not be needed by user if attached to a psychopy.visual.
Window() since this will automatically draw the trigger code as part of the screen refresh.
_protectTrigger()
If Goggles (or analog) outputs are used when the digital triggers are off we need to make a set of blank
triggers first. But the user might have set up triggers in waiting for a later time. So this will protect them.
_restoreTrigger()
Restores the triggers to previous settings
_setHeaders(frameRate)
Sets up the TLock header codes and some flags that are common to operating all CRS devices
_setupShaders()
creates and stores the shader programs needed for mono++ and color++ modes
getPackets()
Returns the number of packets available for trigger pulses.
primeClock()
Primes the clock to reset at the next screen flip - note only 1 clock reset signal will be issued but if the
frame(s) after the reset frame is dropped the reset will be re-issued thus keeping timing good.
Resets continute to be issued on each video frame until the next win.flip so you need to have regular
win.flips for this function to work properly.
Example bits.primeClock() drawImage while not response
#do some processing bits.win.flip()
Will get a clock reset signal ready but wont issue it until the first win.flip in the loop.
reset()
Deprecated: This was used on the old Bits++ to power-cycle the box. It required the compiled dll, which
only worked on windows and doesn’t work with Bits# or Display++.
resetClock()
Issues a clock reset code using 1 screen flip if the next frame(s) is dropped the reset will be re-issued thus
keeping timing good.
Resets continute to be issued on each video frame until the next win.flip so you need to have regular
win.flips for this function to work properly.
Example
Example
Example
Examples
setContrast(1.0,0.5) will set the central 50% of the LUT so that a stimulus with contr=0.5 will
actually be drawn with contrast 1.0
setContrast(1.0,[0.25,0.5]) setContrast(1.0,[63,127])
will set the lower-middle quarter of the LUT (which might be useful in LUT animation
paradigms)
setGamma(newGamma)
Set the LUT to have the requested gamma value Currently also resets the LUT to be a linear contrast ramp
spanning its full range. May change this to read the current LUT, undo previous gamma and then apply
new one?
setLUT(newLUT=None, gammaCorrect=True, LUTrange=1.0)
Sets the LUT to a specific range of values in ‘bits++’ mode only Note that, if you leave gammaCor-
rect=True then any LUT values you supply will automatically be gamma corrected. The LUT will take
effect on the next Window.flip() Examples:
bitsBox.setLUT() builds a LUT using bitsBox.contrast and bitsBox.gamma
bitsBox.setLUT(newLUT=some256x1array) (NB array should be float 0.0:1.0)
Builds a luminance LUT using newLUT for each gun (actually array can be 256x1 or 1x256)
bitsBox.setLUT(newLUT=some256x3array) (NB array should be float 0.0:1.0) Al-
lows you to use a different LUT on each gun
(NB by using BitsBox.setContr() and BitsBox.setGamma() users may not need this function)
setTrigger(triggers=0, onTime=0, duration=0, mask=65535)
Quick way to set up triggers.
Triggers is a binary word that determines which triggers will be turned on.
onTime specifies the start time of the trigger within the frame (in S with 100uS resolution)
Duration specifies how long the trigger will last. (in S with 100uS resolution).
Note that mask only protects the digital output lines set by other activities in the Bits. Not other triggers.
Example
Example
Example
Example
Example
Example
Example
For the Bits++ (and related) devices to work correctly it is essential that the graphics card is not altering
in any way the values being passed to the monitor (e.g. by gamma correcting). It turns out that finding
the ‘identity’ LUT, where exactly the same values come out as were put in, is not trivial. The obvious
LUT would have something like 0/255, 1/255, 2/255. . . in entry locations 0,1,2. . . but unfortunately most
graphics cards on most operating systems are ‘broken’ in one way or another, with rounding errors and
incorrect start points etc.
PsychoPy provides a few of the common variants of LUT and that can be chosen when you initialise the
device using the parameter rampType. If no rampType is specified then PsychoPy will choose one for you:
The Bits# is capable of reporting back the pixels in a line and this can be used to test that a particular LUT
is indeed providing identity values. If you have previously connected a BitsSharp device and used it
with PsychoPy then a file will have been stored with a LUT that has been tested with that device. In this
case set rampType = “configFile” for PsychoPy to use it if such a file is found.
BitsSharp
Control a CRS Bits# device. See typical usage in the class summary (and in the menu demos>hardware>BitsBox of
PsychoPy’s Coder view).
Attributes
BitsSharp([win, portName, mode, . . . ]) A class to support functions of the Bits# (and most Dis-
play++ functions This device uses the CDC (serial port)
connection to the Bits box.
BitsSharp.mode Get/set the mode of the BitsSharp to one of – “bits++”
“mono++” “color++” “status” “storage” “auto”
BitsSharp.isAwake() Test whether we have an active connection on the virtual
serial port
BitsSharp.getInfo() Returns a python dictionary of info about the Bits Sharp
box
BitsSharp.checkConfig([level, demoMode, Checks whether there is a configuration for this device
logFile]) and whether it’s correct :params: level: integer 0: do
nothing 1: check that we have a config file and that the
graphics card and operating system match that specified
in the file.
BitsSharp.gammaCorrectFile Get / set the gamma correction file to be used (as stored
on the device)
BitsSharp.temporalDithering Temporal dithering can be set to True or False
BitsSharp.monitorEDID Get / set the EDID file for the monitor.
BitsSharp.beep([freq, dur]) Make a beep of a given frequency and duration
Continued on next page
BitsSharp.sendMessage(message[, autoLog]) Send a command to the device (does not wait for a reply
or sleep())
BitsSharp.getResponse([length, timeout]) Read the latest response from the serial port
BitsSharp.setContrast(contrast[, LUTrange, Set the contrast of the LUT for ‘bits++’ mode only :Pa-
. . . ]) rameters: contrast : float in the range 0:1 The contrast
for the range being set LUTrange : float or array If a
float is given then this is the fraction of the LUT to be
used.
BitsSharp.setGamma(newGamma) Set the LUT to have the requested gamma value Cur-
rently also resets the LUT to be a linear contrast ramp
spanning its full range.
BitsSharp.setLUT([newLUT, gammaCorrect, SetLUT is only really needed for bits++ mode of bits#
. . . ]) to set the look-up table (256 values with 14bits each).
Details
Note that the firmware in Bits# boxes varies over time and some features of this class may not work for all
firmware versions. Also Bits# boxes can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. In particular it is assumed that all digital inputs, triggers and
analog inputs are reported as part of status updates. If some of these report are disabled in your config.xml file
then ‘status’ and ‘event’ commands in this class may not work.
RTBox commands that reset the key mapping have been found not to work one some firmware.
Parameters win : a PsychoPy Window object, required portName : the (virtual) serial port to which
the device is
connected. If None then PsychoPy will search available serial ports and test commu-
nication (on OSX, the first match of /dev/tty.usbmodemfa* will be used and on linux
/dev/ttyS0 will be used
mode : ‘bits++’, ‘color++’, ‘mono++’, ‘status’ checkConfigLevel : integer
Allows you to specify how much checking of the device is done to ensure a valid identity
look-up table. If you specify one level and it fails then the check will be escalated to the
next level (e.g. if we check level 1 and find that it fails we try to find a new LUT):
• 0 don’t check at all
• 1 check that the graphics driver and OS version haven’t changed since last LUT
calibration
• 2 check that the current LUT calibration still provides identity (requires switch
to status mode)
• 3 search for a new identity look-up table (requires switch to status mode)
RTBoxAddKeys(map)
Add key mappings to an existing map. RTBox events can be mapped to a number of physical events on
Bits# They can be mapped to digital input lines, triggers and CB6 IR input channels. The format for map
is a list of tuples with each tuple containing the name of the RTBox button to be mapped and its source eg
(‘btn1’,’Din1’) maps physical input Din1 to logical button btn1. RTBox has four logical buttons (btn1-4)
and three auxiliary events (light, pulse and trigger) Buttons/events can be mapped to multiple physical
inputs and stay mapped until reset.
Example:
bits.RTBoxSetKeys([('btn1','Din0),('btn2','Din1')])
bits.RTBoxAddKeys([('btn1','IRButtonA'),(('btn2','IRButtonB')])
Will link Din0 to button 1 and Din1 to button 2. Then adds IRButtonA and IRButtonB alongside the
original mappings.
Now both hard wired and IR inputs will - emulating the same logical button press.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
RTBoxCalibrate(N=1)
Used to assess error between host clock and Bits# button press time stamps.
Prints each sample provided and returns the mean error.
The clock willnever be completely in sync but the aim is that there should be that the difference between
them should not grow over a serise of button presses.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
RTBoxClear()
Flushes the serial input buffer. Its good to do this before and after data collection. This just calls flush() so
is a wrapper for RTBox.
RTBoxDisable()
Disables the detection of RTBox events. This is useful to stop the Bits# from reporting key presses When
you no longer need them. Nad must be done before using any other data logging methods.
It undoes any button - input mappings.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
The ability to reset keys mappings has been found not to work on some Bits# firmware.
RTBoxEnable(mode=None, map=None)
Sets up the RTBox with preset or bespoke mappings and enables event detection.
RTBox events can be mapped to a number of physical events on Bits# They can be mapped to digital input
lines, tigers and CB6 IR input channels.
Mode is a list of strings. Preset mappings provided via mode:
CB6 for the CRS CB6 IR response box. IO for a three button box connected to Din0-2 IO6 for a
six button box connected to Din0-5
If mode = None or is not set then the value of self.RTBoxMode is used.
Bespoke Mappings over write preset ones.
The format for map is a list of tuples with each tuple containing the name of the RT Box button to be
mapped and its source eg (‘btn1’,’Din0’) maps physical input Din0 to logical button btn1.
Note the lowest number button event is Btn1
RTBox has four logical buttons (btn1-4) and three auxiliary events (light, pulse and trigger) Buttons/events
can be mapped to multiple physical inputs and stay mapped until reset.
Mode is a list of string or list of strings that contains keywords to determine present mappings and modes
for RTBox.
If mode includes ‘Down’ button events will be detected when pressed. If mode includes ‘Up’ button events
will be detected when released. You can detect both types of event but note that pulse, light and trigger
events don’t have an ‘Up’ mode.
If Trigger is included in mode the trigger event will be mapped to the trigIn connector.
Example
Example
bits.RTBoxEnable(mode = [‘Down’,’CB6’])
enable the RTBox emulation to detect Down events on the standard CB6 IR response box keys.
If no key direction has been set (mode does not contain ‘Up’ or ‘Down’) the default is ‘Down’.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
The ability to reset keys mappings has been found not to work on some Bits# firmware.
RTBoxKeysPressed(N=1)
Check to see if (at least) the appropriate number of RTBox style key presses have been made.
Example
bits.RTBoxKeysPressed(5)
will return false until 5 button presses have been recorded.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
RTBoxResetKeys()
Resets the key mappings to no mapping. Has the effect of disabling RTBox input.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
The ability to reset keys mappings has been found not to work on some Bits# firmware.
RTBoxSetKeys(map)
Set key mappings: first resets existing then adds new ones. Does not reset any event that is not in the new
list. RTBox events can be mapped to a number of physical events on Bits# They can be mapped to digital
input lines, triggers and CB6 IR input channels. The format for map is a list of tuples with each tuple
containing the name of the RTBox button to be mapped and its source eg (‘btn1’,’Din1’) maps physical
input Din1 to logical button btn1.
RTBox has four logical buttons (btn1-4) and three auxiliary events (light, pulse and trigger) Buttons/events
can be mapped to multiple physical inputs and stay mapped until reset.
Example
bits.RTBoxSetKeys([(‘btn1’,’Din0),(‘light’,’Din9’)])
Will link Din0 to button 1 and Din9 to the the light input emulation.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
RTBoxWait()
Waits until (at least) one of RTBox style key presses have been made Pauses program execution in mean
time.
Example
res = bits.RTBoxWait()
will suspend all other activity until 1 button press has been recorded and will then return a dict / strcuture
containing results.
Results can be accessed as follows:
structure res.dir, res.button, res.time
or dictionary res[‘dir’], res[‘button’], res[‘time’]
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
RTBoxWaitN(N=1)
Waits until (at least) the appropriate number of RTBox style key presses have been made Pauses program
execution in mean time.
Example
res = bits.RTBoxWaitN(5)
will suspend all other activity until 5 button presses have been recorded and will then return a list of Dicts
containing the 5 results.
Results can be accessed as follows:
structure res[0].dir, res[0].button, res[0].time
or dictionary res[0][‘dir’], res[0][‘button’], res[0][‘time’]
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
_Goggles()
(private) Used to set control the goggles. Should not be needed by user if attached to a psychopy.
visual.Window()
_RTBoxDecodeResponse(msg, N=1)
Helper function for decoding key presses in the RT response box format.
Not normally needed by user
_ResetClock()
(private) Used to reset Bits hardware clock. Should not be needed by user if attached to a psychopy.
visual.Window() since this will automatically draw the reset code as part of the screen refresh.
_drawLUTtoScreen()
(private) Used to set the LUT in ‘bits++’ mode. Should not be needed by user if attached to a psychopy.
visual.Window() since this will automatically draw the LUT as part of the screen refresh.
_drawTrigtoScreen(sendStr=None)
(private) Used to send a trigger pulse. Should not be needed by user if attached to a psychopy.visual.
Window() since this will automatically draw the trigger code as part of the screen refresh.
_extractStatusEvents()
Interprets values from status log to pullout any events.
Should not be needed by user if start/stopStatusLog or pollStatus are used
Fills statusEvents with a list of dictionary like objects with the following entries source, input, direction,
time.
source = the general source of the event - e.g. DIN for Digital input, IR for IT response box
input = the individual input in the source. direction = ‘up’ or ‘down’ time = time stamp.
Events are recorded relative to the four event flags statusDINBase, initial values for ditgial ins. sta-
tusIRBase, initial values for CB6 IR box. statusTrigInBase, initial values for TrigIn. statusMode,
direction(s) of events to be reported.
The data can be accessed as statusEvents[i][‘time’] or statusEvents[i].time
Also set status._nEvents to the number of events recorded
_getStatusLog()
Read the log Queue
Should not be needed by user if start/stopStatusLog or pollStatus are used.
fills statusValues with a list of dictionary like objects with the following entries: sample, time, trigIn,
DIN[10], DWORD, IR[6], ADC[6]
They can be accessed as statusValues[i][‘sample’] or statusValues[i].sample, statusValues[i].ADC[j]
Also sets status_nValues to the number of values recorded.
_inWaiting()
Helper function to determine how many bytes are waiting on the serial port.
_protectTrigger()
If Goggles (or analog) outputs are used when the digital triggers are off we need to make a set of blank
triggers first. But the user might have set up triggers in waiting for a later time. So this will protect them.
_restoreTrigger()
Restores the triggers to previous settings
_setHeaders(frameRate)
Sets up the TLock header codes and some flags that are common to operating all CRS devices
_setupShaders()
creates and stores the shader programs needed for mono++ and color++ modes
_statusBox()
Should not normally be called by user Called in its own thread via self.statusBoxEnable() Reads the status
reports from the Bits# for default 60 seconds or until self.statusBoxDisable() is called.
Note any non status reports are found on the buffer will cause an error.
args specifies the time over which to record status events. The minimum time is 10ms, less than this results
in recording stopping after about 1 status report has been read.
Puts its results into a Queue.
This function is normally run in its own thread so actions can be asynchronous.
_statusDisable()
Stop Bits# from recording data - and clears the buffer
Not normally needed by user
_statusEnable()
Sets the Bits# to continuously send back its status until stopped. You get a lot a data by leaving this going.
Not normally needed by user
_statusLog(args=60)
Should not normally be called by user Called in its own thread via self.startStatusLog() Reads the status
reports from the Bits# for default 60 seconds or until self.stopStatusLog() is called. Ignores the last line as
this is can be bogus. Note any non status reports are found on the buffer will cause an error.
args specifies the time over which to record status events. The minimum time is 10ms, less than this results
in recording stopping after about 1 status report has been read.
Puts its results into a Queue.
This function is normally run in its own thread so actions can be asynchronous.
beep(freq=800, dur=1)
Make a beep of a given frequency and duration
checkConfig(level=1, demoMode=False, logFile=”)
Checks whether there is a configuration for this device and whether it’s correct :params:
level: integer 0: do nothing 1: check that we have a config file and that the graphics
card and operating system match that specified in the file. Then assume identity LUT is
correct
2: switch the box to status mode and check that the identity LUT is currently working
Example
res = getAllRTBoxResponses()
res[0].dir, res[0].button, res[0].time
or dictionary:
Note even if only 1 key press was found a list of dict / objects is returned
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
getAllStatusBoxResponses()
Read all of the statusBox style key presses on the input buffer. Returns a list of dict like objects with three
members ‘button’, ‘dir’ and ‘time’
‘button’ is a number from 1 to 9 to indicate the event that was detected. 1-17 are the ‘btn1-btn17’ events.
‘dir’ is the direction of the event eg ‘up’ or ‘down’, trigger is described as ‘on’ when low.
‘dir’ is set to ‘time’ if a requested timestamp event has been detected.
‘time’ is the timestamp associated with the event.
Values can be read as a structure eg:
res= getAllStatusBoxResponses()
res[0].dir, res[0].button, res[0].time
or dictionary:
Note even if only 1 key press was found a list of dict / objects is returned.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
getAllStatusEvents()
Returns the whole status event list
Returns a list of dictionary like objects with the following entries source, input, direction, time.
source = the general source of the event - e.g. DIN for Digital input, IR for CB6 IR response box events
input = the individual input in the source. direction = ‘up’ or ‘down’ time = time stamp.
All sourses are numbered from zero. Din 0 . . . 9 IR 0 . . . 5 ADC 0 . . . 5
mode specifies which directions of events are captured. e.g ‘up’ will only report up events.
The data can be accessed as value[i][‘time’] or value[i].time
Example
Example
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
getAnalog(N=0)
Pulls out the values of the analog inputs for the Nth status entry.
Returns a dictionary with a list of 6 floats (ADC) and a time stamp (time).
All sourses are numbered from zero. ADC 0 . . . 5
Example
Example
Example
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
getIRBox(N=0)
Pulls out the values of the CB6 IR response box inputs for the Nth status entry.
Returns a dictionary with a list of 6 ints that are 1 or 0 (IRBox) and a time stamp (time).
ll sourses are numbered from zero. IR 0 . . . 5
Example
Example
info=bits.getInfo print(info[‘ProductType’])
getPackets()
Returns the number of packets available for trigger pulses.
getRTBoxResponse()
checks for one RTBox style key presses on the input buffer then reads it. Returns a dict like object with
three members ‘button’, ‘dir’ and ‘time’
‘button’ is a number from 1 to 9 to indicate the event that was detected. 1-4 are the ‘btn1-btn4’ events,
5 and 6 are the ‘light’ and ‘pulse’ events, 7 is the ‘trigger’ event, 9 is a requested timestamp event (see
Clock()).
‘dir’ is the direction of the event eg ‘up’ or ‘down’, trigger is described as ‘on’ when low.
‘dir’ is set to ‘time’ if a requested timestamp event has been detected.
‘time’ is the timestamp associated with the event.
Value can be read as a structure, eg: res= getRTBoxResponse() res.dir, res.button, res.time
or dictionary res[‘dir’], res[‘button’], res[‘time’]
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
getRTBoxResponses(N=1)
checks for (at least) an appropriate number of RTBox style key presses on the input buffer then reads them.
Returns a list of dict like objects with three members ‘button’, ‘dir’ and ‘time’
‘button’ is a number from 1 to 9 to indicate the event that was detected. 1-4 are the ‘btn1-btn4’ events,
5 and 6 are the ‘light’ and ‘pulse’ events, 7 is the ‘trigger’ event, 9 is a requested timestamp event (see
Clock()).
‘dir’ is the direction of the event eg ‘up’ or ‘down’, trigger is described as ‘on’ when low.
‘dir’ is set to ‘time’ if a requested timestamp event has been detected.
‘time’ is the timestamp associated with the event.
Values can be read as a list of structures eg:
res = getRTBoxResponses(3)
res[0].dir, res[0].button, res[0].time
or dictionaries:
Note even if only 1 key press was requested a list of dict / objects is returned.
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this class
makes certain assumptions about the configuration. Such variations may affect key mappings for RTBox
commands.
getResponse(length=1, timeout=0.1)
Read the latest response from the serial port
Params
length determines whether we expect: 1: a single-line reply (use readline()) 2: a multiline reply (use
readlines() which requires timeout) -1: may not be any EOL character; just read whatever chars are
there
getStatus(N=0)
Pulls out the Nth entry in the statusValues list.
Returns a dict like object with the following entries sample, time, trigIn, DIN[10], DWORD, IR[6],
ADC[6]
sample is the sample ID number. time is the time stamp. trigIn is the value of the trigger input. DIN is
a list of 10 digital input values. DWORD represents the digital inputs as a single decimal value. IR is a
list of 10 infra-red (IR) input values. ADC is a list of 6 analog input values. These can be accessed as
value[‘sample’] or value.sample, values.ADC[j].
All sourses are numbered from zero. Din 0 . . . 9 IR 0 . . . 5 ADC 0 . . . 5
Example
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
getStatusBoxResponse()
checks for one statusBox style key presses on the input buffer then reads it. Returns a dict like object with
three members ‘button’, ‘dir’ and ‘time’
‘button’ is a number from 1 to 9 to indicate the event that was detected. 1-17 are the ‘btn1-btn17’ events.
‘dir’ is the direction of the event eg ‘up’ or ‘down’, trigger is described as ‘on’ when low.
‘dir’ is set to ‘time’ if a requested timestamp event has been detected.
‘time’ is the timestamp associated with the event.
Value can be read as a structure, eg: res= getRTBoxResponse() res.dir, res.button, res.time
or dictionary res[‘dir’], res[‘button’], res[‘time’]
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
getStatusBoxResponses(N=1)
checks for (at least) an appropriate number of RTBox style key presses on the input buffer then reads them.
Returns a list of dict like objects with three members ‘button’, ‘dir’ and ‘time’
‘button’ is a number from 1 to 9 to indicate the event that was detected. 1-4 are the ‘btn1-btn4’ events,
5 and 6 are the ‘light’ and ‘pulse’ events, 7 is the ‘trigger’ event, 9 is a requested timestamp event (see
Clock()).
‘dir’ is the direction of the event eg ‘up’ or ‘down’, trigger is described as ‘on’ when low.
‘dir’ is set to ‘time’ if a requested timestamp event has been detected.
‘time’ is the timestamp associated with the event.
Values can be read as a list of structures eg:
res = getRTBoxResponses(3)
print(res[0].dir, res[0].button, res[0].time)
or dictionaries:
Note even if only 1 key press was requested a list of dict / objects is returned.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
getStatusEvent(N=0)
pulls out the Nth event from the status event list
Returns a dictionary like object with the following entries source, input, direction, time.
source = the general source of the event - e.g. DIN for Digital input, IR for IT response box.
input = the individual input in the source. direction = ‘up’ or ‘down’ time = time stamp.
Example
Example
isAwake()
Test whether we have an active connection on the virtual serial port
isOpen
longName = ''
mode
Get/set the mode of the BitsSharp to one of – “bits++” “mono++” “color++” “status” “storage” “auto”
monitorEDID
Get / set the EDID file for the monitor. The edid files will be located in the EDID subdirectory of the flash
disk. The file automatic.edid will be the file read from the connected monitor.
Example
bits.pollStatus() print(bits.statusValues[0].IR[0])
will display the value of the IR InputA in the first sample recorded.
Note: Starts and stops logging for itself.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these report are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
primeClock()
Primes the clock to reset at the next screen flip - note only 1 clock reset signal will be issued but if the
frame(s) after the reset frame is dropped the reset will be re-issued thus keeping timing good.
Resets continute to be issued on each video frame until the next win.flip so you need to have regular
win.flips for this function to work properly.
Example bits.primeClock() drawImage while not response
#do some processing bits.win.flip()
Will get a clock reset signal ready but wont issue it until the first win.flip in the loop.
read(timeout=0.1)
Get the current waiting characters from the serial port if there are any.
Mostly used internally but may be needed by user. Note the return message depends on what state the
device is in and will need to be decoded. See the Bits# manual but also the other functions herein that do
the decoding for you.
Example
message = bits.read()
reset()
Deprecated: This was used on the old Bits++ to power-cycle the box. It required the compiled dll, which
only worked on Windows and doesn't work with Bits# or Display++.
resetClock()
Issues a clock reset code using one screen flip; if the next frame(s) are dropped the reset will be re-issued,
thus keeping timing good.
Resets continue to be issued on each video frame until the next win.flip, so you need to have regular
win.flips for this function to work properly.
Example
bits.resetClock()
bits.win.flip()
sendAnalog(AOUT1=0, AOUT2=0)
Sends a single analog output pulse, using up one win.flip; actions are always one frame after the request.
Example
bits.sendAnalog(4.5, -2.0)
bits.win.flip()
sendMessage(message, autoLog=True)
Send a command to the device (does not wait for a reply or sleep())
sendTrigger(triggers=0, onTime=0, duration=0, mask=65535)
Sends a single trigger using up 1 win.flip. The trigger will be sent on the following frame.
The triggers will continue until after the next win.flip.
Actions are always 1 frame after the request.
May do odd things if Goggles and Analog are also in use.
Example
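A minimal sketch (values illustrative): request a 4 ms pulse on DOUT1, starting at the beginning of the frame
after the next flip:
bits.sendTrigger(0b0000000010, 0.0, 0.004)
bits.win.flip()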
setAnalog(AOUT1=0, AOUT2=0)
Sets up analog outputs on the Bits#. AOUT1 and AOUT2 are the two analog values required, in volts. Analog
commands are issued at the next win.flip() and actioned one video frame later.
Example
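For instance (assuming a connected Bits# instance named bits):
bits.setAnalog(4.5, -2.2)
bits.win.flip()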
Examples
setContrast(1.0, 0.5) will set the central 50% of the LUT so that a stimulus with contrast=0.5 will
actually be drawn with contrast 1.0.
setContrast(1.0, [0.25, 0.5])
or equivalently:
setContrast(1.0, [63, 127])
will set the lower-middle quarter of the LUT (which might be useful in LUT animation paradigms).
setGamma(newGamma)
Set the LUT to have the requested gamma value Currently also resets the LUT to be a linear contrast ramp
spanning its full range. May change this to read the current LUT, undo previous gamma and then apply
new one?
setLUT(newLUT=None, gammaCorrect=False, LUTrange=1.0, contrast=None)
setLUT is only really needed for the bits++ mode of the Bits#, to set the look-up table (256 values with 14 bits
each). For the BitsPlusPlus device the default is to perform gamma correction here, but on the BitsSharp
it seems better to have the device perform that itself as the last step, so gamma correction is off here by
default. If no contrast has yet been set (it isn't needed for other modes) then it will be set to 1 here.
setRTBoxMode(mode=['CB6', 'Down', 'Trigger'])
Sets the RTBox mode data member - does not actually set the RTBox into this mode.
Example
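A plausible call matching the description below (assuming the bits instance):
bits.setRTBoxMode(['CB6', 'Down'])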
This sets the RTBox mode settings for a CRS CB6 button box, detecting 'Down' events only.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
setStatusBoxThreshold(threshold=None)
Sets the threshold by which analog inputs must change to trigger a button press event. If None, the threshold
will be set very high so that no such events are triggered.
Can be used to change the threshold for analog events without having to re-enable the statusBox system as
a whole.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
setStatusEventParams(DINBase=1023, IRBase=63, TrigInBase=0, ADCBase=0, threshold=9999.99, mode=['up', 'down'])
Sets the parameters used to determine if a status value represents a reportable event.
DIN_base = a 10-bit binary word specifying the expected starting values of the 10 digital input lines.
IR_base = a 6-bit binary word specifying the expected starting values of the 6 CB6 IR buttons.
Trig_base = the starting value of the Trigger input.
mode = a list of event types to monitor; can be 'up' or 'down'. Typically 'down' corresponds to a button
press or to the input being pulled down to zero volts.
Example
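A sketch with illustrative values (parameter names as in the signature above):
bits.setStatusEventParams(DINBase=0b1111111111, IRBase=0b111111, TrigInBase=0, ADCBase=0, threshold=3.4, mode=['down'])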
onTime specifies the start time of the trigger within the frame (in seconds, with 100 µs resolution).
duration specifies how long the trigger will last (in seconds, with 100 µs resolution).
Note that mask only protects the digital output lines set by other activities in the Bits#, not other triggers.
Example
bits.RTBoxSetKeys([('btn1', 'Din0'), ('btn2', 'Din1')])
bits.RTBoxAddKeys([('btn1', 'IRButtonA'), ('btn2', 'IRButtonB')])
Will link Din0 to button 1 and Din1 to button 2. Then adds IRButtonA and IRButtonB alongside the
original mappings.
Now both hard wired and IR inputs will emulate the same logical button press.
To match with the CRS hardware description inputs are labelled as follows.
TrigIn, Din0 . . . Din9, IRButtonA . . . IRButtonF, AnalogIn1 . . . AnalogIn6
Logical buttons are numbered from 1 to 23.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
statusBoxDisable()
Disables the detection of statusBox events. This is useful to stop the Bits# from reporting key presses
when you no longer need them, and must be done before using any other data-logging methods.
It undoes any button-input mappings.
statusBoxEnable(mode=None, map=None, threshold=None)
Sets up the statusBox with preset or bespoke mappings and enables event detection.
statusBox events can be mapped to a number of physical events on the Bits#. They can be mapped to
digital input lines, triggers and CB6 IR input channels.
mode is a list of strings. Preset mappings provided via mode:
• CB6: for the CRS CB6 IR response box, mapped to btn1-6
• IO: for a three-button box connected to Din0-2, mapped to btn1-3
• IO6: for a six-button box connected to Din0-5, mapped to btn1-6
• IO10: for a ten-button box connected to Din0-9, mapped to btn1-10
• Trigger: maps the trigIn to btn17
• Analog: maps the 6 analog inputs on a Bits# to btn18-23
If CB6 and IOx are used together the Dins are mapped from btn7 onwards.
If mode = None or is not set then the value of self.statusBoxMode is used.
Bespoke mappings overwrite preset ones.
The format for map is a list of tuples, with each tuple containing the name of the button to be mapped and
its source, e.g. ('btn1', 'Din0') maps physical input Din0 to logical button btn1.
Note that the lowest-numbered button event is btn1.
statusBox has 23 logical buttons (btn1-23). Buttons/events can be mapped to multiple physical inputs
and stay mapped until reset.
mode is a string or list of strings that contains keywords to determine present mappings and modes for
statusBox.
If mode includes 'Down', button events will be detected when pressed. If mode includes 'Up', button events
will be detected when released. You can detect both types of event, noting that the event detector will look
for transitions and ignore what it sees as the starting state.
To match with the CRS hardware description inputs are labelled as follows.
TrigIn, Din0 . . . Din9, IRButtonA . . . IRButtonF, AnalogIn1 . . . AnalogIn6
Logical buttons are numbered from 1 to 23.
threshold sets the threshold by which analog inputs must change to trigger a button press event. If None,
the threshold will be set very high so that no such events are triggered. Analog inputs must cycle up and
down by threshold to be detected as separate events. So if only 'Up' events are detected, the input must go
up by threshold, then come down again, and then go back up to register two 'up' events.
Example
bits.statusBoxEnable(mode=['Down', 'CB6'])
This enables the statusBox emulation to detect 'Down' events on the standard CB6 IR response box keys.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
statusBoxKeysPressed(N=1)
Check whether (at least) the appropriate number of RTBox-style key presses have been made.
Example
bits.statusBoxKeysPressed(5)
will return False until 5 button presses have been recorded.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
statusBoxResetKeys()
statusBoxSetKeys(map)
Set key mappings: first resets existing mappings, then adds the new ones. Does not reset any event that is not
in the new list. statusBox events can be mapped to a number of physical events on the Bits#. They can be
mapped to digital input lines, triggers and CB6 IR input channels. The format for map is a list of tuples, with
each tuple containing the name of the button to be mapped and its source, e.g. ('btn1', 'Din1') maps
physical input Din1 to logical button btn1.
statusBox has 23 logical buttons (btn1-23). Buttons/events can be mapped to multiple physical inputs and
stay mapped until reset.
Example
bits.statusBoxSetKeys([('btn1', 'Din0'), ('btn2', 'IRButtonA')])
Will link physical Din0 to logical button 1 and IRButtonA to button 2.
To match with the CRS hardware description inputs are labelled as follows.
TrigIn, Din0 . . . Din9, IRButtonA . . . IRButtonF, AnalogIn1 . . . AnalogIn6
Logical buttons are numbered from 1 to 23.
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
statusBoxWait()
Waits until (at least) one RTBox-style key press has been made; pauses program execution in the
meantime.
Example
res = bits.statusBoxWait()
will suspend all other activity until one button press has been recorded, and will then return a dict-like
structure containing the result.
Results can be accessed as follows:
structure: res.dir, res.button, res.time
or dictionary: res['dir'], res['button'], res['time']
Note that the firmware in Bits# units varies over time and some features of this class may not work for all
firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
statusBoxWaitN(N=1)
Waits until (at least) the appropriate number of RTBox-style key presses have been made; pauses program
execution in the meantime.
Example
res = bits.statusBoxWaitN(5)
will suspend all other activity until 5 button presses have been recorded and will then return a list of dict-like
objects containing the 5 results.
Results can be accessed as follows:
structure: res[0].dir, res[0].button, res[0].time
or dictionary: res[0]['dir'], res[0]['button'], res[0]['time']
Note that the firmware in Bits# units varies over time and some features of this class may not work for
all firmware versions. Also Bits# units can be configured in various ways via their config.xml file so this
class makes certain assumptions about the configuration. In particular it is assumed that all digital inputs,
triggers and analog inputs are reported as part of status updates. If some of these reports are disabled in
your config.xml file then ‘status’ and ‘event’ commands in this class may not work.
stop()
[Not currently implemented] Used to stop event collection by the device.
Not really needed as other members now do this.
stopAnalog()
Stops sending analog signals at the next win.flip().
Example:
bits.setAnalog(4.5, -2.2)
bits.startAnalog()
bits.win.flip()
while not response:
    # do some processing
    bits.win.flip()
bits.stopAnalog()
bits.win.flip()
stopGoggles()
Stop the stereo goggles from toggling
Example
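A minimal sketch (assuming the bits instance; the goggle action takes effect on the flip):
bits.stopGoggles()
bits.win.flip()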
ColorCAL
Attributes
Details
calibrateZero()
Perform a calibration to zero light.
For early versions of the ColorCAL this had to be called after connecting to the device. For later versions
the dark calibration was performed at the factory and stored in non-volatile memory.
You can check if you need to run a calibration with:
ColorCAL.getNeedsCalibrateZero()
driverFor = ['colorcal']
getCalibMatrix()
Get the calibration matrix from the device, needed for transforming measurements into real-world values.
This is normally retrieved during __init__ and stored as ColorCal.calibMatrix so most users don’t need to
call this function.
getInfo()
Queries the device for information
Usage:
(ok, serialNumber, firmwareVersion, firmwareBuild) = colorCal.getInfo()
ok will be True/False. Other values will be a string or None.
getLum()
Conducts a measurement and returns the measured luminance
getNeedsCalibrateZero()
Check whether the device needs a dark calibration
In initial versions of CRS ColorCAL mkII the device stored its zero calibration in volatile memory and
needed to be calibrated in darkness each time you connected it to the USB
This function will check whether your device requires that (based on firmware build number and whether
you’ve already done it since python connected to the device).
Returns True or False
longName = 'CRS ColorCAL'
measure()
Conduct a measurement and return the X,Y,Z values
Usage:
ok, X, Y, Z = colorCal.measure()
Following a call to measure, the value ColorCAL.lastLum will also be populated, for compatibility
with other devices used by PsychoPy (notably the PR650/PR655).
readline(size=None, eol='\n\r')
This should be used in place of the standard serial.Serial.readline() because that doesn’t allow us to set the
eol character
sendMessage(message, timeout=0.1)
Send a command to the photometer and wait an allotted timeout for a response.
For examples of usage see the example_simple and example_multi files in the egi GitHub repository.
For an example see the demos menu of the PsychoPy Coder. For further documentation see the pynetstation website.
Idea: Run or debug an experiment script using exactly the same code, i.e., for both testing and online data acquisition.
To debug timing, you can emulate sync pulses and user responses. Limitations: pyglet only; keyboard events only.
class psychopy.hardware.emulator.ResponseEmulator(simResponses=None)
Class to allow simulation of a user’s keyboard responses during a scan.
Given a list of response tuples (time, key), the thread will simulate a user pressing a key at a specific time
(relative to the start of the run).
Author: Jeremy Gray; Idea: Mike MacAskill
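A brief sketch (the response list is illustrative): simulate a participant pressing '1' two seconds into the run
and '2' at 4.5 s:
from psychopy.hardware.emulator import ResponseEmulator
simResponses = [(2.0, '1'), (4.5, '2')]  # (time in s, key) pairs
responder = ResponseEmulator(simResponses)
responder.start()  # the keys appear in the keyboard event buffer at the given times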
_delete()
Remove current thread from the dict of currently running threads.
_set_tstate_lock()
Set a lock object which will be released by the interpreter when the underlying thread state (see pystate.h)
gets deleted.
daemon
A boolean value indicating whether this thread is a daemon thread.
This must be set before start() is called, otherwise RuntimeError is raised. Its initial value is inherited from
the creating thread; the main thread is not a daemon thread and therefore all threads created in the main
thread default to daemon = False.
The entire Python program exits when no alive non-daemon threads are left.
ident
Thread identifier of this thread or None if it has not been started.
This is a nonzero integer. See the thread.get_ident() function. Thread identifiers may be recycled when a
thread exits and another thread is created. The identifier is available even after the thread has exited.
isAlive()
Return whether the thread is alive.
This method returns True just before the run() method starts until just after the run() method terminates.
The module function enumerate() returns a list of all alive threads.
is_alive()
Return whether the thread is alive.
This method returns True just before the run() method starts until just after the run() method terminates.
The module function enumerate() returns a list of all alive threads.
join(timeout=None)
Wait until the thread terminates.
This blocks the calling thread until the thread whose join() method is called terminates – either normally
or through an unhandled exception or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a floating point number specifying a
timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call
isAlive() after join() to decide whether a timeout happened – if the thread is still alive, the join() call timed
out.
When the timeout argument is not present or None, the operation will block until the thread terminates.
A thread can be join()ed many times.
join() raises a RuntimeError if an attempt is made to join the current thread as that would cause a deadlock.
It is also an error to join() a thread before it has been started and attempts to do so raises the same exception.
name
A string used for identification purposes only.
It has no semantics. Multiple threads may be given the same name. The initial name is set by the construc-
tor.
run()
Method representing the thread’s activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed
to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken
from the args and kwargs arguments, respectively.
start()
Start the thread’s activity.
It must be called at most once per thread object. It arranges for the object’s run() method to be invoked in
a separate thread of control.
This method will raise a RuntimeError if called more than once on the same thread object.
class psychopy.hardware.emulator.SyncGenerator(TR=1.0, TA=1.0, volumes=10, sync='5', skip=0, sound=False, **kwargs)
Class for a character-emitting metronome thread (emulate MR sync pulse).
Aim: Allow testing of temporal robustness of fMRI scripts by emulating a hardware sync pulse. Adds an
arbitrary ‘sync’ character to the key buffer, with sub-millisecond precision (less precise if CPU is maxed).
Recommend: TR=1.000 or higher and less than 100% CPU. Shorter TR –> higher CPU load.
Parameters
• TR – seconds between volume acquisitions
• TA – seconds to acquire one volume
• volumes – number of 3D volumes to obtain in a given scanning run
• sync – character used as flag for sync timing, default=‘5’
• skip – how many frames to silently omit initially during T1 stabilization, no sync pulse.
Not needed to test script timing, but will give more accurate feel to start of run. aka “disc-
dacqs”.
• sound – simulate scanner noise
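A brief sketch (values illustrative): emit a '5' at the start of each 2 s TR for 10 volumes. In practice
launchScan() (below) creates and manages this thread for you in Test mode:
from psychopy.hardware.emulator import SyncGenerator
sync = SyncGenerator(TR=2.0, TA=2.0, volumes=10, sync='5')
sync.start()  # places the sync character in the key buffer once per TR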
_delete()
Remove current thread from the dict of currently running threads.
_set_tstate_lock()
Set a lock object which will be released by the interpreter when the underlying thread state (see pystate.h)
gets deleted.
daemon
A boolean value indicating whether this thread is a daemon thread.
This must be set before start() is called, otherwise RuntimeError is raised. Its initial value is inherited from
the creating thread; the main thread is not a daemon thread and therefore all threads created in the main
thread default to daemon = False.
The entire Python program exits when no alive non-daemon threads are left.
ident
Thread identifier of this thread or None if it has not been started.
This is a nonzero integer. See the thread.get_ident() function. Thread identifiers may be recycled when a
thread exits and another thread is created. The identifier is available even after the thread has exited.
isAlive()
Return whether the thread is alive.
This method returns True just before the run() method starts until just after the run() method terminates.
The module function enumerate() returns a list of all alive threads.
is_alive()
Return whether the thread is alive.
This method returns True just before the run() method starts until just after the run() method terminates.
The module function enumerate() returns a list of all alive threads.
join(timeout=None)
Wait until the thread terminates.
This blocks the calling thread until the thread whose join() method is called terminates – either normally
or through an unhandled exception or until the optional timeout occurs.
When the timeout argument is present and not None, it should be a floating point number specifying a
timeout for the operation in seconds (or fractions thereof). As join() always returns None, you must call
isAlive() after join() to decide whether a timeout happened – if the thread is still alive, the join() call timed
out.
When the timeout argument is not present or None, the operation will block until the thread terminates.
A thread can be join()ed many times.
join() raises a RuntimeError if an attempt is made to join the current thread as that would cause a deadlock.
It is also an error to join() a thread before it has been started and attempts to do so raises the same exception.
name
A string used for identification purposes only.
It has no semantics. Multiple threads may be given the same name. The initial name is set by the construc-
tor.
run()
Method representing the thread’s activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed
to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken
from the args and kwargs arguments, respectively.
start()
Start the thread’s activity.
It must be called at most once per thread object. It arranges for the object’s run() method to be invoked in
a separate thread of control.
This method will raise a RuntimeError if called more than once on the same thread object.
psychopy.hardware.emulator.launchScan(win, settings, globalClock=None, simResponses=None,
mode=None, esc_key='escape', instr='select Scan or Test, press enter',
wait_msg='waiting for scanner...', wait_timeout=300, log=True)
Accepts up to four fMRI scan parameters (TR, volumes, sync-key, skip), and launches an experiment in one of
two modes: Scan, or Test.
Usage See Coder Demo -> experiment control -> fMRI_launchScan.py.
In brief: 1) from psychopy.hardware.emulator import launchScan; 2) Define your args; and 3)
add ‘vol = launchScan(args)’ at the top of your experiment script.
launchScan() waits for the first sync pulse and then returns, allowing your experiment script to proceed. The
key feature is that, in test mode, it first starts an autonomous thread that emulates sync pulses (i.e., emulated by
your CPU rather than generated by an MRI machine). The thread places a character in the key buffer, exactly
like a keyboard event does. launchScan will wait for the first such sync pulse (i.e., character in the key buffer).
launchScan returns the number of sync pulses detected so far (i.e., 1), so that a script can account for them
explicitly.
If a globalClock is given (highly recommended), it is reset to 0.0 when the first sync pulse is detected. If a mode
was not specified when calling launchScan, the operator is prompted to select Scan or Test.
If scan mode is selected, the script will wait until the first scan pulse is detected. Typically this would be coming
from the scanner, but note that it could also be a person manually pressing that key.
If test mode is selected, launchScan() starts a separate thread to emit sync pulses / key presses. Note that this
thread is effectively nothing more than a key-pressing metronome, emitting a key at the start of every TR, doing
so with high temporal precision.
If your MR hardware interface does not deliver a key character as a sync flag, you can still use launchScan() to
test script timing. You have to code your experiment to trigger on either a sync character (to test timing) or your
usual sync flag (for actual scanning).
Parameters win: a Window object (required)
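A condensed sketch along the lines of the fMRI_launchScan.py demo (the settings values are illustrative):
from psychopy import visual, core, event
from psychopy.hardware.emulator import launchScan

win = visual.Window(fullscr=False)
MR_settings = {'TR': 2.0, 'volumes': 5, 'sync': '5', 'skip': 0}
globalClock = core.Clock()
vol = launchScan(win, MR_settings, globalClock=globalClock)  # blocks until the first pulse
while vol < MR_settings['volumes']:
    for key in event.getKeys():
        if key == MR_settings['sync']:
            vol += 1  # count each subsequent sync pulse
    win.flip()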
fORP fibre optic (MR-compatible) response devices by CurrentDesigns: http://www.curdes.com/ This class is only
useful when the fORP is connected via the serial port.
If you’re connecting via USB, just treat it like a standard keyboard. E.g., use a Keyboard component, and typically
listen for Allowed keys '1', '2', '3', '4', '5'. Or use event.getKeys().
class psychopy.hardware.forp.ButtonBox(serialPort=1, baudrate=19200)
Serial line interface to the fORP MRI response box.
To use this object class, select the box by setting serialPort, and connect the serial line. To emulate key
presses with a serial connection, use getEvents(asKeys=True) (e.g., to be able to use a RatingScale object during
scanning). Alternatively connect the USB cable and use fORP to emulate a keyboard.
fORP sends characters at 800 Hz, so you should check the buffer frequently. Also note that the trigger event
from the fORP is typically extremely short (occurs for a single 800 Hz epoch).
Parameters
serialPort : should be a number (where 1=COM1, . . . )
baud : the communication rate (baud), eg, 57600
classmethod _decodePress(pressCode)
Returns a list of buttons and whether they’re pressed, given a character code.
pressCode : A number with a bit set for every button currently pressed. Will be between 0 and 31.
_generateEvents(pressCode)
For a given button press, returns a list of buttons that went from unpressed to pressed. Also flags any
unpressed buttons as unpressed.
pressCode : a number with a bit set for every button currently pressed.
clearBuffer()
Empty the input buffer of all characters
clearStatus()
Resets the pressed statuses, so getEvents will return pressed buttons, even if they were already pressed in
the last call.
getEvents(returnRaw=False, asKeys=False, allowRepeats=False)
Returns a list of unique events (one event per button pressed) and also stores a copy of the full list of events
since last getEvents() (stored as ForpBox.rawEvts)
returnRaw : return (not just store) the full event list
asKeys : If True, will also emulate pyglet keyboard events, so that button 1 will register as a keyboard
event with value “1”, and as such will be detectable using event.getKeys()
allowRepeats : If True, this will return pressed buttons even if they were held down between calls to
getEvents(). If the fORP is on the “Eprime” setting, you will get a stream of button presses while a
button is held down. On the “Bitwise” setting, you will get a set of all currently pressed buttons every
time a button is pressed or released. This option might be useful if you think your participant may be
holding the button down before you start checking for presses.
getUniqueEvents(fullEvts=False)
Returns a Python set of the unique (unordered) events from either a given list or the current rawEvts buffer
8.9.6 iolab
This provides a basic ButtonBox class, and imports the ioLab python library.
class psychopy.hardware.iolab.ButtonBox
PsychoPy’s interface to ioLabs.USBBox. Voice key completely untested.
Original author: Jonathan Roberts PsychoPy rewrite: Jeremy Gray, 2013
Class to detect and report ioLab button box.
The ioLabs library needs to be installed. It is included in the Standalone distributions of PsychoPy as of version
1.62.01. Otherwise try “pip install ioLabs”
Usage:
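A minimal sketch:
from psychopy.hardware import iolab
bbox = iolab.ButtonBox()
bbox.resetClock()  # e.g., at trial onset
evts = bbox.getEvents(downOnly=True)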
For examples see the demos menu of the PsychoPy Coder or go to the URL above.
All times are reported in units of seconds.
_getTime(log=False)
Return the time on the bbox internal clock, relative to last reset.
Status: rtcget() not working
log=True will log the bbox time and elapsed CPU (python) time.
clearEvents()
Discard all button / voice key events.
getBaseTime()
Return the time since init (using the CPU clock, not ioLab bbox).
Aim is to provide a similar API as for a Cedrus box. Could let both clocks run for a long time to assess
relative drift.
getEnabled()
Return a list of the buttons that are currently enabled.
getEvents(downOnly=True)
Detect and return a list of all events (likely just one); no block.
Use downOnly=False to include button-release events.
resetClock(log=True)
Reset the clock on the bbox internal clock, e.g., at the start of a trial.
~1ms for me; logging is much faster than the reset
AT THE MOMENT JOYSTICK DOES NOT APPEAR TO WORK UNDER PYGLET. We need someone motivated
and capable to go and get this right (problem is with event polling under pyglet)
Control joysticks and gamepads from within PsychoPy.
You do need a window (and you need to be flipping it) for the joystick to be updated.
Known issues:
• currently under pyglet the joystick axes initialise to a value of zero and stay like this until the first time that
axis moves
• currently pygame (1.9.1) spits out lots of debug messages about the joystick and these can’t be turned off
:-/
Typical usage:
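A sketch of typical usage (the backend assignment and window size are illustrative; the backend must match
the window's winType):
from psychopy import visual
from psychopy.hardware import joystick

joystick.backend = 'pyglet'  # must match the Window winType
win = visual.Window((800, 600), winType=joystick.backend)
nJoys = joystick.getNumJoysticks()
joy = joystick.Joystick(0)  # id must be less than nJoys
for frameN in range(300):
    x, y = joy.getX(), joy.getY()
    win.flip()  # the joystick state is only updated on each flip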
get_right_shoulder()
Get right ‘shoulder’ trigger state.
Returns bool, True if pressed down
get_right_thumbstick()
Get the state of the right joystick button; activated by pressing down on the stick.
Returns bool, True if pressed down
get_right_thumbstick_axis()
Get the axis displacement values of the right thumbstick.
Returns a tuple (X,Y) indicating thumbstick displacement between -1.0 and +1.0. Positive values indicate
the stick is displaced right or up.
Returns tuple, zero centered X, Y values.
get_start()
Get ‘start’ button state (button to the left of the ‘X’ button).
Returns bool, True if pressed down
get_trigger_axis()
Get the axis displacement values of both index triggers.
Returns a tuple (L,R) indicating index trigger displacement between -1.0 and +1.0. Values increase from
-1.0 to 1.0 the further a trigger is pushed.
Returns tuple, zero centered L, R values.
get_x()
Get the ‘X’ button state.
Returns bool, True if pressed down
get_y()
Get the ‘Y’ button state.
Returns bool, True if pressed down
psychopy.hardware.joystick.getNumJoysticks()
Return a count of the number of joysticks available.
class psychopy.hardware.joystick.Joystick(id)
An object to control a multi-axis joystick or gamepad.
Known issues Currently under pyglet backends the axis values initialise to zero rather than reading
the current true value. This gets fixed on the first change to each axis.
getAllAxes()
Get a list of all current axis values.
getAllButtons()
Get the state of all buttons as a list.
getAllHats()
Get the current values of all available hats as a list of tuples.
Each value is a tuple (x, y) where x and y can be -1, 0, +1
getAxis(axisId)
Get the value of an axis by an integer id.
(from 0 to number of axes - 1)
getButton(buttonId)
Get the state of a given button.
buttonId should be a value from 0 to the number of buttons-1
getHat(hatId=0)
Get the position of a particular hat.
The position returned is an (x, y) tuple where x and y can be -1, 0 or +1
getName()
Return the manufacturer-defined name describing the device.
getNumAxes()
Return the number of joystick axes found.
getNumButtons()
Return the number of digital buttons on the device.
getNumHats()
Get the number of hats on this joystick.
The GLFW backend makes no distinction between hats and buttons. Calling ‘getNumHats()’ will return
0.
getX()
Return the X axis value (equivalent to joystick.getAxis(0)).
getY()
Return the Y axis value (equivalent to joystick.getAxis(1)).
getZ()
Return the Z axis value (equivalent to joystick.getAxis(2)).
PsychoPy provides an interface to the labjack U3 class with a couple of minor additions.
This is accessible by:
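Presumably via the labjacks submodule, i.e.:
from psychopy.hardware.labjacks import U3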
Except for the additional setdata function, the U3 class operates exactly like the one in the U3 library that labjack
provides, documented here:
http://labjack.com/support/labjackpython
Note: To use labjack devices you do also need to install the driver software described on the page above
8.9.9 Minolta
maxAttempts: int. If the device doesn't respond the first time, how many attempts should be made?
If you're certain that this is the correct port and the device is on and correctly configured,
then this could be set high. If not, set it low.
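A minimal connection sketch (the class and port names are assumptions based on the error messages below,
which refer to the Minolta LS100/110):
from psychopy.hardware import minolta
phot = minolta.LS100('COM1')  # port name is illustrative
if phot.OK:  # the connection succeeded
    print(phot.getLum())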
Troubleshooting Various messages are printed to the log regarding the function of this device, but
to see them you need to set the printing of the log to the correct level:
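For example, using the standard psychopy.logging module:
from psychopy import logging
logging.console.setLevel(logging.ERROR)  # show errors only
logging.console.setLevel(logging.DEBUG)  # or log all communications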
If you’re using a keyspan adapter (at least on macOS) be aware that it needs a driver installed.
Otherwise no ports will be found.
Error messages:
ERROR: Couldn't connect to Minolta LS100/110 on ____: This likely
means that the device is not connected to that port (although the port has been found and
opened). Check that the device has the [ in the bottom right of the display; if not, turn it off
and on again while holding the F key.
ERROR: No reply from LS100: The port was found, the connection was made and an
initial command worked, but then the device stopped communicating. If the first measurement
taken with the device after connecting does not yield a reasonable intensity, the device can
sulk (not a technical term!). The "[" on the display will disappear and you can no longer
communicate with the device. Turn it off and on again (with F depressed) and use a
reasonably bright screen for your first measurement. Subsequent measurements can be dark (or
we really would be in trouble!!).
checkOK(msg)
Check that the message from the photometer is OK. If there’s an error show it (printed).
Then return True (OK) or False.
clearMemory()
Clear the memory of the device from previous measurements
getLum()
Makes a measurement and returns the luminance value
measure()
Measure the current luminance and set .lastLum to this value
sendMessage(message, timeout=5.0)
Send a command to the photometer and wait an allotted timeout for a response.
setMaxAttempts(maxAttempts)
Changes the number of attempts to send a message and read the output. Typically this should be low
initially, if you aren't sure that the device is set up correctly, but then, after the first successful reading, set
it higher.
setMode(mode=’04’)
Set the mode for measurements. Returns True (success) or False.
'04' means absolute measurements; '08' = peak; '09' = continuous.
See the user manual for other modes.
8.9.10 PhotoResearch
Supported devices:
• PR650
• PR655/PR670
PhotoResearch spectrophotometers See http://www.photoresearch.com/
NB psychopy.hardware.findPhotometer() will locate and return any supported device for you so
you can also do:
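For instance (a sketch; assumes a supported photometer is attached and powered on):
from psychopy import hardware
photom = hardware.findPhotometer()
print(photom.getLum())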
Troubleshooting Various messages are printed to the log regarding the function of this device, but
to see them you need to set the printing of the log to the correct level:
If you’re using a keyspan adapter (at least on macOS) be aware that it needs a driver installed.
Otherwise no ports will be found.
Also note that the attempt to connect to the PR650 must occur within the first few seconds after
turning it on.
getLastLum()
This retrieves the luminance (in cd/m**2) from the last call to .measure()
getLastSpectrum(parse=True)
This retrieves the spectrum from the last call to .measure()
If parse=True (default): The format is a num array with 100 rows [nm, power]
otherwise: The output will be the raw string from the PR650 and should then be passed to
.parseSpectrumOutput(). It's more efficient to parse R,G,B strings at once than each individually.
getLum()
Makes a measurement and returns the luminance value
getSpectrum(parse=True)
Makes a measurement and returns the current power spectrum
If parse=True (default): The format is a num array with 100 rows [nm, power]
If parse=False: The output will be the raw string from the PR650 and should then be passed
to .parseSpectrumOutput(). It's slightly more efficient to parse R,G,B strings at once than
each individually.
measure(timeOut=30.0)
Make a measurement with the device. For a PR650 the device is instructed to make a measurement and
then subsequent commands are issued to retrieve info about that measurement.
parseSpectrumOutput(rawStr)
Parses the strings from the PR650 as received after sending the command 'd5'. The input argument rawStr
can be the output from a single phosphor spectrum measurement or a list of 3 such measurements
[rawR, rawG, rawB].
sendMessage(message, timeout=0.5, DEBUG=False)
Send a command to the photometer and wait an allotted timeout for a response (Timeout should be long
for low light measurements)
class psychopy.hardware.pr.PR655(port)
An interface to the PR655/PR670 via the serial port.
example usage:
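A sketch (the port name is illustrative):
from psychopy.hardware.pr import PR655
myPR655 = PR655('COM4')
myPR655.measure()
print(myPR655.lastLum)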
NB psychopy.hardware.findPhotometer() will locate and return any supported device for you so
you can also do:
Troubleshooting If the device isn’t responding try turning it off and turning it on again, and/or
disconnecting/reconnecting the USB cable. It may be that the port has become controlled by
some other program.
endRemoteMode()
Puts the colorimeter back into normal mode
getDeviceSN()
Return the device serial number
getDeviceType()
Return the device type (e.g. ‘PR-655’ or ‘PR-670’)
getLastColorTemp()
Fetches (from the device) the color temperature (K) of the last measurement
Returns list: status, units, exponent, correlated color temp (Kelvins), CIE 1960 deviation
See also measure() automatically populates pr655.lastColorTemp with the color temp in
Kelvins
getLastSpectrum(parse=True)
This retrieves the spectrum from the last call to measure()
If parse=True (default):
The format is a num array with 100 rows [nm, power]
otherwise:
The output will be the raw string from the PR650 and should then be passed to
parseSpectrumOutput(). It’s more efficient to parse R,G,B strings at once than each indi-
vidually.
getLastTristim()
Fetches (from the device) the last CIE 1931 Tristimulus values
Returns list: status, units, Tristimulus Values
See also measure() automatically populates pr655.lastTristim with just the tristimulus coor-
dinates
getLastUV()
Fetches (from the device) the last CIE 1976 u,v coords
Returns list: status, units, Photometric brightness, u, v
See also measure() automatically populates pr655.lastUV with [u,v]
getLastXY()
Fetches (from the device) the last CIE 1931 x,y coords
Returns list: status, units, Photometric brightness, x,y
See also measure() automatically populates pr655.lastXY with [x,y]
measure(timeOut=30.0)
Make a measurement with the device.
This automatically populates:
• .lastLum
• .lastSpectrum
• .lastCIExy
• .lastCIEuv
parseSpectrumOutput(rawStr)
Parses the strings from the PR650 as received after sending the command ‘D5’. The input argument
“rawStr” can be the output from a single phosphor spectrum measurement or a list of 3 such measurements
[rawR, rawG, rawB].
startRemoteMode()
Sets the Colorimeter into remote mode
For now the SR Research pylink module is packaged with the Standalone flavours of PsychoPy and can be imported
with:
import pylink
You do need to install the Display Software (which they also call Eyelink Developers Kit) for your particular platform.
This can be found by following the threads from:
https://www.sr-support.com/forums/forumdisplay.php?f=17
for pylink documentation see:
https://www.sr-support.com/forums/showthread.php?t=14
Performing research with eye-tracking equipment typically requires a long-term investment in software tools to collect,
process, and analyze data. Much of this involves real-time data collection, saccadic analysis, calibration routines, and
so on. The EyeLink® eye-tracking system is designed to implement most of the required software base for data
collection and conversion. It is most powerful when used with the Ethernet link interface, which allows remote control
of data collection and real-time data transfer. The PyLink toolkit includes the Pylink module, which implements
all core EyeLink functions and classes for the EyeLink connection and the EyeLink graphics, such as the display
of the camera image, calibration, validation, and drift correction. The EyeLink graphics are currently implemented
using Simple DirectMedia Layer (SDL: www.libsdl.org).
The Pylink library contains a set of classes and functions, which are used to program experiments on many different
platforms, such as MS-DOS, Windows, Linux, and the Macintosh. Some programming standards, such as placement
of messages in the EDF file by your experiment, and the use of special data types, have been implemented to allow
portability of the development kit across platforms. The standard messages allow general analysis tools such as
EDF2ASC converter or EyeLink Data Viewer to process your EDF files.
pylink.alert(message)
This method is used to give a notification to the user when an error occurs. Parameters: <message>: text
message to be displayed. Return Value: None. Remarks: This function does not allow printf formatting as in C;
however you can pass a formatted string argument in Python. This is equivalent to the C API void
alert_printf(char *fmt, . . . );
pylink.beginRealTimeMode(delay)
Sets the application priority and cleans up pending Windows activity to place the application in realtime mode.
This could take up to 100 milliseconds, depending on the operating system, to set the application priority.
Parameters <delay> an integer, used to set the minimum time this function takes, so that this function can act as
a useful delay. Return Value None This function is equivalent to the C API void begin_realtime_mode(UINT32
delay);
pylink.bitmapSave(iwidth, iheight, pixels, xs, ys, width, height, fname, path, sv_options)
iwidth - original image width
iheight - original image height
pixels - pixels of the image in one of two possible formats:
pixel=[line1, line2, ..., linen], line=[pix1, pix2, ..., pixn], pix=(r,g,b)
pixel=[line1, line2, ..., linen], line=[pix1, pix2, ..., pixn], pix=0xAARRGGBB
xs - crop x position
ys - crop y position
width - crop width
height - crop height
fname - file name to save
path - path to save
sv_options - save options (SV_NOREPLACE, SV_MAKEPATH)
pylink.closeGraphics()
Notifies the eyelink_core_graphics to close or release the graphics. Parameters None Return Value None This is
equivalent to the C API void close_expt_graphics(void); This function should not be used with custom graphics
pylink.closeMessageFile()
DOC UNDONE
pylink.currentTime()
Returns the current millisecond time since the initialization of the EyeLink library. Parameters None. Return
Value Long integer for the current millisecond time since the initialization of the EyeLink library This function
is equivalent to the C API UINT32 current_time(void);
pylink.currentUsec()
Returns the current microsecond time since the initialization of the EyeLink library. Parameters None. Return
Value Long integer for the current microsecond time since the initialization of the EyeLink library This is
equivalent to the C API UINT32 current_usec(void);
pylink.enableExtendedRealtime()
DOC UNDONE
pylink.enablePCRSample()
If enabled, the raw data can be obtained
pylink.endRealTimeMode()
Returns the application to a priority slightly above normal, to end realtime mode. This function should ex-
ecute rapidly, but there is the possibility that Windows will allow other tasks to run after this call, causing
delays of 1-20 milliseconds. Parameters None Return Value None This function is equivalent to the C API void
end_realtime_mode(void);
pylink.flushGetkeyQueue()
Initializes the key queue used by getkey(). It may be called at any time to get rid of any old keys from the queue.
Parameters None Return Value None This is equivalent to the C API void flush_getkey_queue(void);
pylink.getDisplayInformation()
Returns the display configuration. Parameters: None. Return Value: Instance of the DisplayInfo class. The width,
height, bits, and refresh rate of the display can be accessed from the returned value. For example:
display = getDisplayInformation()
print(display.width, display.height, display.bits, display.refresh)
pylink.getLastError()
Get the error number returned by the last call to the corresponding C API function.
pylink.inRealTimeMode()
DOC UNDONE
pylink.msecDelay(delay)
Does an unblocked delay using currentTime(). Parameters: <delay>: an integer for the number of milliseconds
to delay. Return Value: None. This is equivalent to the C API void msec_delay(UINT32 n);
pylink.openCustomGraphicsInternal()
DOC UNDONE
pylink.openGraphics()
openGraphics(dimension, bits); Opens the graphics if the display mode is not set. If the display mode is already
set, uses the existing display mode. Parameters <dimension>: two-item tuple of display containing width and
height information. <bits>: color bits. Return Value None or run-time error. This is equivalent to the SDL
version C API INT16 init_expt_graphics(SDL_Surface * s, DISPLAYINFO *info).
pylink.openMessageFile()
DOC UNDONE
pylink.pumpDelay(delay)
During calls to msecDelay(), Windows is not able to handle messages. One result of this is that windows may
not appear. This is the preferred delay function when accurate timing is not needed. It calls pumpMessages()
until the last 20 milliseconds of the delay, allowing Windows to function properly. In rare cases, the delay may
be longer than expected. It does not process modeless dialog box messages. Parameters: <delay>: an integer,
which sets the number of milliseconds to delay. Return Value: None. This is equivalent to the C API void
pump_delay(UINT32 delay);
pylink.resetBackground()
DOC UNDONE
pylink.sendMessageToFile()
DOC UNDONE
pylink.setCalibrationColors(foreground_color, background_color)
Passes the colors of the display background and fixation target to the eyelink_core_graphics library. During
calibration, camera image display, and drift correction, the display background should match the brightness of
the experimental stimuli as closely as possible, in order to maximize tracking accuracy. This function passes
the colors of the display background and fixation target to the eyelink_core_graphics library. This also prevents
flickering of the display at the beginning and end of drift correction. Parameters: <foreground_color>: color
for the foreground calibration target. <background_color>: color for the calibration background. Both
colors must be a three-integer (from 0 to 255) tuple encoding the red, green, and blue color components. Return
Value: None. This is equivalent to the C API void set_calibration_colors(SDL_Color *fg, SDL_Color *bg);
Example: setCalibrationColors((0, 0, 0), (255, 255, 255)) This sets the calibration target in black and calibration
background in white.
pylink.setCalibrationSounds(target, good, error)
Selects the sounds to be played during do_tracker_setup(), including calibration, validation and drift correction.
These events are the display or movement of the target, successful conclusion of calibration or good validation,
and failure or interruption of calibration or validation. Note: If no sound card is installed, the sounds are
produced as beeps from the PC speaker. Otherwise, sounds can be selected by passing a string. If the string is
"" (empty), the default sounds are played. If the string is "off", no sound will be played for that event. Otherwise, the
string should be the name of a .WAV file to play. Parameters <target>: Sets sound to play when target moves;
<good>: Sets sound to play on successful operation; <error>: Sets sound to play on failure or interruption.
Return Value None This function is equivalent to the C API void set_cal_sounds(char *target, char *good, char
*error);
pylink.setCameraPosition(left, top, right, bottom)
Sets the camera position on the display computer. Moves the top left hand corner of the camera position to
new location. Parameters <left>: x-coord of upper-left corner of the camera image window; <top>: y-coord
of upper-left corner of the camera image window; <right>: x-coord of lower-right corner of the camera image
window; <bottom>: y-coord of lower-right corner of the camera image window. Return Value None
Please specify the name of the pump configuration to use in the PsychoPy preferences under Hardware / Qmix
pump configuration. See the readme file of the pyqmix project for details on how to set up your computer and
create the configuration file.
psychopy.hardware.findPhotometer(ports=None, device=None)
Try to find a connected photometer/photospectrometer!
PsychoPy will sweep a series of serial ports trying to open them. If a port successfully opens then it will try
to issue a command to the device. If it responds with one of the expected values then it is assumed to be the
appropriate device.
Parameters
ports [a list of ports to search] Each port can be a string (e.g. 'COM1', '/dev/tty.Keyspan1.1')
or a number (for win32 comports only). If none are provided then PsychoPy will sweep
COM0-10 on win32 and search known likely port names on macOS and Linux.
device [string giving the expected device, e.g. 'PR650', 'PR655', 'LS100', 'LS110'] If this is not
given then an attempt will be made to find a device of any type, but this often fails.
Returns
• An object representing the first photometer found
• None if the ports didn’t yield a valid response
• None if there were not even any valid ports (suggesting a driver not being installed)
e.g.:
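A sketch (the device argument is optional, as described above):
photom = findPhotometer(device='PR650')  # from psychopy.hardware
if photom is not None:
    print(photom.getLum())
    if hasattr(photom, 'getSpectrum'):  # only spectroradiometers have this
        print(photom.getSpectrum())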
This module has tools for fetching data about the system or the current Python process. Such info can be useful for
understanding the context in which an experiment was run.
class psychopy.info.RunTimeInfo(author=None, version=None, win=None, refreshTest='grating', userProcsDetailed=False, verbose=False)
Returns a snapshot of your configuration at run-time, for immediate or archival use.
Returns a dict-like object with info about PsychoPy, your experiment script, the system & OS, your window and
monitor settings (if any), python & packages, and openGL.
If you want to skip testing the refresh rate, use ‘refreshTest=None’
Example usage: see runtimeInfo.py in coder demos.
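A compact sketch (window settings illustrative):
from psychopy import visual, info
win = visual.Window((400, 300))
runInfo = info.RunTimeInfo(win=win, refreshTest='grating', verbose=True)
print(runInfo)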
Author
• 2010 written by Jeremy Gray, input from Jon Peirce and Alex Holcombe
Parameters
win [None, False, Window instance] what window to use for refresh rate testing (if any) and
settings. None -> temporary window using defaults; False -> no window created, used, nor
profiled; a Window() instance you have already created
author [None, string] None = try to autodetect first __author__ in sys.argv[0]; string = user-
supplied author info (of an experiment)
version [None, string] None = try to autodetect first __version__ in sys.argv[0]; string = user-
supplied version info (of an experiment)
verbose : False, True; how much detail to assess
refreshTest [None, False, True, ‘grating’] True or ‘grating’ = assess refresh average, median,
and SD of 60 win.flip()s, using visual.getMsPerFrame() ‘grating’ = show a visual during the
assessment; True = assess without a visual
userProcsDetailed: False, True get details about concurrent user’s processes (command,
process-ID)
Returns a flat dict (but with several groups based on key names):
psychopy [version, rush() availability] psychopyVersion, psychopyHaveExtRush, git branch
and current commit hash if available
experiment [author, version, directory, name, current time-stamp,] SHA1 digest, VCS info (if
any, svn or hg only), experimentAuthor, experimentVersion, . . .
system [hostname, platform, user login, count of users,] user process info (count, cmd + pid),
flagged processes systemHostname, systemPlatform, . . .
window [(see output; many details about the refresh rate, window,] and monitor; units are
noted) windowWinType, windowWaitBlanking, . . . windowRefreshTimeSD_ms, . . . win-
dowMonitor.<details>, . . .
python [version of python, versions of key packages] (wx, numpy, scipy, matplotlib, pyglet,
pygame) pythonVersion, pythonScipyVersion, . . .
openGL [version, vendor, rendering engine, plus info on whether] several extensions are
present openGLVersion, . . . , openGLextGL_EXT_framebuffer_object, . . .
_setCurrentProcessInfo(verbose=False, userProcsDetailed=False)
What other processes are currently active for this user?
_setExperimentInfo(author, version, verbose)
Auto-detect __author__ and __version__ in sys.argv[0] (= the user's script)
_setPythonInfo()
External python packages, python details
_setSystemInfo()
System info
_setWindowInfo(win, verbose=False, refreshTest=’grating’, usingTempWin=True)
Find and store info about the window: refresh rate, configuration info.
psychopy.info._getHgVersion(filename)
Tries to discover the mercurial (hg) parent and id of a file.
Not thoroughly tested; untested on Windows Vista, Win 7, FreeBSD
Author
• 2010 written by Jeremy Gray
psychopy.info._getSha1hexDigest(thing, isfile=False)
Returns base64 / hex encoded sha1 digest of str(thing), or of a file's contents. Returns None if a file is requested
but no such file exists
Author
• 2010 Jeremy Gray; updated 2011 to be more explicit,
• 2012 to remove sha.new()
>>> _getSha1hexDigest('1')
'356a192b7913b04c54574d18c28d46e6395428ab'
>>> _getSha1hexDigest(1)
'356a192b7913b04c54574d18c28d46e6395428ab'
psychopy.info._getSvnVersion(filename)
Tries to discover the svn version (revision #) for a file.
Not thoroughly tested; untested on Windows Vista, Win 7, FreeBSD
Author
• 2010 written by Jeremy Gray
psychopy.info._getUserNameUID()
Return user name, UID.
UID values can be used to infer admin-level: -1=undefined, 0=full admin/root, >499=assume non-admin/root
(>999 on debian-based)
Author
• 2010 written by Jeremy Gray
psychopy.info.getMemoryUsage()
Get the memory (RAM) currently used by this Python process, in MB.
psychopy.info.getRAM()
Return the system's physical RAM and available RAM, in MB.
ioHub monitors for device events in parallel with the PsychoPy experiment execution by running in a separate process
from the main PsychoPy script. This means, for instance, that keyboard and mouse event timing is not quantized by
the rate at which the window.flip() method is called.
ioHub reports device events to the PsychoPy experiment runtime as they occur. Optionally, events can be saved to an
HDF5 file.
All iohub events are timestamped using the PsychoPy global time base (psychopy.core.getTime()). Events can be
accessed as a device independent event stream, or from a specific device of interest.
A comprehensive set of examples that each use at least one of the iohub devices is available in the psy-
chopy/demos/coder/iohub folder.
Note: This documentation is in very early stages of being written. Comments and contributions are welcome.
To use ioHub within your PsychoPy Coder experiment script, ioHub needs to be started at the start of the experiment
script. The easiest way to do this is by calling the launchHubServer function.
launchHubServer function
psychopy.iohub.client.launchHubServer(**kwargs)
Starts the ioHub Server subprocess and returns a psychopy.iohub.client.ioHubConnection object
that is used to access enabled iohub devices' events, get events, and control the ioHub process during the
experiment.
By default (no kwargs specified), the ioHub server does not create an ioHub HDF5 file; events are available to
the experiment program at runtime. The following devices are enabled by default:
• Keyboard: named ‘keyboard’, with runtime event reporting enabled.
• Mouse: named ‘mouse’, with runtime event reporting enabled.
• Monitor: named ‘monitor’.
• Experiment: named ‘experiment’.
To customize how the ioHub Server is initialized when started, pass one or more keyword arguments when calling
the function (a sketch follows the example below):
Examples
# Start the ioHub process. 'io' can now be used during the
# experiment to access iohub devices and read iohub device events.
io = launchHubServer()
Please see the psychopy/demos/coder/iohub/launchHub.py demo for examples of different ways to use the
launchHubServer function.
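As a sketch of such customization, assuming the experiment_code and session_code keyword arguments described
in the launchHubServer documentation (when both are given, ioHub creates an HDF5 data file and saves device
events to it):

from psychopy.iohub.client import launchHubServer

# Start ioHub and save device events to an ioHub HDF5 file for this session.
io = launchHubServer(experiment_code='my_exp', session_code='S01')
keyboard = io.devices.keyboard
# ... run the experiment ...
io.quit()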
ioHubConnection Class
The psychopy.iohub.ioHubConnection object returned from the launchHubServer function provides methods for
controlling the iohub process and accessing iohub devices and events.
class psychopy.iohub.client.ioHubConnection(object)
ioHubConnection is responsible for creating, sending requests to, and reading replies from the ioHub Process.
This class is also used to shut down and disconnect the ioHub Server process.
The ioHubConnection class is also used as the interface to any ioHub Device instances that have been created
so that events from the device can be monitored. These device objects can be accessed via the ioHubConnection
.devices attribute, providing ‘dot name’ access to enabled devices. Alternatively, the .getDevice(name) method
can be used and will return None if the device name specified does not exist.
Using the .devices attribute is handy if you know the name of the device to be accessed and you are sure it is
actually enabled on the ioHub Process.
An example of accessing a device using the .devices attribute:
keyboard = io.devices.keyboard
kb_events = keyboard.getEvents()
getDevice(deviceName)
Returns the ioHubDeviceView that has a matching name (based on the name property specified for the device
in the iohub_config.yaml for the experiment). If no device with the given name is found, None is returned.
Example, accessing a Keyboard device that was named 'kb':
keyboard = self.getDevice('kb')
kb_events = keyboard.getEvents()
This is the same as using the 'natural naming' approach supported by the .devices attribute, i.e.:
keyboard = self.devices.kb
kb_events = keyboard.getEvents()
However, the advantage of using getDevice(device_name) is that an exception is not raised if you provide
an invalid device name, or if the device is not enabled on the ioHub server; None is returned instead.
Parameters deviceName (str) – Name given to the ioHub Device to be returned
Returns The ioHubDeviceView instance for deviceName.
getEvents(device_label=None, as_type='namedtuple')
Retrieve any events that have been collected by the ioHub Process from monitored devices since the last
call to getEvents() or clearEvents().
By default all events for all monitored devices are returned, with each event being represented as a
namedtuple of all event attributes.
When events are retrieved from an event buffer, they are removed from that buffer as well.
If events are only needed from one device instead of all devices, providing a valid device name as the
device_label argument will result in only events from that device being returned.
Events can be received as one of several object types by providing the optional as_type argument to the
method. Valid values for as_type are the following str values:
• ‘list’: Each event is a list of ordered attributes.
• ‘namedtuple’: Each event is converted to a namedtuple object.
• ‘dict’: Each event converted to a dict object.
• ‘object’: Each event is converted to a DeviceEvent subclass based on the event’s type.
Parameters
• device_label (str) – Name of device to retrieve events for. If None (the default),
device events from all devices are returned.
• as_type (str) – Returned event object type. Default: ‘namedtuple’.
Returns List of event objects; object type controlled by ‘as_type’.
Return type tuple
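For example, a minimal sketch (assuming io is an ioHubConnection with the default keyboard device enabled, and
that the dict form exposes each event's time and key attributes as keys):

# All new events from every monitored device, as namedtuples.
all_events = io.getEvents()

# Only keyboard events, each converted to a dict.
kb_events = io.getEvents(device_label='keyboard', as_type='dict')
for evt in kb_events:
    print(evt['time'], evt['key'])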
clearEvents(device_label='all')
Clears unread events from the ioHub Server's Event Buffer(s) so that unneeded events are not returned by
future calls to getEvents().
If device_label is 'all' (the default), then events from both the ioHub Global Event Buffer and all Device
Event Buffers are cleared.
If device_label is None then all events in the ioHub Global Event Buffer are cleared, but the Device Event
Buffers are unaffected.
If device_label is a str giving a valid device name, then that Device Event Buffer is cleared, but the Global
Event Buffer is not affected.
Parameters device_label (str) – device name, ‘all’, or None
Returns None
sendMessageEvent(text, category='', offset=0.0, sec_time=None)
Create and send an Experiment MessageEvent to the ioHub Server for storage in the ioDataStore hdf5 file.
Note: MessageEvents can be thought of as DeviceEvents from the virtual PsychoPy Process “Device”.
Parameters
• text (str) – The text message for the message event. 128 char max.
• category (str) – A str grouping code for the message. Optional. 32 char max.
• offset (float) – Optional sec.msec offset applied to the message event time stamp.
Default 0.
• sec_time (float) – Absolute sec.msec time stamp for the message. If not provided,
or None, then the MessageEvent is time stamped when this method is called using the
global timer (core.getTime()).
Returns True
Return type bool
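For example, a minimal sketch marking a stimulus onset (stimulus drawing code omitted; assumes io is the
ioHubConnection):

from psychopy import core

# Let ioHub timestamp the message when it is sent...
io.sendMessageEvent(text='TRIAL_START', category='exp')

# ...or supply an explicit timestamp for the event.
onset = core.getTime()
io.sendMessageEvent(text='STIM_ONSET', category='exp', sec_time=onset)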
createTrialHandlerRecordTable(trials, cv_order=None)
Create a condition variable table in the ioHub data file based on a psychopy TrialHandler. By doing
so, the iohub data file can contain the DV and IV values used for each trial of an experiment session, along
with all the iohub device events recorded by iohub during the session.
Example psychopy code usage:
exp_conditions = importConditions('trial_conditions.xlsx')
trials = TrialHandler(exp_conditions, 1)
# Inform the ioDataStore that the experiment is using a TrialHandler.
io.createTrialHandlerRecordTable(trials)
addTrialHandlerRecord(cv_row)
Adds the values from a TrialHandler row / record to the iohub data file for future data analysis use.
Parameters cv_row –
Returns None
getTime()
Deprecated Method: Use Computer.getTime instead. Remains here for testing time bases between
processes only.
setPriority(level=’normal’, disable_gc=False)
See Computer.setPriority documentation, where current process will be the iohub process.
getPriority()
See Computer.getPriority documentation, where current process will be the iohub process.
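A brief sketch (assuming 'high' and 'normal' are among the level values accepted by Computer.setPriority):

# Raise the ioHub process priority during time-critical recording,
# then restore the default priority afterwards.
io.setPriority('high')
print('ioHub priority:', io.getPriority())
io.setPriority('normal')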
getProcessAffinity()
Returns the current ioHub Process affinity setting, as a list of 'processor' ids (from 0 to
getSystemProcessorCount()-1). A process's affinity determines which CPUs or CPU cores a process
can run on. By default the ioHub Process can run on any CPU or CPU core.
This method is not supported on OS X at this time.
Parameters None –
Returns
A list of integer values between 0 and Computer.getSystemProcessorCount()-1, where
values in the list indicate processing unit indexes that the ioHub process is able to run on.
Return type list
setProcessAffinity(processor_list)
Sets the ioHub Process Affinity based on the value of processor_list.
A process's affinity determines which CPUs or CPU cores a process can run on. By default the ioHub
Process can run on any CPU or CPU core.
The processor_list argument must be a list of ‘processor’ id’s; integers in the range of 0 to
Computer.processing_unit_count-1, representing the processing unit indexes that the ioHub Server should
be allowed to run on.
If processor_list is given as an empty list, the ioHub Process will be able to run on any processing unit on
the computer.
This method is not supported on OS X at this time.
Parameters processor_list (list) – A list of integer values between 0 and
Computer.processing_unit_count-1, where values in the list indicate processing unit indexes
that the ioHub process is able to run on.
Returns None
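For example (a sketch assuming a machine with at least four logical processors; not supported on macOS):

# Pin the ioHub process to processing units 2 and 3, leaving the
# remaining units free for the PsychoPy process.
io.setProcessAffinity([2, 3])
print(io.getProcessAffinity())  # expected: [2, 3]

# An empty list allows the ioHub process to run on any processing unit.
io.setProcessAffinity([])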
flushDataStoreFile()
Manually tell the ioDataStore to flush any events it has buffered in memory to disk.
Parameters None –
Returns None
startCustomTasklet(task_name, task_class_path, **class_kwargs)
Instruct the iohub server to start running a custom tasklet given by task_class_path. It is important that the
custom task does not block for any significant amount of time, or the processing of events by the iohub
server will be negatively affected.
See the customtask.py demo for an example of how to make a long running task not block the rest of the
iohub server.
stopCustomTasklet(task_name)
Instruct the iohub server to stop the custom task that was previously started by calling
startCustomTasklet(...). task_name identifies which custom task should be stopped and must match
the task_name of a previously started custom task.
shutdown()
Tells the ioHub Server to close all ioHub Devices, the ioDataStore, and the connection monitor between
the PsychoPy and ioHub Processes, and then ends the server process itself.
Parameters None –
Returns None
quit()
Same as the shutdown() method, but has the same name as PsychoPy's core.quit(), so it may be easier to
remember.
psychopy.iohub supports several different types of devices, including Keyboards, Mice, and Eye Trackers.
Details for each device can be found in the following sections.
Keyboard Device
Examples
from psychopy.iohub.client import launchHubServer
from psychopy.core import getTime

# Start the ioHub process. 'io' can now be used during the
# experiment to access iohub devices and read iohub device events.
io = launchHubServer()
keyboard = io.devices.keyboard

# Check for and print any Keyboard events received for 5 seconds.
stime = getTime()
while getTime()-stime < 5.0:
    for e in keyboard.getEvents():
        print(e)
# Start the ioHub process. 'io' can now be used during the
# experiment to access iohub devices and read iohub device events.
io = launchHubServer()
keyboard = io.devices.keyboard

# Block until at least one key press occurs, then print the press events
# (see the waitForPresses() method below).
presses = keyboard.waitForPresses()
print(presses)
state
Returns all currently pressed keys as a dictionary of key – time values. The key is taken from the originating
press event's .key field. The time value is the time of the key press event.
Note that any pressed, or active, modifier keys are included in the return value.
Returns dict
waitForKeys(maxWait=None, keys=None, chars=None, mods=None, duration=None, etype=None,
clear=True, checkInterval=0.002)
Blocks experiment execution until at least one matching KeyboardEvent occurs, or until maxWait seconds
has passed since the method was called.
Keyboard events are filtered the same way as in the getKeys() method.
As soon as at least one matching KeyboardEvent occurs prior to maxWait, the matching events are returned
as a tuple.
Returned events are sorted by time.
Parameters
• maxWait – Maximum seconds the method waits for >=1 matching event. If <=0.0, the method
functions the same as getKeys(). If None, the method blocks indefinitely.
• keys – Include events where .key in keys.
• chars – Include events where .char in chars.
• mods – Include events where .modifiers include >=1 mods element.
• duration – Include KeyboardRelease events where .duration > duration or .duration <
-(duration).
• etype – Include events that match etype of Keyboard.KEY_PRESS or
Keyboard.KEY_RELEASE.
• clear – True (default) = clear returned events from event buffer, False = leave the
keyboard event buffer unchanged.
• checkInterval – The time between getKeys() calls while waiting. The method sleeps
between getKeys() calls, up until checkInterval*2.0 sec prior to the maxWait. After that
time, keyboard events are constantly checked until the method times out.
Returns tuple of KeyboardEvent instances, or ()
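For example, a sketch that waits up to five seconds for a space or return key event (the key name strings are
assumed to be those used by the iohub keyboard device, where space is ' '):

# Block until ' ' (space) or 'return' generates an event, or until
# 5 seconds have passed; an empty tuple is returned on timeout.
events = keyboard.waitForKeys(maxWait=5.0, keys=[' ', 'return'])
for kbe in events:
    print(kbe.key, kbe.type, kbe.time)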
waitForPresses(maxWait=None, keys=None, chars=None, mods=None, duration=None,
clear=True, checkInterval=0.002)
See the waitForKeys() method documentation.
This method is identical, but only returns KeyboardPress events.
waitForReleases(maxWait=None, keys=None, chars=None, mods=None, duration=None,
clear=True, checkInterval=0.002)
See the waitForKeys() method documentation.
This method is identical, but only returns KeyboardRelease events.
Keyboard Events
The Keyboard device can return two types of events, which represent key press and key release actions on the keyboard.
KeyboardPress Event
class psychopy.iohub.client.keyboard.KeyboardPress(ioe_array)
An iohub Keyboard device key press event.
char
The unicode value of the keyboard event, if available. This field is only populated when the keyboard event
results in a character that could be printable.
Returns unicode, ‘’ if no char value is available for the event.
device
The ioHubDeviceView that is associated with the event, i.e. the iohub device view for the device that
generated the event.
Returns ioHubDeviceView
modifiers
A list of any modifier keys that were pressed when this keyboard event occurred. Each element of the list
contains a keyboard modifier string constant. Possible values are:
• ‘lctrl’, ‘rctrl’
• ‘lshift’, ‘rshift’
• ‘lalt’, ‘ralt’ (labelled as ‘option’ keys on Apple Keyboards)
• ‘lcmd’, ‘rcmd’ (map to the ‘windows’ key(s) on Windows keyboards)
• ‘menu’
• ‘capslock’
• ‘numlock’
• ‘function’ (OS X only)
• ‘modhelp’ (OS X only)
If no modifiers were active when the event occurred, an empty list is returned.
Returns tuple
time
The time stamp of the event. Uses the same time base that is used by psychopy.core.getTime()
Returns float
type
The event type string constant.
Returns str
KeyboardRelease Event
class psychopy.iohub.client.keyboard.KeyboardRelease(ioe_array)
An iohub Keyboard device key release event.
duration
The duration (in seconds) of the key press. This is calculated by subtracting the associated keypress.time
from the current event.time.
If no matching keypress event was reported prior to this event, then 0.0 is returned. This can happen, for
example, when the key was pressed before psychopy started monitoring the device. It can also happen
when the keyboard.reset() method is called between the press and release event times.
Returns float
pressEventID
The event.id of the associated press event.
The key press id is 0 if no associated KeyboardPress event was found. See the duration property
documentation for details on when this can occur.
Returns unsigned int
char
The unicode value of the keyboard event, if available. This field is only populated when the keyboard event
results in a character that could be printable.
Returns unicode, ‘’ if no char value is available for the event.
device
The ioHubDeviceView that is associated with the event, i.e. the iohub device view for the device that
generated the event.
Returns ioHubDeviceView
modifiers
A list of any modifier keys that were pressed when this keyboard event occurred. Each element of the list
contains a keyboard modifier string constant. Possible values are:
• ‘lctrl’, ‘rctrl’
• ‘lshift’, ‘rshift’
• ‘lalt’, ‘ralt’ (labelled as ‘option’ keys on Apple Keyboards)
• ‘lcmd’, ‘rcmd’ (map to the ‘windows’ key(s) on Windows keyboards)
• ‘menu’
• ‘capslock’
• ‘numlock’
• ‘function’ (OS X only)
• ‘modhelp’ (OS X only)
If no modifiers were active when the event occurred, an empty list is returned.
Returns tuple
time
The time stamp of the event. Uses the same time base that is used by psychopy.core.getTime()
Returns float
type
The event type string constant.
Returns str
The Mouse device supports the following event types. Device events returned by getEvents() are automatically
converted to either namedtuple or dictionary objects with the same attributes / keys as the associated event class attributes.
The iohub common eye tracker interface provides a consistent way to configure and collect data from several
different eye tracker manufacturers, including GazePoint, SR Research, and Tobii.
Gazepoint
Platforms:
• Windows 7 / 10 only
Required Python Version:
• Python 3.6 +
Supported Models:
• Gazepoint GP3
To use your Gazepoint GP3 during an experiment you must first start the Gazepoint Control software on the computer
running PsychoPy.
EyeTracker Class
class psychopy.iohub.devices.eyetracker.hw.gazepoint.gp3.EyeTracker
To start iohub with a Gazepoint GP3 eye tracker device, add a GP3 device to the device dictionary passed to
launchHubServer or the experiment’s iohub_config.yaml:
eyetracker.hw.gazepoint.gp3.EyeTracker
Note: The Gazepoint control application must be running while using this interface.
Examples
1. Start ioHub with Gazepoint GP3 device and run tracker calibration:
from psychopy.iohub import launchHubServer
from psychopy.core import getTime, wait

iohub_config = {'eyetracker.hw.gazepoint.gp3.EyeTracker':
                {'name': 'tracker', 'device_timer': {'interval': 0.005}}}
io = launchHubServer(**iohub_config)

# Get the eye tracker device and run the calibration procedure.
tracker = io.devices.tracker
tracker.runSetupProcedure()

# Start recording and print any eye tracker events received for 2 seconds.
tracker.setRecordingState(True)
stime = getTime()
while getTime()-stime < 2.0:
    for e in tracker.getEvents():
        print(e)

# Check for and print current eye position every 100 msec.
stime = getTime()
while getTime()-stime < 5.0:
    print(tracker.getPosition())
    wait(0.1)

tracker.setRecordingState(False)
getConfiguration()
Retrieve the configuration settings used to create the eye tracker device instance.
Changing any values in the returned dictionary has no effect on the device state.
Parameters None –
Returns The dictionary of the device configuration settings used to create the device.
Return type (dict)
getEvents(*args, **kwargs)
Retrieve any DeviceEvents that have occurred since the last call to the device’s getEvents() or clearEvents()
methods.
Note that calling getEvents() at a device level does not change the Global Event Buffer’s contents.
Parameters
• event_type_id (int) – If specified, provides the ioHub DeviceEvent ID for which
events should be returned. Events that have occurred but do not match the event ID
specified are ignored. Event type IDs can be accessed via the EventConstants class; all
available event types are class attributes of EventConstants.
• clearEvents (int) – Can be used to indicate if the events being returned should also
be removed from the device event buffer. True (the default) indicates to remove events
being returned. False results in events being left in the device event buffer.
• asType (str) – Optional kwarg giving the object type to return events as. Valid values
are ‘namedtuple’ (the default), ‘dict’, ‘list’, or ‘object’.
Returns New events that the ioHub has received since the last getEvents() or clearEvents() call
to the device. Events are ordered by the ioHub time of each event, older event at index 0. The
event object type is determined by the asType parameter passed to the method. By default a
namedtuple object is returned for each event.
Return type (list)
getLastGazePosition()
The getLastGazePosition method returns the most recent eye gaze position received from the Eye Tracker.
This is the position on the calibrated 2D surface that the eye tracker is reporting as the current eye position.
The units are those in use by the ioHub Display device.
If binocular recording is being performed, the average position of both eyes is returned.
If no samples have been received from the eye tracker, or the eye tracker is not currently recording data,
None is returned.
Parameters None –
Returns
If this method is not supported by the eye tracker interface, EyeTrackerCon-
stants.EYETRACKER_INTERFACE_METHOD_NOT_SUPPORTED is returned.
None: If the eye tracker is not currently recording data or no eye samples have been received.
tuple: Latest (gaze_x,gaze_y) position of the eye(s)
Return type int
getLastSample()
The getLastSample method returns the most recent eye sample received from the Eye Tracker. The Eye
Tracker must be in a recording state for a sample event to be returned, otherwise None is returned.
Parameters None –
Returns
If this method is not supported by the eye tracker interface, EyeTrackerCon-
stants.FUNCTIONALITY_NOT_SUPPORTED is returned.
None: If the eye tracker is not currently recording data.
EyeSample: If the eye tracker is recording in a monocular tracking mode, the latest sample
event of this event type is returned.
BinocularEyeSample: If the eye tracker is recording in a binocular tracking mode, the latest
sample event of this event type is returned.
Return type int
getPosition()
The getPosition method is the same as the getLastGazePosition method, provided as a consistent cross
device method to access the current screen position reported by a device.
See getLastGazePosition for further details.
isRecordingEnabled()
isRecordingEnabled returns the recording state from the eye tracking device.
Parameters None –
Returns True == the device is recording data; False == Recording is not occurring
Return type bool
runSetupProcedure()
runSetupProcedure opens the GP3 Calibration window.
setRecordingState(recording)
setRecordingState is used to start or stop the recording of data from the eye tracking device.
Parameters recording (bool) – if True, the eye tracker will start recording available eye
data and sending it to the experiment program if data streaming was enabled for the device.
If recording == False, then the eye tracker stops recording eye data and streaming it to the
experiment.
If the eye tracker is already recording and setRecordingState(True) is called, the eye tracker will simply
continue recording and the method call is a no-op. The same applies when recording has already stopped
and setRecordingState(False) is called again.
Returns the current recording state of the eye tracking device
Return type bool
trackerSec()
Same as the GP3 implementation of trackerTime().
trackerTime()
Current eye tracker time in the eye tracker’s native time base. The GP3 system uses a sec.usec timebase
based on the Windows QPC.
Parameters None –
Returns current native eye tracker time in sec.msec format.
Return type float
The Gazepoint GP3 provides real-time access to binocular sample data. iohub creates a BinocularEyeSampleEvent
for each sample received from the GP3.
The following fields of the BinocularEyeSample event are supported:
class psychopy.iohub.devices.eyetracker.BinocularEyeSampleEvent(object)
The BinocularEyeSampleEvent event represents the eye position and eye attribute data collected from one frame
or reading of an eye tracker device that is recording both eyes of a participant.
Event Type ID: EventConstants.BINOCULAR_EYE_SAMPLE
Event Type String: ‘BINOCULAR_EYE_SAMPLE’
time
time of event, in sec.msec format, using psychopy timebase.
left_gaze_x
The horizontal position of the left eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data. Uses Gazepoint LPOGX field.
left_gaze_y
The vertical position of the left eye on the computer screen, in Display Coordinate Type Units. Calibration
must be done prior to reading (meaningful) gaze data. Uses Gazepoint LPOGY field.
left_raw_x
The uncalibrated x position of the left eye in a device specific coordinate space. Uses Gazepoint LPCX
field.
left_raw_y
The uncalibrated y position of the left eye in a device specific coordinate space. Uses Gazepoint LPCY
field.
left_pupil_measure_1
Left eye pupil diameter. (in camera pixels??). Uses Gazepoint LPD field.
right_gaze_x
The horizontal position of the right eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data. Uses Gazepoint RPOGX field.
right_gaze_y
The vertical position of the right eye on the computer screen, in Display Coordinate Type Units. Calibration
must be done prior to reading (meaningful) gaze data. Uses Gazepoint RPOGY field.
right_raw_x
The uncalibrated x position of the right eye in a device specific coordinate space. Uses Gazepoint RPCX
field.
right_raw_y
The uncalibrated y position of the right eye in a device specific coordinate space. Uses Gazepoint RPCY
field.
right_pupil_measure_1
Right eye pupil diameter. (in camera pixels??). Uses Gazepoint RPD field.
status
Indicates if eye sample contains ‘valid’ data for left and right eyes. 0 = Eye sample is OK. 2 = Right eye
data is likely invalid. 20 = Left eye data is likely invalid. 22 = Eye sample is likely invalid.
iohub also creates basic start and end fixation events by using Gazepoint FPOG* fields. Identical / duplicate fixation
events are created for the left and right eye.
class psychopy.iohub.devices.eyetracker.FixationStartEvent(object)
A FixationStartEvent is generated when the beginning of an eye fixation (in very general terms, a period of
relatively stable eye position) is detected by the eye tracker's sample parsing algorithms.
Event Type ID: EventConstants.FIXATION_START
Event Type String: ‘FIXATION_START’
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
gaze_x
The calibrated horizontal eye position on the computer screen at the start of the fixation. Units are same as
Display. Calibration must be done prior to reading (meaningful) gaze data. Uses Gazepoint FPOGX field.
gaze_y
The calibrated vertical eye position on the computer screen at the start of the fixation. Units are same as
Display. Calibration must be done prior to reading (meaningful) gaze data. Uses Gazepoint FPOGY field.
class psychopy.iohub.devices.eyetracker.FixationEndEvent(object)
A FixationEndEvent is generated when the end of an eye fixation (in very general terms, a period of relatively
stable eye position) is detected by the eye tracker's sample parsing algorithms.
Event Type ID: EventConstants.FIXATION_END
Event Type String: ‘FIXATION_END’
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
average_gaze_x
Average calibrated horizontal eye position during the fixation, specified in Display Units. Uses Gazepoint
FPOGX field.
average_gaze_y
Average calibrated vertical eye position during the fixation, specified in Display Units. Uses Gazepoint
FPOGY field.
duration
Duration of the fixation in sec.msec format. Uses Gazepoint FPOGD field.
eyetracker.hw.gazepoint.gp3.EyeTracker:
    # Indicates if the device should actually be loaded at experiment runtime.
    enable: True

    # The variable name of the device that will be used to access the ioHub Device class
    # during experiment run-time, via the devices.[name] attribute of the ioHub
    # connection or experiment runtime class.
    name: tracker

    # Should eye tracker events be saved to the ioHub DataStore file when the device
    # is recording data?
    save_events: True

    # Should eye tracker events be sent to the Experiment process when the device
    # is recording data?
    stream_events: True

    # How many eye events (including samples) should be saved in the ioHub event buffer
    # before old eye events start being replaced by new events. When the event buffer
    # reaches the maximum event length defined here, older events will start to be dropped.
    event_buffer_length: 1024

    # The GP3 implementation of the common eye tracker interface supports the
    # BinocularEyeSampleEvent event type.
    monitor_event_types: [BinocularEyeSampleEvent, FixationStartEvent, FixationEndEvent]

    device_timer:
        interval: 0.005

    calibration:
        # target_duration is the number of sec.msec that a calibration point should
        # be displayed before moving onto the next point.
        # (Sets the GP3 CALIBRATE_TIMEOUT)
        target_duration: 1.25
        # target_delay specifies the target animation duration in sec.msec.
        # (Sets the GP3 CALIBRATE_DELAY)
        target_delay: 0.5

    # manufacturer_name is used to store the name of the maker of the eye tracking
    # device. This is for informational purposes only.
    manufacturer_name: GazePoint
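These settings can also be supplied from a script, without an iohub_config.yaml, by nesting them in the device
dictionary passed to launchHubServer; a minimal sketch using the calibration keys shown above (the specific values
are illustrative):

from psychopy.iohub import launchHubServer

iohub_config = {'eyetracker.hw.gazepoint.gp3.EyeTracker':
                {'name': 'tracker',
                 'calibration': {'target_duration': 1.5,
                                 'target_delay': 0.75}}}
io = launchHubServer(**iohub_config)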
SR Research
Platforms:
• Windows 7 / 10
• Linux (not tested)
• macOS (not tested)
Required Python Version:
• Python 3.6 +
Supported Models:
• EyeLink 1000
• EyeLink 1000 Remote (not tested)
• EyeLink 1000 Plus (not tested)
The SR Research EyeLink implementation of the ioHub common eye tracker interface uses the pylink package written
by SR Research. If using a PsychoPy3 standalone installation, this package should already be included.
If you are manually installing PsychoPy3, please install the appropriate version of pylink. Downloads are available to
SR Research customers from their support website.
EyeTracker Class
class psychopy.iohub.devices.eyetracker.hw.sr_research.eyelink.EyeTracker
The SR Research EyeLink implementation of the Common Eye Tracker Interface can be used by providing the
following EyeTracker path as the device class in the iohub_config.yaml device settings file:
eyetracker.hw.sr_research.eyelink
Examples
1. Start ioHub with SR Research EyeLink 1000 and run tracker calibration:
from psychopy.iohub import launchHubServer
from psychopy.core import getTime, wait
iohub_config = {'eyetracker.hw.sr_research.eyelink.EyeTracker':
{'name': 'tracker',
'model_name': 'EYELINK 1000 DESKTOP',
'runtime_settings': {'sampling_rate': 500,
'track_eyes': 'RIGHT'}
}
}
io = launchHubServer(**iohub_config)

# Get the eye tracker device and run the calibration procedure.
tracker = io.devices.tracker
tracker.runSetupProcedure()

# Start recording and print any eye tracker events received for 2 seconds.
tracker.setRecordingState(True)
stime = getTime()
while getTime()-stime < 2.0:
    for e in tracker.getEvents():
        print(e)

# Check for and print current eye position every 100 msec.
stime = getTime()
while getTime()-stime < 5.0:
    print(tracker.getPosition())
    wait(0.1)

tracker.setRecordingState(False)
getEvents(*args, **kwargs)
Retrieve any DeviceEvents that have occurred since the last call to the device's getEvents() or clearEvents()
methods.
Returns New events that the ioHub has received since the last getEvents() or clearEvents() call
to the device. Events are ordered by the ioHub time of each event, older events at index 0. The
event object type is determined by the asType parameter passed to the method. By default a
namedtuple object is returned for each event.
Return type (list)
getLastGazePosition()
getLastGazePosition returns the most recent x,y eye position, in Display device coordinate space, received
by the ioHub server from the EyeLink device. In the case of binocular recording, and if both eyes are
successfully being tracked, then the average of the two eye positions is returned. If the eye tracker is not
recording or is not connected, then None is returned. The getLastGazePosition method returns the most
recent eye gaze position retieved from the eye tracker device. This is the position on the calibrated 2D
surface that the eye tracker is reporting as the current eye position. The units are in the units in use by the
Display device.
If binocular recording is being performed, the average position of both eyes is returned.
If no samples have been received from the eye tracker, or the eye tracker is not currently recording data,
None is returned.
Parameters None –
Returns
If the eye tracker is not currently recording data or no eye samples have been received.
tuple: Latest (gaze_x,gaze_y) position of the eye(s)
Return type None
getLastSample()
getLastSample returns the most recent EyeSampleEvent received from the EyeLink system. Any position
fields are in Display device coordinate space. If the eye tracker is not recording or is not connected, then
None is returned.
Parameters None –
Returns
If the eye tracker is not currently recording data.
EyeSample: If the eye tracker is recording in a monocular tracking mode, the latest sample
event of this event type is returned.
BinocularEyeSample: If the eye tracker is recording in a binocular tracking mode, the latest
sample event of this event type is returned.
Return type None
getPosition()
The getPosition method is the same as the getLastGazePosition method, provided as a consistent cross
device method to access the current screen position reported by a device.
See getLastGazePosition for further details.
isRecordingEnabled()
isRecordingEnabled returns True if the eye tracking device is currently connected and sending eye event
data to the ioHub server. If the eye tracker is not recording, or is not connected to the ioHub server, False
will be returned.
Parameters None –
Returns True == the device is recording data; False == Recording is not occurring
The EyeLink implementation of the ioHub eye tracker interface supports monocular or binocular eye samples as well
as fixation, saccade, and blink events.
Eye Samples
class psychopy.iohub.devices.eyetracker.MonocularEyeSampleEvent(object)
A MonocularEyeSampleEvent represents the eye position and eye attribute data collected from one frame or
reading of an eye tracker device that is recording from only one eye, or is recording from both eyes and averaging
the binocular data.
Event Type ID: EventConstants.MONOCULAR_EYE_SAMPLE
Event Type String: ‘MONOCULAR_EYE_SAMPLE’
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the sample. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
gaze_x
The horizontal position of the eye on the computer screen, in Display Coordinate Type Units. Calibration
must be done prior to reading (meaningful) gaze data.
gaze_y
The vertical position of the eye on the computer screen, in Display Coordinate Type Units. Calibration
must be done prior to reading (meaningful) gaze data.
angle_x
Horizontal eye angle.
angle_y
Vertical eye angle.
raw_x
The uncalibrated x position of the eye in a device specific coordinate space.
raw_y
The uncalibrated y position of the eye in a device specific coordinate space.
pupil_measure_1
Pupil size. Use pupil_measure1_type to determine what type of pupil size data was being saved by the
tracker.
pupil_measure1_type
Coordinate space type being used for pupil_measure_1.
ppd_x
Horizontal pixels per visual degree for this eye position as reported by the eye tracker.
ppd_y
Vertical pixels per visual degree for this eye position as reported by the eye tracker.
velocity_x
Horizontal velocity of the eye at the time of the sample; as reported by the eye tracker.
velocity_y
Vertical velocity of the eye at the time of the sample; as reported by the eye tracker.
velocity_xy
2D Velocity of the eye at the time of the sample; as reported by the eye tracker.
status
Indicates if eye sample contains ‘valid’ data. 0 = Eye sample is OK. 2 = Eye sample is invalid.
class psychopy.iohub.devices.eyetracker.BinocularEyeSampleEvent(object)
The BinocularEyeSampleEvent event represents the eye position and eye attribute data collected from one frame
or reading of an eye tracker device that is recording both eyes of a participant.
Event Type ID: EventConstants.BINOCULAR_EYE_SAMPLE
Event Type String: ‘BINOCULAR_EYE_SAMPLE’
time
time of event, in sec.msec format, using psychopy timebase.
left_gaze_x
The horizontal position of the left eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data.
left_gaze_y
The vertical position of the left eye on the computer screen, in Display Coordinate Type Units. Calibration
must be done prior to reading (meaningful) gaze data.
left_angle_x
The horizontal angle of the left eye relative to the head.
left_angle_y
The vertical angle of the left eye relative to the head.
left_raw_x
The uncalibrated x position of the left eye in a device specific coordinate space.
left_raw_y
The uncalibrated y position of the left eye in a device specific coordinate space.
left_pupil_measure_1
Left eye pupil diameter.
left_pupil_measure1_type
Coordinate space type being used for left_pupil_measure_1.
left_ppd_x
Pixels per degree for left eye horizontal position as reported by the eye tracker. Display distance must be
correctly set for this to be accurate at all.
left_ppd_y
Pixels per degree for left eye vertical position as reported by the eye tracker. Display distance must be
correctly set for this to be accurate at all.
left_velocity_x
Horizontal velocity of the left eye at the time of the sample; as reported by the eye tracker.
left_velocity_y
Vertical velocity of the left eye at the time of the sample; as reported by the eye tracker.
left_velocity_xy
2D Velocity of the left eye at the time of the sample; as reported by the eye tracker.
right_gaze_x
The horizontal position of the right eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data.
right_gaze_y
The vertical position of the right eye on the computer screen, in Display Coordinate Type Units. Calibration
must be done prior to reading (meaningful) gaze data.
right_angle_x
The horizontal angle of the right eye relative to the head.
right_angle_y
The vertical angle of the right eye relative to the head.
right_raw_x
The uncalibrated x position of the right eye in a device specific coordinate space.
right_raw_y
The uncalibrated y position of the right eye in a device specific coordinate space.
right_pupil_measure_1
Right eye pupil diameter.
right_pupil_measure1_type
Coordinate space type being used for right_pupil_measure_1.
right_ppd_x
Pixels per degree for right eye horizontal position as reported by the eye tracker. Display distance must be
correctly set for this to be accurate at all.
right_ppd_y
Pixels per degree for right eye vertical position as reported by the eye tracker. Display distance must be
correctly set for this to be accurate at all.
right_velocity_x
Horizontal velocity of the right eye at the time of the sample; as reported by the eye tracker.
right_velocity_y
Vertical velocity of the right eye at the time of the sample; as reported by the eye tracker.
right_velocity_xy
2D Velocity of the right eye at the time of the sample; as reported by the eye tracker.
status
Indicates if eye sample contains ‘valid’ data for left and right eyes. 0 = Eye sample is OK. 2 = Right eye
data is likely invalid. 20 = Left eye data is likely invalid. 22 = Eye sample is likely invalid.
Fixation Events
Successful eye tracker calibration must be performed prior to reading (meaningful) fixation event data.
class psychopy.iohub.devices.eyetracker.FixationStartEvent(object)
A FixationStartEvent is generated when the beginning of an eye fixation (in very general terms, a period of
relatively stable eye position) is detected by the eye tracker's sample parsing algorithms.
Event Type ID: EventConstants.FIXATION_START
Event Type String: ‘FIXATION_START’
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
gaze_x
Horizontal gaze position at the start of the event, in Display Coordinate Type Units.
gaze_y
Vertical gaze position at the start of the event, in Display Coordinate Type Units.
angle_x
Horizontal eye angle at the start of the event.
angle_y
Vertical eye angle at the start of the event.
pupil_measure_1
Pupil size. Use pupil_measure1_type to determine what type of pupil size data was being saved by the
tracker.
pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
ppd_x
Horizontal pixels per degree at start of event.
ppd_y
Vertical pixels per degree at start of event.
velocity_xy
2D eye velocity at the start of the event.
status
Event status as reported by the eye tracker.
class psychopy.iohub.devices.eyetracker.FixationEndEvent(object)
A FixationEndEvent is generated when the end of an eye fixation (in very general terms, a period of relatively
stable eye position) is detected by the eye tracker's sample parsing algorithms.
Event Type ID: EventConstants.FIXATION_END
Event Type String: ‘FIXATION_END’
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
duration
Duration of the event in sec.msec format.
start_gaze_x
Horizontal gaze position at the start of the event, in Display Coordinate Type Units.
start_gaze_y
Vertical gaze position at the start of the event, in Display Coordinate Type Units.
start_angle_x
Horizontal eye angle at the start of the event.
start_angle_y
Vertical eye angle at the start of the event.
start_pupil_measure_1
Pupil size at the start of the event.
start_pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
start_ppd_x
Horizontal pixels per degree at start of event.
start_ppd_y
Vertical pixels per degree at start of event.
start_velocity_xy
2D eye velocity at the start of the event.
end_gaze_x
Horizontal gaze position at the end of the event, in Display Coordinate Type Units.
end_gaze_y
Vertical gaze position at the end of the event, in Display Coordinate Type Units.
end_angle_x
Horizontal eye angle at the end of the event.
end_angle_y
Vertical eye angle at the end of the event.
end_pupil_measure_1
Pupil size at the end of the event.
end_pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
end_ppd_x
Horizontal pixels per degree at end of event.
end_ppd_y
Vertical pixels per degree at end of event.
end_velocity_xy
2D eye velocity at the end of the event.
average_gaze_x
Average horizontal gaze position during the event, in Display Coordinate Type Units.
average_gaze_y
Average vertical gaze position during the event, in Display Coordinate Type Units.
average_angle_x
Average horizontal eye angle during the event.
average_angle_y
Average vertical eye angle during the event.
average_pupil_measure_1
Average pupil size during the event.
average_pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
average_velocity_xy
Average 2D velocity of the eye during the event.
peak_velocity_xy
Peak 2D velocity of the eye during the event.
status
Event status as reported by the eye tracker.
Saccade Events
Successful eye tracker calibration must be performed prior to reading (meaningful) saccade event data.
class psychopy.iohub.devices.eyetracker.SaccadeStartEvent(object)
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
gaze_x
Horizontal gaze position at the start of the event, in Display Coordinate Type Units.
gaze_y
Vertical gaze position at the start of the event, in Display Coordinate Type Units.
angle_x
Horizontal eye angle at the start of the event.
angle_y
Vertical eye angle at the start of the event.
pupil_measure_1
Pupil size. Use pupil_measure1_type to determine what type of pupil size data was being saved by the
tracker.
pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
ppd_x
Horizontal pixels per degree at start of event.
ppd_y
Vertical pixels per degree at start of event.
velocity_xy
2D eye velocity at the start of the event.
status
Event status as reported by the eye tracker.
class psychopy.iohub.devices.eyetracker.SaccadeEndEvent(object)
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
duration
Duration of the event in sec.msec format.
start_gaze_x
Horizontal gaze position at the start of the event, in Display Coordinate Type Units.
start_gaze_y
Vertical gaze position at the start of the event, in Display Coordinate Type Units.
start_angle_x
Horizontal eye angle at the start of the event.
start_angle_y
Vertical eye angle at the start of the event.
start_pupil_measure_1
Pupil size at the start of the event.
start_pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
start_ppd_x
Horizontal pixels per degree at start of event.
start_ppd_y
Vertical pixels per degree at start of event.
start_velocity_xy
2D eye velocity at the start of the event.
end_gaze_x
Horizontal gaze position at the end of the event, in Display Coordinate Type Units.
end_gaze_y
Vertical gaze position at the end of the event, in Display Coordinate Type Units.
end_angle_x
Horizontal eye angle at the end of the event.
end_angle_y
Vertical eye angle at the end of the event.
end_pupil_measure_1
Pupil size at the end of the event.
end_pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
end_ppd_x
Horizontal pixels per degree at end of event.
end_ppd_y
Vertical pixels per degree at end of event.
end_velocity_xy
2D eye velocity at the end of the event.
average_gaze_x
Average horizontal gaze position during the event, in Display Coordinate Type Units.
average_gaze_y
Average vertical gaze position during the event, in Display Coordinate Type Units.
average_angle_x
Average horizontal eye angle during the event.
average_angle_y
Average vertical eye angle during the event.
average_pupil_measure_1
Average pupil size during the event.
average_pupil_measure1_type
EyeTrackerConstants.PUPIL_AREA
average_velocity_xy
Average 2D velocity of the eye during the event.
peak_velocity_xy
Peak 2D velocity of the eye during the event.
status
Event status as reported by the eye tracker.
Blink Events
class psychopy.iohub.devices.eyetracker.BlinkStartEvent(object)
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
status
Event status as reported by the eye tracker.
class psychopy.iohub.devices.eyetracker.BlinkEndEvent(object)
time
time of event, in sec.msec format, using psychopy timebase.
eye
Eye that generated the event. Either EyeTrackerConstants.LEFT_EYE or
EyeTrackerConstants.RIGHT_EYE.
duration
Blink duration, in sec.msec format.
status
Event status as reported by the eye tracker.
eyetracker.hw.sr_research.eyelink.EyeTracker:
    # Should eye tracker events be sent to the Experiment process when the device
    # is recording data?
    stream_events: True

    # The EyeLink implementation supports monocular and binocular samples as well
    # as fixation, saccade, and blink events.
    monitor_event_types: [MonocularEyeSampleEvent, BinocularEyeSampleEvent,
                          FixationStartEvent, FixationEndEvent,
                          SaccadeStartEvent, SaccadeEndEvent,
                          BlinkStartEvent, BlinkEndEvent]

    calibration:
        # IMPORTANT: Note that while the gaze position data provided by ioHub
        # will be in the Display's coordinate system, the EyeLink internally
        # always uses a 0,0 pixel_width, pixel_height coordinate system,
        # since internally calibration point positions are given as integers;
        # if the actual display coordinate system was passed to the EyeLink,
        # coordinate types like deg and norm would become very coarse in
        # possible target locations during calibration.

    # enable_interface_without_connection: Allows the interface to be used when the
    # actual eye tracking device is not available, for example stimulus presentation
    # or other non eye tracker dependent experiment functionality.
    enable_interface_without_connection: False

    runtime_settings:
        # sampling_rate: Specify the desired sampling rate to use. Actual
        # sample rates depend on the model being used.
        # Overall, possible rates are 250, 500, 1000, and 2000 Hz.
        sampling_rate: 250

        # The sample_filtering section can contain multiple key : value entries if
        # the tracker implementation supports it, where each key is a sample stream
        # type, and each value is the associated filter level for that sample data
        # stream. Any missing filter type will have its filter level set to FILTER_OFF.
        sample_filtering:
            FILTER_ALL: FILTER_LEVEL_OFF

        vog_settings:
            # pupil_measure_types: sr_research.eyelink.EyeTracker supports one
            # pupil_measure_type parameter that is used for all eyes being tracked.
            # Valid options are: PUPIL_AREA, PUPIL_DIAMETER
            pupil_measure_types: PUPIL_AREA

            # tracking_mode: Define whether the eye tracker should run in a pupil only
            # mode or in a pupil-cr mode. Valid options are:
            # PUPIL_CR_TRACKING, PUPIL_ONLY_TRACKING
            # Depending on other settings on the eyelink Host and the model and mode of
            # eye tracker being used, this parameter may not be able to set the
            # specified tracking mode. Check the mode listed on the camera setup
            # screen of the Host PC after the experiment has started to confirm whether
            # the requested tracking mode was enabled. IMPORTANT: only use
            # PUPIL_ONLY_TRACKING mode if using an EyeLink II system, or if using
            # the EyeLink 1000 in a head **fixed** setup. Any head movement
            # when using PUPIL_ONLY_TRACKING will result in eye position signal drift.
            tracking_mode: PUPIL_CR_TRACKING

            # pupil_center_algorithm: valid options are ELLIPSE_FIT or CENTROID_FIT.
            pupil_center_algorithm: ELLIPSE_FIT

    # model_name: allows the definition of the eye tracker model being used.
    # For the eyelink implementation, valid values are:
    # 'EYELINK 1000 DESKTOP', 'EYELINK 1000 TOWER', 'EYELINK 1000 REMOTE',
    # 'EYELINK 1000 LONG RANGE', 'EYELINK 2'
    model_name: EYELINK 1000 DESKTOP

    # serial_number: The serial number of the specific instance of the device used
    # can be specified here. It is not used by ioHub, so it is FYI only.
    serial_number: N/A
Tobii
Platforms:
• Windows 7 / 10
• Linux (not tested)
• macOS (not tested)
Required Python Version:
• Python 3.6
Supported Models:
Any Tobii model that supports screen based calibration and can use the tobii_research API. Tested using a Tobii
T120.
To use the ioHub interface for Tobii, the Tobii Pro SDK must be installed in your Python environment. If using a
recent standalone installation of PsychoPy3, this package should already be included.
To install tobii-research type:
pip install tobii-research
EyeTracker Class
class psychopy.iohub.devices.eyetracker.hw.tobii.EyeTracker
To start iohub with a Tobii eye tracker device, add the Tobii device to the dictionary passed to launchHubServer
or the experiment’s iohub_config.yaml:
eyetracker.hw.tobii.EyeTracker
Examples
from psychopy.iohub import launchHubServer
from psychopy.core import getTime

iohub_config = {'eyetracker.hw.tobii.EyeTracker':
                {'name': 'tracker', 'runtime_settings': {'sampling_rate': 120}}}
io = launchHubServer(**iohub_config)

# Get the eye tracker device, calibrate, and start recording.
tracker = io.devices.tracker
tracker.runSetupProcedure()
tracker.setRecordingState(True)

stime = getTime()
while getTime()-stime < 2.0:
    for e in tracker.getEvents():
        print(e)
tracker.setRecordingState(False)
isRecordingEnabled()
isRecordingEnabled returns the recording state from the eye tracking device.
Returns True == the device is recording data; False == Recording is not occurring
Return type bool
runSetupProcedure()
runSetupProcedure performs a calibration routine for the Tobii eye tracking system.
setRecordingState(recording)
setRecordingState is used to start or stop the recording of data from the eye tracking device.
Parameters recording (bool) – if True, the eye tracker will start recording available eye
data and sending it to the experiment program if data streaming was enabled for the device.
If recording == False, then the eye tracker stops recording eye data and streaming it to the
experiment.
If the eye tracker is already recording and setRecordingState(True) is called, the eye tracker will simply
continue recording and the method call is a no-op. The same applies when recording has already stopped
and setRecordingState(False) is called again.
Returns the current recording state of the eye tracking device
Return type bool
The Tobii implementation creates a BinocularEyeSampleEvent for each gaze sample received via the tobii_research
API. The left-eye fields mirror the right-eye fields documented below:
left_gaze_x
The horizontal position of the left eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data. Uses tobii_research gaze data
'left_gaze_point_on_display_area'[0] field.
left_gaze_y
The vertical position of the left eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data. Uses tobii_research gaze data
'left_gaze_point_on_display_area'[1] field.
left_eye_cam_x
The left x eye position in the eye tracker's 3D coordinate space. Uses tobii_research gaze data
'left_gaze_origin_in_trackbox_coordinate_system'[0] field.
left_eye_cam_y
The left y eye position in the eye tracker's 3D coordinate space. Uses tobii_research gaze data
'left_gaze_origin_in_trackbox_coordinate_system'[1] field.
left_eye_cam_z
The left z eye position in the eye tracker's 3D coordinate space. Uses tobii_research gaze data
'left_gaze_origin_in_trackbox_coordinate_system'[2] field.
left_pupil_measure_1
Left eye pupil diameter in mm. Uses tobii_research gaze data ‘left_pupil_diameter’ field.
right_gaze_x
The horizontal position of the right eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data. Uses tobii_research gaze data
‘right_gaze_point_on_display_area’[0] field.
right_gaze_y
The vertical position of the right eye on the computer screen, in Display Coordinate Type Units.
Calibration must be done prior to reading (meaningful) gaze data. Uses tobii_research gaze data
‘right_gaze_point_on_display_area’[1] field.
right_eye_cam_x
The right x eye position in the eye trackers 3D coordinate space. Uses tobii_research gaze data
‘right_gaze_origin_in_trackbox_coordinate_system’[0] field.
right_eye_cam_y
The right y eye position in the eye trackers 3D coordinate space. Uses tobii_research gaze data
‘right_gaze_origin_in_trackbox_coordinate_system’[1] field.
right_eye_cam_z
The right z eye position in the eye trackers 3D coordinate space. Uses tobii_research gaze data
‘right_gaze_origin_in_trackbox_coordinate_system’[2] field.
right_pupil_measure_1
Right eye pupil diameter in mm. Uses tobii_research gaze data ‘right_pupil_diameter’ field.
status
Indicates if eye sample contains ‘valid’ data for left and right eyes. 0 = Eye sample is OK. 2 = Right eye
data is likely invalid. 20 = Left eye data is likely invalid. 22 = Eye sample is likely invalid.
eyetracker.hw.tobii.EyeTracker:
    # Indicates if the device should actually be loaded at experiment runtime.
    enable: True

    # The variable name of the device that will be used to access the ioHub Device class
    # during experiment run-time, via the devices.[name] attribute of the ioHub
    # connection or experiment runtime class.
    name: tracker

    # Should eye tracker events be saved to the ioHub DataStore file when the device
    # is recording data?
    save_events: True

    # Should eye tracker events be sent to the Experiment process when the device
    # is recording data?
    stream_events: True

    # How many eye events (including samples) should be saved in the ioHub event
    # buffer before old eye events start being replaced by new events.
    event_buffer_length: 1024

    # The Tobii implementation of the common eye tracker interface supports the
    # BinocularEyeSampleEvent event type.
    monitor_event_types: [BinocularEyeSampleEvent,]

    # The model name of the Tobii device that you wish to connect to can be specified
    # here, and only Tobii systems matching that model name will be considered as
    # possible candidates for connection.
    # If you only have one Tobii system connected to the computer, this field can
    # just be left empty.
    model_name:

    # The serial number of the Tobii device that you wish to connect to can be
    # specified here, and only the Tobii system matching that serial number will be
    # connected to, if found.
    # If you only have one Tobii system connected to the computer, this field can
    # just be left empty, in which case the first Tobii device found will be
    # connected to.
    serial_number:

    calibration:
        # Should the PsychoPy Window created by the PsychoPy Process be minimized
        # before displaying the Calibration Window created by the ioHub Process.
        minimize_psychopy_win: False

    runtime_settings:
        # The supported sampling rates for Tobii are model dependent.
        # Using a default of 60 Hz, with the assumption it is the most common.
        sampling_rate: 60

    # manufacturer_name is used to store the name of the maker of the eye tracking
    # device. This is for informational purposes only.
    manufacturer_name: Tobii Technology
Computer Specifications
The design and requirements of your experiment will obviously influence the minimum computer specification
needed to provide good timing / performance.
The dual process design when running using psychopy.iohub also influences the minimum suggested specifications as
follows:
• Intel i5 or i7 CPU. A minimum of two CPU cores is needed.
• 8 GB of RAM
• Windows 7 +, OS X 10.7.5 +, or Linux Kernel 2.6 +
Please see the Recommended hardware section for further information that applies to PsychoPy in general.
Usage Considerations
Software Requirements
When running PsychoPy using the macOS or Windows standalone distribution, all the necessary python package
dependencies have already been installed, so the rest of this section can be skipped.
Note: Hardware specific software may need to be installed depending on the device being used. See the
documentation page for the specific device hardware in question for further details.
If psychopy.iohub is being manually installed, first ensure the python packages listed in the dependencies section of
the manual are installed.
psychopy.iohub requires the following extra dependencies to be installed:
1. psutil (version 1.2+): a cross-platform process and system utilities module for Python.
2. msgpack: like JSON, but fast and small.
3. greenlet: a spin-off of Stackless, a version of CPython that supports micro-threads called "tasklets".
4. gevent (version 1.0 or greater): a coroutine-based Python networking library.
5. numexpr: a fast numerical array expression evaluator for Python and NumPy.
6. pytables: a package for managing hierarchical datasets.
7. pyYAML: a YAML parser and emitter for Python.
Windows installations only:
1. pyHook: a Python wrapper for global input hooks in Windows.
Linux installations only:
1. python-xlib: the Python X11R6 client-side implementation.
OS X installations only:
1. pyobjc: a Python Objective-C binding.
Provides functions for logging error and other messages to one or more files and/or the console, using python’s own
logging module. Some warning messages and error messages are generated by PsychoPy itself. The user can generate
more using the functions in this module.
There are various levels for logged messages with the following order of importance: ERROR, WARNING, DATA,
EXP, INFO and DEBUG.
When setting the level for a particular log target (e.g. LogFile) the user can set the minimum level that is required
for messages to enter the log. For example, setting a level of INFO will result in INFO, EXP, DATA, WARNING and
ERROR messages to be recorded but not DEBUG messages.
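For example, a minimal sketch creating a log target at the EXP level using the LogFile class:

from psychopy import logging

# Record EXP, DATA, WARNING and ERROR messages (but not INFO or DEBUG),
# overwriting the file on each run.
logFile = logging.LogFile('lastRun.log', level=logging.EXP, filemode='w')
logging.exp('experiment started')
logging.flush()  # write any pending messages to all targets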
By default, PsychoPy will record messages of WARNING level and above to the console. The user can silence console
output by setting it to receive only CRITICAL messages (which PsychoPy itself doesn't generate), using the commands:
from psychopy import logging
logging.console.setLevel(logging.CRITICAL)
psychopy.logging.addLevel(level, levelName)
Associate ‘levelName’ with ‘level’.
This is used when converting levels to text during message formatting.
psychopy.logging.critical(message)
Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher
psychopy.logging.data(msg, t=None, obj=None)
Log a message about data collection (e.g. a key press)
usage:: log.data(message)
Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DATA or higher
psychopy.logging.debug(msg, t=None, obj=None)
Log a debugging message (not likely to be wanted once experiment is finalised)
usage:: log.debug(message)
Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DEBUG or higher
psychopy.logging.error(message)
Send the message to any receiver of logging info (e.g. a LogFile) of level log.ERROR or higher
psychopy.logging.exp(msg, t=None, obj=None)
Log a message about the experiment (e.g. a new trial, or end of a stimulus)
usage: log.exp(message)
Sends the message to any receiver of logging info (e.g. a LogFile) of level log.EXP or higher
psychopy.logging.fatal(msg, t=None, obj=None)
log.critical(message) Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL
or higher
psychopy.logging.flush(logger=<psychopy.logging._Logger object>)
Send current messages in the log to all targets
psychopy.logging.getLevel(level)
Return the textual representation of logging level ‘level’.
If the level is one of the predefined levels (CRITICAL, ERROR, WARNING, INFO, DEBUG) then you get the
corresponding string. If you have associated levels with names using addLevelName then the name you have
associated with ‘level’ is returned.
If a numeric value corresponding to one of the defined levels is passed in, the corresponding string representation
is returned.
Otherwise, the string “Level %s” % level is returned.
psychopy.logging.info(msg, t=None, obj=None)
Log some information - maybe useful, maybe not
usage: log.info(message)
Sends the message to any receiver of logging info (e.g. a LogFile) of level log.INFO or higher
psychopy.logging.log(msg, level, t=None, obj=None)
Log a message
usage: log(msg, level, t=t, obj=obj)
Log the msg, at a given level on the root logger
psychopy.logging.setDefaultClock(clock)
Set the default clock to be used to reference all logging times. Must be a psychopy.core.Clock object.
Beware that if you reset the clock during the experiment then the resets will be reflected here. That might be
useful if you want your logs to be reset on each trial, but probably not.
psychopy.logging.warn(msg, t=None, obj=None)
log.warning(message)
Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher
psychopy.logging.warning(message)
Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher
8.12.1 flush()
psychopy.logging.flush(logger=<psychopy.logging._Logger object>)
Send current messages in the log to all targets
8.12.2 setDefaultClock()
psychopy.logging.setDefaultClock(clock)
Set the default clock to be used to reference all logging times. Must be a psychopy.core.Clock object.
Beware that if you reset the clock during the experiment then the resets will be reflected here. That might be
useful if you want your logs to be reset on each trial, but probably not.
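A minimal sketch of referencing log timestamps to your own experiment clock (the clock name is illustrative):

from psychopy import core, logging

expClock = core.Clock()  # zeroed at creation
logging.setDefaultClock(expClock)  # all subsequent log times use this clock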
8.13.1 Overview
AudioCapture() allows easy audio recording and saving of arbitrary sounds to a file (wav format). AudioCapture will
likely be replaced entirely by AdvAudioCapture in the near future.
AdvAudioCapture() can do everything AudioCapture does, and also allows onset-marker sound insertion and detection, loudness computation (RMS audio “power”), and lossless file compression (flac). The Builder microphone component now uses AdvAudioCapture by default.
pyo’s downsamp() function can reduce a 48,000 Hz recording to 16,000 Hz in about 0.02s (it uses integer step sizes). So recording at 48kHz will generate high-quality archival data, and permit easy downsampling.
outputDevice, bufferSize: set these parameters on the pyoSndServer before booting; None means use
pyo’s default values
class psychopy.microphone.AdvAudioCapture(name='advMic', filename='', saveDir='', sampletype=0, buffering=16, chnl=0, stereo=True, autoLog=True)
Class extends AudioCapture, plays marker sound as a “start” indicator.
Has method for retrieving the marker onset time from the file, to allow calculation of vocal RT (or other sound-
based RT).
See Coder demo > input > latencyFromTone.py
compress(keep=False)
Compress using FLAC (lossless compression).
getLoudness()
Return the RMS loudness of the saved recording.
getMarkerInfo()
Returns (hz, duration, volume) of the marker sound. Custom markers always return 0 hz (regardless of the
sound).
getMarkerOnset(chunk=128, secs=0.5, filename=”)
Return (onset, offset) time of the first marker within the first secs of the saved recording.
Has approx ~1.33ms resolution at 48000Hz, chunk=64. Larger chunks can speed up processing times, at
a sacrifice of some resolution, e.g., to pre-process long recordings with multiple markers.
If given a filename, it will first set that file as the one to work with, and then try to detect the onset marker.
playMarker()
Plays the current marker sound. This is automatically called at the start of recording, but can be called
anytime to insert a marker.
playback(block=True, loops=0, stop=False, log=True)
Plays the saved .wav file, as just recorded or resampled. Execution blocks by default, but can return
immediately with block=False.
loops : number of extra repetitions; 0 = play once
stop : True = immediately stop ongoing playback (if there is one), and return
record(sec, filename=”, block=False)
Starts recording and plays an onset marker tone just prior to returning. The idea is that the start of the tone
in the recording indicates when this method returned, to enable you to sync a known recording onset with
other events.
resample(newRate=16000, keep=True, log=True)
Re-sample the saved file to a new rate, return the full path.
Can take several visual frames to resample a 2s recording.
The default values for resample() are for Google-speech, keeping the original (presumably recorded at
48kHz) to archive. A warning is generated if the new rate is not an integer factor / multiple of the old rate.
To control anti-aliasing, use pyo.downsamp() or upsamp() directly.
reset(log=True)
Restores to fresh state, ready to record again
setFile(filename)
Sets the name of the file to work with.
setMarker(tone=19000, secs=0.015, volume=0.03, log=True)
Sets the onset marker, where tone is either in hz or a custom sound.
The default tone (19000 Hz) is recommended for auto-detection, as being easier to isolate from speech
sounds (and so reliable to detect). The default duration and volume are appropriate for a quiet setting
such as a lab testing room. A louder volume, longer duration, or both may give better results when
recording loud sounds or in noisy environments, and will be auto-detected just fine (even more easily). If
the hardware microphone in use is not physically near the speaker hardware, a louder volume is likely to
be required.
Custom sounds cannot be auto-detected, but are supported anyway for presentation purposes. E.g., a
recording of someone saying “go” or “stop” could be passed as the onset marker.
stop(log=True)
Interrupt a recording that is in progress; close & keep the file.
Ends the recording before the duration that was initially specified. The same file name is retained, with the
same onset time but a shorter duration.
The same recording cannot be resumed after a stop (it is not a pause), but you can start a new one.
uncompress(keep=False)
Uncompress from FLAC to .wav format.
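A minimal recording sketch using these methods (names, durations and directories are illustrative; switchOn() initializes the pyo sound server used by the microphone classes):

from psychopy import microphone

microphone.switchOn(sampleRate=48000)  # start the audio server
mic = microphone.AdvAudioCapture(name='mic', saveDir='recordings')
mic.record(sec=3, block=True)  # plays the onset marker, then records
print(mic.getLoudness())  # RMS of the saved recording
mic.compress()  # lossless FLAC compression of the saved file
microphone.switchOff()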
Google’s speech-to-text API is no longer available. AT&T, IBM, and wit.ai offer similar (paid) services.
8.13.4 Misc
Functions for file-oriented Discrete Fourier Transform and RMS computation are also provided.
psychopy.microphone.wav2flac(path, keep=True, level=5)
Lossless compression: convert .wav file (on disk) to .flac format.
If path is a directory name, convert all .wav files in the directory.
keep to retain the original .wav file(s), default True.
level is compression level: 0 is fastest but larger, 8 is slightly smaller but much slower.
psychopy.microphone.flac2wav(path, keep=True)
Uncompress: convert .flac file (on disk) to .wav format (new file).
If path is a directory name, convert all .flac files in the directory.
keep to retain the original .flac file(s), default True.
psychopy.microphone.getDft(data, sampleRate=None, wantPhase=False)
Compute and return magnitudes of numpy.fft.fft() of the data.
If given a sample rate (samples/sec), will return (magn, freq). If wantPhase is True, phase in radians is also
returned (magn, freq, phase). data should have power-of-2 samples, or will be truncated.
psychopy.microphone.getRMS(data)
Compute and return the audio power (“loudness”).
Uses numpy.std() as RMS. std() is same as RMS if the mean is 0, and .wav data should have a mean of 0.
Returns an array if given stereo data (RMS computed within-channel).
data can be an array (1D, 2D) or filename; .wav format only. data from .wav files will be normalized to -1..+1
before RMS is computed.
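For instance (a sketch; the test signal is illustrative):

import numpy as np
from psychopy.microphone import getRMS, getDft

data = np.sin(2 * np.pi * 440 * np.arange(1024) / 48000.)  # 440 Hz test tone
rms = getRMS(data)  # ~0.707 for a unit-amplitude sine
magn, freq = getDft(data, sampleRate=48000)  # needs power-of-2 samples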
convertToPix(vertices, pos, units, win)
    Takes vertices and position, combines and converts to pixels from any unit.
cm2pix(cm, monitor)
    Convert size in cm to size in pixels for a given Monitor object.
cm2deg(cm, monitor[, correctFlat])
    Convert size in cm to size in degrees for a given Monitor object.
deg2cm(degrees, monitor[, correctFlat])
    Convert size in degrees to size in cm for a given Monitor object.
deg2pix(degrees, monitor[, correctFlat])
    Convert size in degrees to size in pixels for a given Monitor object.
pix2cm(pixels, monitor)
    Convert size in pixels to size in cm for a given Monitor object.
pix2deg(pixels, monitor[, correctFlat])
    Convert size in pixels to size in degrees for a given Monitor object.
float_uint8(inarray)
    Converts arrays, lists, tuples and floats ranging -1:1 into an array of Uint8s ranging 0:255.
uint8_float(inarray)
    Converts arrays, lists, tuples and UINTs ranging 0:255 into an array of floats ranging -1:1.
float_uint16(inarray)
    Converts arrays, lists, tuples and floats ranging -1:1 into an array of Uint16s ranging 0:2^16.
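For example (a sketch assuming these functions are imported from psychopy.tools.monitorunittools; the monitor settings are illustrative):

from psychopy import monitors
from psychopy.tools.monitorunittools import deg2pix, pix2deg

mon = monitors.Monitor('testMonitor')
mon.setWidth(53.0)            # screen width in cm
mon.setSizePix([1920, 1080])  # screen size in pixels
mon.setDistance(57.0)         # viewing distance in cm
sizePix = deg2pix(2.0, mon)      # 2 deg of visual angle in pixels
sizeDeg = pix2deg(sizePix, mon)  # round trip back to ~2.0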
Most users won’t need to use the code here. In general the Monitor Center interface is sufficient, and monitors set up
that way can be passed as strings to Window s. However, if there is some aspect of the normal calibration that you
wish to override, you can do so in code, e.g.:
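A minimal sketch (the monitor name and distance are illustrative):

from psychopy import monitors

mon = monitors.Monitor('testMonitor')  # load an existing calibration
mon.setDistance(114)  # override the viewing distance (cm) for this session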
You might also want to fetch the Photometer class for conducting your own calibrations
8.15.1 Monitor
getDKL_RGB(RECOMPUTE=False)
Returns the DKL->RGB conversion matrix. If one has been saved this will be returned. Otherwise, if
power spectra are available for the monitor a matrix will be calculated.
getDistance()
Returns distance from viewer to the screen in cm, or None if not known
getGamma()
Returns just the gamma value (not the whole grid)
getGammaGrid()
Gets the min, max and gamma values for each gun
getLMS_RGB(recompute=False)
Returns the LMS->RGB conversion matrix. If one has been saved this will be returned. Otherwise (if
power spectra are available for the monitor) a matrix will be calculated.
getLevelsPost()
Gets the measured luminance values from last calibration TEST
getLevelsPre()
Gets the measured luminance values from last calibration
getLinearizeMethod()
Gets the method that this monitor is using to linearize the guns
getLumsPost()
Gets the measured luminance values from last calibration TEST
getLumsPre()
Gets the measured luminance values from last calibration
getMeanLum()
Returns the mean luminance of the screen if explicitly stored
getNotes()
Notes about the calibration
getPsychopyVersion()
Returns the version of PsychoPy that was used to create this calibration
getSizePix()
Returns the size of the current calibration in pixels, or None if not defined
getSpectra()
Gets the wavelength values from the last spectrometer measurement (if available)
usage:
nm, power = monitor.getSpectra()
getUseBits()
Was this calibration carried out with a Bits++ box
getWidth()
Of the viewable screen in cm, or None if not known
lineariseLums(desiredLums, newInterpolators=False, overrideGamma=None)
Equivalent of linearizeLums().
linearizeLums(desiredLums, newInterpolators=False, overrideGamma=None)
lums should be uncalibrated luminance values (e.g. a linear ramp) ranging 0:1
setLumsPre(lums)
Sets the last set of luminance values measured during calibration
setMeanLum(meanLum)
Records the mean luminance (for reference only)
setNotes(notes)
For you to store notes about the calibration
setPsychopyVersion(version)
To store the version of PsychoPy that this calibration used
setSizePix(pixels)
Set the size of the screen in pixels x,y
setSpectra(nm, rgb)
Sets the phosphor spectra measured by the spectrometer
setUseBits(usebits)
DEPRECATED: Use the new hardware classes to control these devices
setWidth(width)
Of the viewable screen (cm)
8.15.2 GammaCalculator
8.15.3 getAllMonitors()
psychopy.monitors.getAllMonitors()
Find the names of all monitors for which calibration files exist
8.15.4 findPR650()
8.15.5 getLumSeriesPR650()
8.15.6 getRGBspectra()
Params
• photometer – a photometer object, or the name of a serial port on which a photometer might be found (not recommended)
8.15.7 gammaFun()
8.15.8 gammaInvFun()
8.15.9 makeDKL2RGB()
psychopy.monitors.makeDKL2RGB(nm, powerRGB)
Creates a 3x3 DKL->RGB conversion matrix from the spectral input powers
8.15.10 makeLMS2RGB()
psychopy.monitors.makeLMS2RGB(nm, powerRGB)
Creates a 3x3 LMS->RGB conversion matrix from the spectral input powers
This module provides read / write access to the parallel port for Linux or Windows.
The Parallel class described below will attempt to load whichever parallel port driver is first found on your system
and should suffice in most instances. If you need to use a specific driver then, instead of using ParallelPort
shown below you can use one of the following as drop-in replacements, forcing the use of a specific driver:
• psychopy.parallel.PParallelInpOut
• psychopy.parallel.PParallelDLPortIO
• psychopy.parallel.PParallelLinux
Either way, each instance of the class can provide access to a different parallel port.
There is also a legacy API which consists of the routines which are directly in this module. That API assumes you
only ever want to use a single parallel port at once.
class psychopy.parallel.ParallelPort(address)
Class for read/write access to the parallel port on Windows & Linux
Usage:
port.setData(4)
port.readPin(2)
port.setPin(2, 1)
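A fuller sketch (the address is an assumption; use the address of the port on your own machine):

from psychopy import parallel

port = parallel.ParallelPort(address=0x0378)  # a common LPT1 address
port.setData(4)         # set the 8 data pins from one byte
port.setPin(2, 1)       # or drive a single pin
print(port.readPin(3))  # read an input pin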
This is just a dummy constructor to avoid errors when the parallel port cannot be initiated
readData()
Return the value currently set on the data pins (2-9)
readPin(pinNumber)
Determine whether a desired (input) pin is high(1) or low(0).
Pins 2-13 and 15 are currently read here
setData(data)
Set the data to be presented on the parallel port (one ubyte). Alternatively you can set the value of each
pin (data pins are pins 2-9 inclusive) using setPin()
Examples:
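For instance (continuing with the port object created above; pin 2 is the least-significant bit):

port.setData(0)    # sets all data pins low
port.setData(255)  # sets all data pins high
port.setData(3)    # sets pins 2 and 3 high, the rest low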
We would strongly recommend you use the class above instead: these are provided for backwards compatibility only.
parallel.setPortAddress()
Set the memory address or device node of your parallel port, to be used in subsequent commands
Common port addresses:
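Typical values (conventional defaults, not guaranteed; check your own hardware):
• LPT1 = 0x0378 or 0x03BC
• LPT2 = 0x0278 or 0x0378
• LPT3 = 0x0278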
This routine will attempt to find a usable driver depending on your platform
parallel.setData()
Set the data to be presented on the parallel port (one ubyte). Alternatively you can set the value of each pin (data
pins are pins 2-9 inclusive) using setPin()
Examples:
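For instance (address illustrative; see the common addresses above):

from psychopy import parallel

parallel.setPortAddress(0x0378)
parallel.setData(0)    # sets all pins low
parallel.setData(255)  # sets all pins high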
parallel.setPin(pinNumber, state)
Set a desired pin to be high (1) or low (0).
Only pins 2-9 (incl) are normally used for data output:
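For example:

parallel.setPin(3, 1)  # sets pin 3 high
parallel.setPin(3, 0)  # sets pin 3 low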
parallel.readPin()
Determine whether a desired (input) pin is high(1) or low(0).
Pins 2-13 and 15 are currently read here
You can set preferences on a per-experiment basis. For example, if you would like to use a specific audio library, but
don’t want to touch your user settings in general, you can import preferences and set the option audioLib accordingly:

import psychopy
psychopy.prefs.hardware['audioLib'] = ['pyo', 'pygame']
print(psychopy.prefs)
# prints the location of the user prefs file and all the current vals

!!IMPORTANT!! You must import the sound module AFTER setting the preferences. To check that you are getting
what you want (don’t do this in your actual experiment):

print(sound.Sound)

8.17.1 Preferences
Use the instance of prefs, as above, rather than the Preferences class directly if you want to affect the script
that’s running.
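A minimal sketch of the required ordering (the backend list is illustrative):

from psychopy import prefs
prefs.hardware['audioLib'] = ['sounddevice', 'pyo']  # set BEFORE importing sound

from psychopy import sound  # now picks up the audioLib preference
print(sound.Sound)  # confirms which backend class is in use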
loadAll()
Load the user prefs and the application data
loadUserPrefs()
load user prefs, if any; don’t save to a file because doing so will break easy_install. Saving to files within the psychopy/ directory is fine, e.g. for key-bindings, but outside it (where user prefs live) is not allowed by easy_install (security risk)
resetPrefs()
removes userPrefs.cfg, does not touch appData.cfg
restoreBadPrefs(cfg, result)
result = result of validate
saveAppData()
Save the various settings to the appropriate files (or discard, in some cases)
saveUserPrefs()
Validate and save the various settings to the appropriate files (or discard, in some cases)
validate()
Validate (user) preferences and reset invalid settings to defaults
PsychoPy is compatible with Chris Liechti’s pyserial package. You can use it like this:
import serial
ser = serial.Serial(0, 19200, timeout=1)  # open first serial port
# ser = serial.Serial('/dev/ttyS1', 19200, timeout=1)  # or similar for Mac/Linux machines
ser.write(b'someCommand')  # note: pyserial expects bytes under Python 3
line = ser.readline()  # read a '\n' terminated line
ser.close()
Ports are fully configurable with all the options you would expect of RS232 communications. See http://pyserial.sourceforge.net for further details and documentation.
pyserial is packaged in the Standalone distributions (Windows and Mac); for manual installations you should install it yourself.
8.19.1 Sound
PsychoPy currently supports a choice of three sound libraries: pyo, sounddevice or pygame. Select which will be used
via the audioLib preference. sound.Sound() will then refer to one of SoundDevice, SoundPyo or SoundPygame. This
can be set on a per-experiment basis by importing preferences, and setting the audioLib option to use.
• The pygame backend is the oldest and should always work without errors, but has the poorest performance.
Use it if latencies for your audio don’t matter.
• The pyo library is, in theory, the highest performer, but in practice it has often had issues (at least on macOS)
with crashes and freezing of experiments, or causing them not to finish properly. If those issues aren’t affecting
your studies then this could be the one for you.
• The sounddevice library looks like the way of the future. The performance appears to be good (although this
might be less so in cases where you have complex rendering being done as well, because it operates on the
same CPU core as the main experiment code). It’s newer than pyo and so more prone to bugs, and we
haven’t yet added microphone support to record your participants.
Sounds are actually generated by a variety of classes, depending on which “backend” you use (like pyo or sounddevice)
and these different backends can have slightly different attributes, as below.
The user should typically do:
from psychopy.sound import Sound
but the class that gets imported will then be an alias of one of the following.
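For example (the note name and duration are illustrative):

from psychopy import sound, core

beep = sound.Sound('A', secs=0.5)  # a 440 Hz tone, half a second long
beep.play()
core.wait(beep.getDuration())  # playback runs in its own thread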
stereo: True (= default, two channels left and right), False (one channel)
volume: loudness to play the sound, from 0.0 (silent) to 1.0 (max). Adjustments are not possible during
playback, only before.
loops [int] How many times to repeat the sound after it plays once. If loops == -1, the sound will repeat
indefinitely until stopped.
sampleRate (= 44100): if the psychopy.sound.init() function has been called or if another sound has already
been created then this argument will be ignored and the previous setting will be used
bits: has no effect for the pyo backend
hamming: boolean (default True) to indicate if the sound should be apodized (i.e., the onset and offset
smoothly ramped up from and back down to zero). The function apodize uses a Hanning window, but arguments
named ‘hamming’ are preserved so that existing code is not broken by the change from Hamming to
Hanning internally. Not applied to sounds from files.
play(loops=None, autoStop=True, log=True, when=None)
Starts playing the sound on an available channel.
loops [int] How many times to repeat the sound after it plays once. If loops == -1, the sound will repeat
indefinitely until stopped.
when: not used but included for compatibility purposes
For playing a sound file, you cannot specify the start and stop times when playing the sound, only when
creating the sound initially.
Playing a sound runs in a separate thread, i.e. your code won’t wait for the sound to finish before continuing.
If you need to wait until playback ends, use psychopy.core.wait(mySound.getDuration()). If you call play()
while something is already playing, the sounds will be played over each other.
stop(log=True)
Stops the sound immediately
octave: is only relevant if the value is a note name. Middle octave of a piano is 4. Most computers won’t output sounds in the bottom octave (1), and the top octave (8) is generally painful
sampleRate(=44100): if a sound has already been created, or if the psychopy.sound.init() function has been called, this argument will be ignored and the previous setting will be used
bits(=16): Pygame uses the same bit depth for all sounds once initialised
fadeOut(mSecs)
fades out the sound (when playing) over mSecs. Don’t know why you would do this in psychophysics but
it’s easy and fun to include as a possibility :)
getDuration()
Gets the duration of the current sound in secs
getVolume()
Returns the current volume of the sound (0.0:1.0)
play(fromStart=True, log=True, loops=None, when=None)
Starts playing the sound on an available channel.
Parameters
fromStart [bool] Not yet implemented.
log [bool] Whether or not to log the playback event.
loops [int] How many times to repeat the sound after it plays once. If loops == -1, the sound
will repeat indefinitely until stopped.
when: not used but included for compatibility purposes
Notes If no sound channels are available, it will not play and will return None. This runs off a
separate thread, i.e. your code won’t wait for the sound to finish before continuing. You need
to use a psychopy.core.wait() command if you want things to pause. If you call play() while
something is already playing, the sounds will be played over each other.
setVolume(newVol, log=True)
Sets the current volume of the sound (0.0:1.0)
stop(log=True)
Stops the sound immediately
8.20.1 psychopy.tools.colorspacetools
Function details
psychopy.tools.colorspacetools.dkl2rgb(dkl, conversionMatrix=None)
Convert from DKL color space (Derrington, Krauskopf & Lennie) to RGB.
Requires a conversion matrix, which will be generated from generic Sony Trinitron phosphors if not supplied
(note that this will not be an accurate representation of the color space unless you supply a conversion matrix).
usage:

rgb_Nx3 = dkl2rgb(dkl_Nx3, conversionMatrix)

psychopy.tools.colorspacetools.hsv2rgb(hsv_Nx3)
Convert from HSV color space to RGB.
usage:

rgb_Nx3 = hsv2rgb(hsv_Nx3)

Note that in some uses of HSV space the Hue component is given in radians or cycles (range 0:1). In this
version H is given in degrees (0:360).
Also note that the RGB output ranges -1:1, in keeping with other PsychoPy functions.
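For example (a sketch; the values are illustrative):

from psychopy.tools.colorspacetools import hsv2rgb

rgb = hsv2rgb([0.0, 1.0, 1.0])  # H=0 deg, full saturation and value -> red
# returns approximately [1., -1., -1.] in PsychoPy's -1:1 RGB range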
psychopy.tools.colorspacetools.lms2rgb(lms_Nx3, conversionMatrix=None)
Convert from cone space (Long, Medium, Short) to RGB.
Requires a conversion matrix, which will be generated from generic Sony Trinitron phosphors if not supplied
(note that you will not get an accurate representation of the color space unless you supply a conversion matrix)
usage:
psychopy.tools.colorspacetools.rgb2lms(rgb_Nx3, conversionMatrix=None)
Convert from RGB to cone space (LMS).
Requires a conversion matrix, which will be generated from generic Sony Trinitron phosphors if not supplied
(note that you will not get an accurate representation of the color space unless you supply a conversion matrix)
usage:
8.20.2 psychopy.tools.coordinatetools
Function details
psychopy.tools.coordinatetools.cart2pol(x, y, units=’deg’)
Convert from cartesian to polar coordinates.
usage: theta, radius = cart2pol(x, y, units=’deg’)
units refers to the units (rad or deg) for theta that should be returned
psychopy.tools.coordinatetools.cart2sph(z, y, x)
Convert from cartesian coordinates (x,y,z) to spherical (elevation, azimuth, radius). Output is in degrees.
usage: array3xN[el,az,rad] = cart2sph(array3xN[x,y,z]) OR elevation, azimuth, radius = cart2sph(x,y,z)
If working in DKL space, z = Luminance, y = S and x = LM
psychopy.tools.coordinatetools.pol2cart(theta, radius, units=’deg’)
Convert from polar to cartesian coordinates.
usage: x, y = pol2cart(theta, radius, units=’deg’)
psychopy.tools.coordinatetools.sph2cart(*args)
Convert from spherical coordinates (elevation, azimuth, radius) to cartesian (x,y,z).
usage: array3xN[x,y,z] = sph2cart(array3xN[el,az,rad]) OR x,y,z = sph2cart(elev, azim, radius)
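For example:

from psychopy.tools.coordinatetools import pol2cart, cart2pol

x, y = pol2cart(45, 1.0)        # theta in degrees by default
theta, radius = cart2pol(x, y)  # round trip: ~45.0, ~1.0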
8.20.3 psychopy.tools.filetools
psychopy.tools.filetools.toFile(filename, data)
Save data (of any sort) as a pickle file.
A simple wrapper around the standard-library pickle module.
psychopy.tools.filetools.fromFile(filename, encoding=’utf-8-sig’)
Load data from a pickle or JSON file.
Parameters encoding (str) – The encoding to use when reading a JSON file. This parameter
will be ignored for any other file type.
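For example (file name and contents illustrative):

from psychopy.tools.filetools import toFile, fromFile

toFile('params.pkl', {'reversals': [0.5, 0.4, 0.45]})
data = fromFile('params.pkl')  # round trip back to the dict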
psychopy.tools.filetools.mergeFolder(src, dst, pattern=None)
Merge a folder into another.
Existing files in dst folder with the same name will be overwritten. Non-existent files/folders will be created.
psychopy.tools.filetools.openOutputFile(fileName=None, append=False, fileCollisionMethod=’rename’, encoding=’utf-8-sig’)
Open an output file (or standard output) for writing.
Parameters
fileName [None, ‘stdout’, or str] The desired output file name. If None or stdout, return sys.stdout. Any other
string will be considered a filename.
append [bool, optional] If True, append data to an existing file; otherwise, overwrite it with new data. Defaults
to False, i.e. overwriting.
fileCollisionMethod [string, optional] How to handle filename collisions. Valid values are ‘rename’, ‘over-
write’, and ‘fail’. This parameter is ignored if append is set to True. Defaults to rename.
encoding [string, optional] The encoding to use when writing the file. This parameter will be ignored if append
is False and fileName ends with .psydat or .npy (i.e. if a binary file is to be written). Defaults to 'utf-8-sig'.
Returns
psychopy.tools.filetools.genDelimiter(fileName)
Return a delimiter based on a filename.
Parameters fileName [string] The filename to pick a delimiter for.
Returns
delim [string] A delimiter picked based on the supplied filename. This will be , if the filename extension is
.csv, and a tabulator character otherwise.
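For example:

genDelimiter('trials.csv')  # returns ','
genDelimiter('trials.txt')  # returns '\t'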
8.20.4 psychopy.tools.gltools
Function details
psychopy.tools.gltools.createProgram()
Create an empty program object for shaders.
Returns OpenGL program object handle retrieved from a glCreateProgram call.
Return type int
Examples
deleteObject(vertexShader)
deleteObject(fragmentShader)
You can install the program for use in the current rendering state by calling:
useProgram(myShader) # OR glUseProgram(myShader)
# set uniforms/attributes and start drawing here ...
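Putting these calls together, a minimal sketch of building a complete program (vertexSource and fragmentSource are assumed to be defined elsewhere as GLSL strings):

import pyglet.gl as GL
from psychopy.tools.gltools import (createProgram, compileShader,
                                    attachShader, linkProgram, deleteObject)

myProgram = createProgram()
vertexShader = compileShader(vertexSource, GL.GL_VERTEX_SHADER)
fragmentShader = compileShader(fragmentSource, GL.GL_FRAGMENT_SHADER)
attachShader(myProgram, vertexShader)
attachShader(myProgram, fragmentShader)
linkProgram(myProgram)
# once linked, the shader objects are no longer needed
deleteObject(vertexShader)
deleteObject(fragmentShader)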
psychopy.tools.gltools.compileShader(shaderSrc, shaderType)
Compile shader GLSL code and return a shader object. Shader objects can then be attached to programs and
made executable on their respective processors.
Parameters
• shaderSrc (str, list of str) – GLSL shader source code.
• shaderType (GLenum) – Shader program type (eg. GL_VERTEX_SHADER,
GL_FRAGMENT_SHADER, GL_GEOMETRY_SHADER, etc.)
Returns OpenGL shader object handle retrieved from a glCreateShader call.
Return type int
Examples
vertexSource = '''
attribute vec3 vertexPos;  // vertex attribute used in main() below

void main()
{
    gl_Position = vec4(vertexPos, 1.0);
}
'''
# compile it, specifying `GL_VERTEX_SHADER`
vertexShader = compileShader(vertexSource, GL.GL_VERTEX_SHADER)
attachShader(myProgram, vertexShader) # attach it to `myProgram`
psychopy.tools.gltools.deleteObject(obj)
Delete a shader or program object.
Parameters obj (int) – Shader or program object handle. Must have originated from a
createProgram(), compileShader(), glCreateProgram or glCreateShader call.
psychopy.tools.gltools.attachShader(program, shader)
Attach a shader to a program.
Parameters
• program (int) – Program handle to attach shader to. Must have originated from a
createProgram() or glCreateProgram call.
• shader (int) – Handle of shader object to attach. Must have originated from a
compileShader() or glCreateShader call.
psychopy.tools.gltools.detachShader(program, shader)
Detach a shader object from a program.
Parameters
• program (int) – Program handle to detach shader from. Must have originated from a
createProgram() or glCreateProgram call.
• shader (int) – Handle of shader object to detach. Must have been previously attached to
program.
psychopy.tools.gltools.linkProgram(program)
Link a shader program. Any attached shader objects will be made executable to run on associated GPU processor
units when the program is used.
Parameters program (int) – Program handle to link. Must have originated from a
createProgram() or glCreateProgram call.
Raises
• ValueError – Specified program handle is invalid.
• RuntimeError – Program failed to link. Log will be dumped to stderr.
psychopy.tools.gltools.validateProgram(program)
Check if the program can execute given the current OpenGL state.
Parameters program (int) – Handle of program to validate. Must have originated from a
createProgram() or glCreateProgram call.
psychopy.tools.gltools.useProgram(program)
Use a program object’s executable shader attachments in the current OpenGL rendering state.
In order to install the program object in the current rendering state, a program must have been successfully
linked by calling linkProgram() or glLinkProgram.
Parameters program (int) – Handle of program to use. Must have originated from a
createProgram() or glCreateProgram call and was successfully linked. Passing 0 or None
disables shader programs.
Examples
useProgram(myShader)
useProgram(0)
psychopy.tools.gltools.createProgramObjectARB()
Create an empty program object for shaders.
This creates an Architecture Review Board (ARB) program variant which is compatible with older GLSL ver-
sions and OpenGL coding practices (eg. fixed function) on some platforms. Use *ARB variants of shader helper
functions (eg. compileShaderObjectARB instead of compileShader) when working with these ARB program ob-
jects. This was included for legacy support of existing PsychoPy shaders. However, it is recommended that you
use createProgram() and follow more recent OpenGL design patterns for new code (if possible of course).
Returns OpenGL program object handle retrieved from a glCreateProgramObjectARB call.
Return type int
Examples
deleteObjectARB(vertexShader)
deleteObjectARB(fragmentShader)
useProgramObjectARB(myProgram)
psychopy.tools.gltools.compileShaderObjectARB(shaderSrc, shaderType)
Compile shader GLSL code and return a shader object. Shader objects can then be attached to programs and
made executable on their respective processors.
Parameters
• shaderSrc (str, list of str) – GLSL shader source code text.
• shaderType (GLenum) – Shader program type. Must be *_ARB enums
such as GL_VERTEX_SHADER_ARB, GL_FRAGMENT_SHADER_ARB,
GL_GEOMETRY_SHADER_ARB, etc.
Returns OpenGL shader object handle retrieved from a glCreateShaderObjectARB call.
Return type int
psychopy.tools.gltools.deleteObjectARB(obj)
Delete a program or shader object.
Parameters obj (int) – Shader or program object handle. Must have originated from a createProgramObjectARB(), compileShaderObjectARB(), glCreateProgramObjectARB or glCreateShaderObjectARB call.
psychopy.tools.gltools.attachObjectARB(program, shader)
Attach a shader object to a program.
Parameters
• program (int) – Program handle to attach shader to. Must have originated from a
createProgramObjectARB() or glCreateProgramObjectARB call.
• shader (int) – Handle of shader object to attach. Must have originated from a
compileShaderObjectARB() or glCreateShaderObjectARB call.
psychopy.tools.gltools.detachObjectARB(program, shader)
Detach a shader object from a program.
Parameters
• program (int) – Program handle to detach shader from. Must have originated from a
createProgramObjectARB() or glCreateProgramObjectARB call.
• shader (int) – Handle of shader object to detach. Must have been previously attached to
program.
psychopy.tools.gltools.linkProgramObjectARB(program)
Link a shader program object. Any attached shader objects will be made executable to run on associated GPU
processor units when the program is used.
Parameters program (int) – Program handle to link. Must have originated from a
createProgramObjectARB() or glCreateProgramObjectARB call.
Raises
• ValueError – Specified program handle is invalid.
• RuntimeError – Program failed to link. Log will be dumped to stderr.
psychopy.tools.gltools.validateProgramARB(program)
Check if the program can execute given the current OpenGL state. If validation fails, information from the driver
is dumped giving the reason.
Parameters program (int) – Handle of program object to validate. Must have originated from a
createProgramObjectARB() or glCreateProgramObjectARB call.
psychopy.tools.gltools.useProgramObjectARB(program)
Use a program object’s executable shader attachments in the current OpenGL rendering state.
In order to install the program object in the current rendering state, a program must have been successfully
linked by calling linkProgramObjectARB() or glLinkProgramObjectARB.
Parameters program (int) – Handle of program object to use. Must have originated from a
createProgramObjectARB() or glCreateProgramObjectARB call and was successfully
linked. Passing 0 or None disables shader programs.
Examples
useProgramObjectARB(myShader)
useProgramObjectARB(0)
Notes
Some drivers may support using glUseProgram for objects created by calling
createProgramObjectARB() or glCreateProgramObjectARB.
psychopy.tools.gltools.getInfoLog(obj)
Get the information log from a shader or program.
This retrieves a text log from the driver pertaining to the shader or program. For instance, a log can report shader
compiler output or validation results. The verbosity and formatting of the logs are platform-dependent, where
one driver may provide more information than another.
This function works with both standard and ARB program object variants.
Parameters obj (int) – Program or shader to retrieve a log from. If a shader, the handle must have
originated from a compileShader(), glCreateShader, compileShaderObjectARB() or glCreateShaderObjectARB
call. If a program, the handle must have come from a createProgram(), createProgramObjectARB(),
glCreateProgram or glCreateProgramObjectARB call.
Returns Information log data. Logs can be empty strings if the driver has no information available.
Return type str
psychopy.tools.gltools.getUniformLocations(program, builtins=False)
Get uniform names and locations from a given shader program object.
This function works with both standard and ARB program object variants.
Parameters
• program (int) – Handle of program to retrieve uniforms. Must have originated from a createProgram(), createProgramObjectARB(), glCreateProgram or glCreateProgramObjectARB call.
• builtins (bool, optional) – Include built-in GLSL uniforms (eg.
gl_ModelViewProjectionMatrix). Default is False.
Returns Uniform names and locations.
Return type dict
Examples
Get the location of the uniform modelMatrix in myShader and set its value using a NumPy array:
uniforms = getUniformLocations(myShader)
useProgram(myShader)
glUniformMatrix4fv(
uniforms['modelMatrix'],
1,
GL_TRUE, # transpose, since Numpy matrices are row-major in memory
modelMatrix)
You can check if a shader has a uniform before setting it. This allows for the same sub-routine to flexibly handle
different shader types, as long as the uniform variables have the same names and types:
# get the uniform names and locations. In the shader, we have defined
# `uniform vec4 specularColor`.
uniforms = getUniformLocations(myShader)
hasSpecularColor = 'specularColor' in uniforms.keys()
if hasSpecularColor:
glUniform4f(uniforms['specularColor'],
1.0, 1.0, 1.0, 1.0)
psychopy.tools.gltools.getAttribLocations(program, builtins=False)
Get attribute names and locations from the specified program object.
This allows you to set vertex attribute pointers by name instead of by index, allowing indices to vary between
shaders. Furthermore, it allows for checking if a shader has a particular attribute.
This function works with both standard and ARB program object variants.
Parameters
• program (int) – Handle of program to retrieve attributes. Must have originated from a createProgram(), createProgramObjectARB(), glCreateProgram or glCreateProgramObjectARB call.
• builtins (bool, optional) – Include built-in GLSL attributes (eg. gl_Vertex). De-
fault is False.
Returns Attribute names and locations.
Return type dict
Examples
Get the attribute locations in the shader and use them to specify vertex attribute pointers within a vertex array
(VAO) context:
# Get vertex attribute locations in our shader (`myShader`). Within the
# shader we have attributes named `pos`, `textureCoords` and `normals`.
attribLocations = getAttribLocations(myShader)
# create a VAO
vaoId = GLuint()
glGenVertexArrays(1, byref(vaoId))
glBindVertexArray(vaoId)
# bind the buffer storing vertex attribute, here they are interleaved
glBindBuffer(GL.GL_ARRAY_BUFFER, vboId)
# use the attribute index for `pos` to bind the vertex position buffer
attrib = attribLocations['pos']
glVertexAttribPointer(attrib, 3, GL_FLOAT, GL_FALSE, posStride, 0)
glEnableVertexAttribArray(attrib)
attrib = attribLocations['textureCoords']
glVertexAttribPointer(
attrib, 2, GL_FLOAT, GL_FALSE, texCoordStride, texCoordOffset)
glEnableVertexAttribArray(attrib)
attrib = attribLocations['normals']
glVertexAttribPointer(
attrib, 3, GL_FLOAT, GL_FALSE, normStride, normOffset)
glEnableVertexAttribArray(attrib)
glBindVertexArray(0) # unbind
If attribute names are consistent between shaders, you should be able to reuse the same code above, even if
the vertex attribute layout locations differ between shaders. In some cases the shader may not accept one or
more of the available attributes (e.g. texture coordinates). Instead of writing multiple sub-routines
for building VAOs to handle these permutations, simply check for attribute membership in the data returned by
getAttribLocations:
attribLocations = getAttribLocations(myShader)
hasTexCoords = 'textureCoords' in attribLocations.keys()
psychopy.tools.gltools.createFBO(attachments=())
Create a Framebuffer Object.
Parameters attachments (list or tuple of tuple) – Optional attachments to initialize
the Framebuffer with. Attachments are specified as a list of tuples. Each tuple must contain
an attachment point (e.g. GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT) and a
framebuffer-attachable buffer descriptor (e.g. TexImage2D or Renderbuffer).
Notes
Examples
# attach images
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo.id)
attach(GL.GL_COLOR_ATTACHMENT0, colorTex)
attach(GL.GL_DEPTH_ATTACHMENT, depthRb)
attach(GL.GL_STENCIL_ATTACHMENT, depthRb)
# or attach(GL.GL_DEPTH_STENCIL_ATTACHMENT, depthRb)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0)
psychopy.tools.gltools.attach(attachPoint, imageBuffer)
Attach an image to a specified attachment point on the presently bound FBO.
Parameters
• attachPoint (int) – Attachment point for imageBuffer (e.g. GL.GL_COLOR_ATTACHMENT0).
• imageBuffer (TexImage2D or Renderbuffer) – Framebuffer-attachable buffer descriptor.
Returns
Return type None
Examples
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo)
attach(GL.GL_COLOR_ATTACHMENT0, colorTex)
attach(GL.GL_DEPTH_STENCIL_ATTACHMENT, depthRb)
GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, lastBoundFbo)
psychopy.tools.gltools.isComplete()
Check if the currently bound framebuffer is complete.
Returns
Return type bool
psychopy.tools.gltools.deleteFBO(fbo)
Delete a framebuffer.
Returns
Return type None
psychopy.tools.gltools.blitFBO(srcRect, dstRect=None, filter=9729)
Copy a block of pixels between framebuffers via blitting. Read and draw framebuffers must be bound prior to
calling this function. Beware, the scissor box and viewport are changed to dstRect when this is called.
Parameters
• srcRect (list of int) – List specifying the top-left and bottom-right coordinates of the
region to copy from (<X0>, <Y0>, <X1>, <Y1>).
• dstRect (list of int or None) – List specifying the top-left and bottom-right coordi-
nates of the region to copy to (<X0>, <Y0>, <X1>, <Y1>). If None, srcRect is used for
dstRect.
• filter (int) – Interpolation method to use if the image is stretched, default is
GL_LINEAR, but can also be GL_NEAREST.
Returns
Examples
gltools.blitFBO((0,0,800,600), (0,0,800,600))
psychopy.tools.gltools.useFBO(fbo)
Context manager for Framebuffer Object bindings. This function yields the framebuffer name as an integer.
Parameters fbo (int or Framebuffer) – OpenGL Framebuffer Object name/ID or descriptor.
Yields int – OpenGL name of the framebuffer bound in the context.
Returns
Return type None
Examples
...
# create a new FBO, but we have no idea what the currently bound FBO is
fbo = createFBO()
Notes
The ‘userData’ field of the returned descriptor is a dictionary that can be used to store arbitrary data associated
with the buffer.
psychopy.tools.gltools.deleteRenderbuffer(renderBuffer)
Free the resources associated with a renderbuffer. This invalidates the renderbuffer’s ID.
Returns
Return type None
psychopy.tools.gltools.createTexImage2D(width, height, target=3553, level=0, internalFormat=32856, pixelFormat=6408, dataType=5126, data=None, unpackAlignment=4, texParameters=())
Create a 2D texture in video memory. This can only create a single 2D texture with targets GL_TEXTURE_2D
or GL_TEXTURE_RECTANGLE.
Parameters
• width (int) – Texture width in pixels.
• height (int) – Texture height in pixels.
• target (int) – The target texture should only be either GL_TEXTURE_2D or
GL_TEXTURE_RECTANGLE.
• level (int) – LOD number of the texture, should be 0 if GL_TEXTURE_RECTANGLE
is the target.
• internalFormat (int) – Internal format for texture data (e.g. GL_RGBA8,
GL_R11F_G11F_B10F).
• pixelFormat (int) – Pixel data format (e.g. GL_RGBA, GL_DEPTH_STENCIL)
• dataType (int) – Data type for pixel data (e.g. GL_FLOAT, GL_UNSIGNED_BYTE).
• data (ctypes or None) – Ctypes pointer to image data. If None is specified, the texture
will be created but pixel data will be uninitialized.
• unpackAlignment (int) – Alignment requirements of each row in memory. Default is
4.
• texParameters (list of tuple of int) – Optional texture parameters specified as a list of tuples. These values are passed to ‘glTexParameteri’. Each tuple must contain a parameter name and value. For example, texParameters=[(GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR), (GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR)]
Notes
The ‘userData’ field of the returned descriptor is a dictionary that can be used to store arbitrary data associated
with the texture.
Previous textures are unbound after calling ‘createTexImage2D’.
Examples
# empty texture
textureDesc = createTexImage2D(1024, 1024, internalFormat=GL.GL_RGBA8)
# load texture data from an image file using Pillow and NumPy
from PIL import Image
import numpy as np
im = Image.open(imageFile)  # 8 bpp!
im = im.transpose(Image.FLIP_TOP_BOTTOM)  # OpenGL origin is at bottom
im = im.convert("RGBA")
pixelData = np.array(im)  # keep the array; take the ctypes pointer below
width = pixelData.shape[1]
height = pixelData.shape[0]
textureDesc = gltools.createTexImage2D(
    width,
    height,
    internalFormat=GL.GL_RGBA,
    pixelFormat=GL.GL_RGBA,
    dataType=GL.GL_UNSIGNED_BYTE,
    data=pixelData.ctypes,  # ctypes pointer to the image data
    unpackAlignment=1,
    texParameters=[(GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR),
                   (GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)])
GL.glBindTexture(GL.GL_TEXTURE_2D, textureDesc.id)
Notes
Creating vertex buffers is a computationally expensive operation. Be sure to load all resources before entering
your experiment’s main loop.
Examples
# vertices of a triangle
verts = [ 1.0,  1.0, 0.0,   # v0
          0.0, -1.0, 0.0,   # v1
         -1.0,  1.0, 0.0]   # v2

# create a VBO descriptor from the vertex data (call signature assumed)
vboDesc = createVBO(verts)

GL.glBindBuffer(GL.GL_ARRAY_BUFFER, vboDesc.id)
GL.glVertexPointer(vboDesc.vertexSize, vboDesc.dtype, 0, None)
GL.glEnableClientState(vboDesc.bufferType)
GL.glDrawArrays(GL.GL_TRIANGLES, 0, vboDesc.indices)
GL.glFlush()
psychopy.tools.gltools.createVAO(vertexBuffers, indexBuffer=None)
Create a Vertex Array Object (VAO) with specified Vertex Buffer Objects. VAOs store buffer binding states,
reducing CPU overhead when drawing objects with vertex data stored in VBOs.
Parameters
• vertexBuffers (list of tuple) – Specify vertex attributes VBO descriptors apply
to.
• indexBuffer (list of int, optional) – Index array of elements. If provided, an element
array is created from the array. The returned descriptor will have isIndexed=True. This
requires the VAO be drawn with glDrawElements instead of glDrawArrays.
Returns A descriptor with vertex array information.
Return type VertexArrayObject
Examples
drawVAO(vaoDesc, GL.GL_TRIANGLES)
psychopy.tools.gltools.deleteVBO(vbo)
Delete a Vertex Buffer Object (VBO).
Returns
Return type None
psychopy.tools.gltools.deleteVAO(vao)
Delete a Vertex Array Object (VAO). This does not delete array buffers bound to the VAO.
Returns
Return type None
psychopy.tools.gltools.createMaterial(params=(), textures=(), face=1032)
Create a new material.
Parameters
• params (list of tuple, optional) – List of material modes and values. Each mode is assigned a value as (mode, color). Modes can be GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_EMISSION, GL_SHININESS or GL_AMBIENT_AND_DIFFUSE. Colors must be a tuple of 4 floats which specify reflectance values for each RGBA component. The value of GL_SHININESS should be a single float. If no values are specified, an empty material will be created.
• textures (list of tuple, optional) – List of texture units and TexImage2D descriptors. These will be written to the ‘textures’ field of the returned descriptor. For example, [(GL.GL_TEXTURE0, texDesc0), (GL.GL_TEXTURE1, texDesc1)]. The number of texture units per-material is limited by GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
• face (int, optional) – Faces to apply material to. Values can be GL_FRONT_AND_BACK, GL_FRONT and GL_BACK. The default is GL_FRONT_AND_BACK.
Returns A descriptor with material properties.
Return type Material
Examples
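A sketch creating the gold material used below (the reflectance values are the classic OpenGL gold values, shown here purely for illustration):

gold = createMaterial([
    (GL.GL_AMBIENT, (0.247, 0.199, 0.075, 1.0)),
    (GL.GL_DIFFUSE, (0.752, 0.606, 0.226, 1.0)),
    (GL.GL_SPECULAR, (0.628, 0.556, 0.366, 1.0)),
    (GL.GL_SHININESS, 51.2)])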
useMaterial(gold)
drawVAO( ... ) # all meshes will be gold
useMaterial(None) # turn off material when done
Create a red plastic material, but define reflectance and shine later:
red_plastic = createMaterial()
psychopy.tools.gltools.useMaterial(material, useTextures=True)
Use a material for subsequent vertex draws.
Parameters
• material (Material or None) – Material descriptor to use. Default material properties
are set if None is specified. This is equivalent to disabling materials.
• useTextures (bool) – Enable textures. Textures specified in a material descriptor’s
‘texture’ attribute will be bound and their respective texture units will be enabled. Note,
when disabling materials, the value of useTextures must match the previous call. If there are
no textures attached to the material, useTexture will be silently ignored.
Returns
Return type None
Notes
1. If a material mode has a value of None, a color with all components 0.0 will be assigned.
2. Material colors and shininess values are accessible from shader programs after calling ‘useMate-
rial’. Values can be accessed via built-in ‘gl_FrontMaterial’ and ‘gl_BackMaterial’ structures (e.g.
gl_FrontMaterial.diffuse).
Examples
useMaterial(metalMaterials.gold)
drawVAO( ... ) # all meshes drawn will be gold
useMaterial(None) # turn off material when done
psychopy.tools.gltools.createLight(params=())
Create a point light source.
psychopy.tools.gltools.useLights(lights, setupOnly=False)
Use specified lights in successive rendering operations. All lights will be transformed using the present modelview matrix.
Parameters
• lights (List of Light or None) – Descriptor of a light source. If None, lighting is
disabled.
• setupOnly (bool, optional) – Do not enable lighting or lights. Specify True if lighting
is being computed via fragment shaders.
psychopy.tools.gltools.setAmbientLight(color)
Set the global ambient lighting for the scene when lighting is enabled. This is equivalent to GL.glLightModelfv(GL.GL_LIGHT_MODEL_AMBIENT, color) and does not contribute to the GL_MAX_LIGHTS limit.
Parameters color (tuple) – Ambient lighting RGBA intensity for the whole scene.
Notes
If unset, the default value is (0.2, 0.2, 0.2, 1.0) when GL_LIGHTING is enabled.
psychopy.tools.gltools.loadObjFile(objFile)
Load a Wavefront OBJ file (*.obj).
Parameters objFile (str) – Path to the *.OBJ file to load.
Returns
Return type WavefrontObjModel
Notes
1. This importer should work fine for most sanely generated files. Export your model with Blender for best
results, even if you used some other package to create it.
2. The model must be triangulated, quad faces are not supported.
Examples
# create a light source first (parameters illustrative), then use it
light0 = createLight([(GL.GL_DIFFUSE, (1.0, 1.0, 1.0, 1.0))])
useLights(light0)
psychopy.tools.gltools.loadMtlFile(mtlFilePath, texParameters=None)
Load a material library (*.mtl).
psychopy.tools.gltools.getIntegerv(parName)
Get a single integer parameter value, return it as a Python integer.
Parameters parName (int) – OpenGL property enum to query (e.g. GL_MAJOR_VERSION).
Returns
Return type int
psychopy.tools.gltools.getFloatv(parName)
Get a single float parameter value, return it as a Python float.
Parameters parName (int) – OpenGL property enum to query.
Returns
Return type float
psychopy.tools.gltools.getString(parName)
Get a single string parameter value, return it as a Python UTF-8 string.
Parameters parName (int) – OpenGL property enum to query (e.g. GL_VENDOR).
Returns
Return type str
psychopy.tools.gltools.getOpenGLInfo()
Get general information about the OpenGL implementation on this machine. This should provide a consistent
means of doing so regardless of the OpenGL interface we are using.
Returns a dictionary with the following fields:
vendor, renderer, version, majorVersion, minorVersion, doubleBuffer, maxTextureSize, stereo,
maxSamples, extensions
Supported extensions are returned as a list in the ‘extensions’ field. You can check if a platform supports an
extension by checking the membership of the extension name in that list.
Returns
Return type OpenGLInfo
Examples
Attach FBO images using a context. This automatically returns to the previous FBO binding state when complete.
This is useful if you don’t know the current binding state:
with useFBO(fbo) as fb:
    attach(GL.GL_COLOR_ATTACHMENT0, colorTex)
    attach(GL.GL_DEPTH_ATTACHMENT, depthRb)
    attach(GL.GL_STENCIL_ATTACHMENT, depthRb)
    GL.glBindFramebuffer(GL.GL_FRAMEBUFFER, fb.id)
Deleting a framebuffer when done with it. This invalidates the framebuffer’s ID and makes it available for use:
deleteFBO(fbo)
8.20.5 psychopy.tools.imagetools
Function details
psychopy.tools.imagetools.array2image(a)
Takes an array and returns an image object (PIL)
psychopy.tools.imagetools.image2array(im)
Takes an image object (PIL) and returns a numpy array
psychopy.tools.imagetools.makeImageAuto(inarray)
Combines float_uint8 and array2image operations, i.e. scales a numeric array from -1:1 to 0:255 and converts
it to PIL image format
8.20.6 psychopy.tools.mathtools
Assorted math functions for working with vectors, matrices, and quaternions. These functions are intended to provide basic support for common mathematical operations associated with displaying stimuli (e.g. animation, posing, rendering, etc.)
For tools related to view transformations, see viewtools.
Most functions listed here are very fast; however, they are optimized to work on arrays of values (vectorization).
Calling functions repeatedly (for instance within a loop) should be avoided, as the CPU overhead associated with each
function call (not to mention the loop itself) can be considerable.
For example, one may want to normalize a bunch of randomly generated vectors by calling normalize() on each
row:
# don't do this!
for i in range(1000):
vn[i, :] = normalize(v[i, :])
The same operation is completed in considerably less time by passing the whole array to the function like so:

vn = normalize(v)  # all rows normalized in one vectorized call
Specifying an output array to out will improve performance by reducing overhead associated with allocating memory
to store the result (functions do this automatically if out is not provided). However, out should only be provided if the
output array is reused multiple times. Furthermore, the function still returns a value if out is provided, but the returned
value is a reference to out, not a copy of it. If out is not provided, the function will return the result with a freshly
allocated array.
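For instance (a sketch; the array shapes are illustrative):

import numpy as np
from psychopy.tools.mathtools import normalize

v = np.random.uniform(-1.0, 1.0, (1000, 3))
out = np.zeros_like(v)
for _ in range(10):
    normalize(v, out=out)  # writes into `out` each pass, avoiding reallocation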
Data Types
Sub-routines used by the functions here will perform arithmetic using 64-bit floating-point precision unless otherwise
specified via the dtype argument. This functionality is helpful in certain applications where input and output arrays
demand a specific type (e.g. when working with data passed to and from OpenGL functions).
If a dtype is specified, input arguments will be coerced to match that type and all floating-point arithmetic will use the
precision of the type. If input arrays have the same type as dtype, they will automatically pass-through without being
recast as a different type. As a performance consideration, all input arguments should have matching types and dtype
set accordingly.
Most functions have an out argument, where one can specify an array to write values to. The value of dtype is ignored
if out is provided, and all input arrays will be converted to match the dtype of out (if not already). This ensures that
the type of the destination array is used.
Various math functions for working with vectors, matrices, and quaternions.
Overview
Details
psychopy.tools.mathtools.length(v, out=None, dtype=None)
Get the length of a vector v.
dtype [dtype or str, optional] Data type for arrays, can either be ‘float32’ or ‘float64’. If None is specified, the
data type of out is used. If out is not provided, ‘float64’ is used by default.
Returns Length of vector v.
Return type float or ndarray
psychopy.tools.mathtools.normalize(v, out=None, dtype=None)
Normalize a vector or quaternion.
v [array_like] Vector to normalize, can be Nx2, Nx3, or Nx4. If a 2D array is specified, rows are treated as
separate vectors.
out [ndarray, optional] Optional output array. Must be same shape and dtype as the expected output if out was
not specified.
dtype [dtype or str, optional] Data type for arrays, can either be ‘float32’ or ‘float64’. If None is specified, the
data type is inferred by out. If out is not provided, the default is ‘float64’.
Notes
Examples
Normalize a vector:

vn = normalize([1., 2., 3.])

The normalize function is vectorized. It’s considerably faster to normalize large arrays of vectors than to call
normalize separately for each one:

# don't do this!
for i in range(1000):
    vn[i, :] = normalize(v[i, :])

# do this instead
vn = normalize(v)
psychopy.tools.mathtools.orthogonalize(v, n, out=None, dtype=None)
Orthogonalize a vector v relative to a normal vector n.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Orthogonalized vector v relative to normal vector n.
Return type ndarray
Warning: If v and n are the same, the direction of the perpendicular vector is indeterminate. The resulting
vector is degenerate (all zeros).
psychopy.tools.mathtools.dot(v0, v1, out=None, dtype=None)
Compute the dot product of two vectors (or arrays of vectors).
Parameters
• v1 (v0,) – Vector(s) to compute dot products of (e.g. [x, y, z]). v0 must have equal or
fewer dimensions than v1.
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Dot product(s) of v0 and v1.
Return type ndarray
psychopy.tools.mathtools.cross(v0, v1, out=None, dtype=None)
Compute the cross product of two vectors (or arrays of vectors).
Parameters
• v1 (v0,) – Vector(s) in form [x, y, z] or [x, y, z, 1].
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Cross product of v0 and v1.
Return type ndarray
Notes
• If input vectors are 4D, the last value of cross product vectors is always set to one.
• If input vectors v0 and v1 are Nx3 and out is Nx4, the cross product is computed and the last column of
out is filled with ones.
Examples
a = normalize([1, 2, 3])
b = normalize([3, 2, 1])
c = cross(a, b)
If input arguments are 2D, the function returns the cross products of corresponding rows:
If a 1D and 2D vector are specified, the cross product of each row of the 2D array and the 1D array is returned:
Examples
Parameters
• v1 (v0,) – Vectors to compute the distance between.
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Distance between vectors v0 and v1.
Return type ndarray
Parameters
• v (array_like) – Direction vector [x, y, z].
• point (array_like) – Point(s) to compute angle to from vector v.
• degrees (bool, optional) – Return the resulting angles in degrees. If False, angles
will be returned in radians. Default is True.
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Distance between vectors v0 and v1.
Return type ndarray
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Surface normal of triangle tri.
Return type ndarray
Examples
Find the normals for multiple triangles, and put results in a pre-allocated array:
vertices = [[[-1., 0., 0.], [0., 1., 0.], [1, 0, 0]], # 2x3x3
[[1., 0., 0.], [0., 1., 0.], [-1, 0, 0]]]
normals = np.zeros((2, 3)) # normals from two triangles triangles
surfaceNormal(vertices, out=normals)
Examples
Computing the bitangents for two triangles from vertex and texture coordinates (UVs):
Examples
normals = surfaceNormal(vertices)
tangents = surfaceTangent(vertices, uv)
bitangents = cross(normals, tangents) # or use `surfaceBitangent`
Orthogonalize a surface tangent with a vertex normal vector to get the vertex tangent and bitangent vectors:
Ensure computed vectors have the same handedness, if not, flip the tangent vector (important for applications
like normal mapping):
Examples
Compute a vertex normal from the face normals of the triangles it belongs to:
normals = [[1., 0., 0.], [0., 1., 0.]] # adjacent face normals
vertexNorm = vertexNormal(normals)
Parameters
• q0 (array_like) – Initial quaternion in form [x, y, z, w] where w is real and x, y, z are
imaginary components.
• q1 (array_like) – Final quaternion in form [x, y, z, w] where w is real and x, y, z are
imaginary components.
Examples
q0 = quatFromAxisAngle(90.0, degrees=True)
q1 = quatFromAxisAngle(-90.0, degrees=True)
# halfway between 90 and -90 is 0.0 or quaternion [0. 0. 0. 1.]
qr = slerp(q0, q1, 0.5)
Returns Axis and angle of quaternion in form ([ax, ay, az], angle). If degrees is True, the angle
returned is in degrees, radians if False.
Return type tuple
Examples
Examples
• squared (bool, optional) – If True return the squared magnitude. If you are just
checking if a quaternion is normalized, the squared magnitude will suffice to avoid the
square root operation.
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Magnitude of quaternion q.
Return type float or ndarray
psychopy.tools.mathtools.multQuat(q0, q1, out=None, dtype=None)
Multiply quaternion q0 and q1.
The orientation of the returned quaternion is the combination of the input quaternions.
Parameters
• q1 (q0,) – Quaternions to multiply in form [x, y, z, w] where w is real and x, y, z are
imaginary components. If 2D (Nx4) arrays are specified, quaternions are multiplied row-
wise between each array.
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Combined orientations of q0 amd q1.
Return type ndarray
Notes
Examples
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Inverse of quaternion q.
Return type ndarray
Examples
Show that multiplying a quaternion by its inverse returns an identity quaternion where [x=0, y=0, z=0, w=1]:
angle = 90.0
axis = [0., 0., -1.]
q = quatFromAxisAngle(axis, angle, degrees=True)
qinv = invertQuat(q)
qr = multQuat(q, qinv)
qi = np.array([0., 0., 0., 1.]) # identity quaternion
print(np.allclose(qi, qr)) # True
Notes
Examples
Specifying an array to q where each row is a quaternion transforms points in corresponding rows of points:
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns 4x4 scaling matrix in row-major order. Will be the same array as out if specified, if not, a
new array will be allocated.
Return type ndarray
Notes
The data types of input matrices are coerced to match that of out or dtype if out is None. For performance
reasons, it is best that all arrays passed to this function have matching data types.
Parameters
• matrices (list or tuple) – List of matrices to concatenate. All matrices must be
4x4.
• out (ndarray, optional) – Optional output array. Must be same shape and dtype as
the expected output if out was not specified.
• dtype (dtype or str, optional) – Data type for arrays, can either be ‘float32’ or
‘float64’. If None is specified, the data type is inferred by out. If out is not provided, the
default is ‘float64’.
Returns Concatenation of input matrices as a 4x4 matrix in row-major order.
Return type ndarray
Examples
Create an SRT (scale, rotate, and translate) matrix to convert model-space coordinates to world-space:
Create a model-view matrix from a world-space pose represented by an orientation (quaternion) and position
(vector). The resulting matrix will transform model-space coordinates to eye-space:
# modelview matrix
MV = concatenate([M, V])
You can put the created matrix in the OpenGL matrix stack as shown below. Note that the matrix must have a
32-bit floating-point data type and needs to be loaded transposed since OpenGL takes matrices in column-major
order:
GL.glMatrixMode(GL.GL_MODELVIEW)
# pyglet
MV = np.asarray(MV, dtype='float32') # must be 32-bit float!
ptrMV = MV.ctypes.data_as(ctypes.POINTER(ctypes.c_float))
GL.glLoadTransposeMatrixf(ptrMV)
# PyOpenGL
MV = np.asarray(MV, dtype='float32')
GL.glLoadTransposeMatrixf(MV)
Furthermore, you can convert a point from model-space to homogeneous clip-space by concatenating the pro-
jection, view, and model matrices:
Examples
Extract the 3x3 rotation sub-matrix from a 4x4 matrix and apply it to points. Here the result is written to a
pre-allocated array:
Examples
You can get the same results as the previous example using a matrix by doing the following:
If you are defining transformations with quaternions and coordinates, you can skip the costly matrix creation
process by using transform.
Notes
• In performance tests, applyMatrix is noticeably faster than transform for very large arrays, however this is
only true if you are applying the same transformation to all points.
• If the input arrays for points or pos is Nx4, the last column is ignored.
8.20.7 psychopy.tools.monitorunittools
convertToPix(vertices, pos, units, win) Takes vertices and position, combines and converts to
pixels from any unit
cm2deg(cm, monitor[, correctFlat]) Convert size in cm to size in degrees for a given Monitor
object
cm2pix(cm, monitor) Convert size in cm to size in pixels for a given Monitor
object.
deg2cm(degrees, monitor[, correctFlat]) Convert size in degrees to size in pixels for a given Mon-
itor object.
deg2pix(degrees, monitor[, correctFlat]) Convert size in degrees to size in pixels for a given Mon-
itor object
pix2cm(pixels, monitor) Convert size in pixels to size in cm for a given Monitor
object
pix2deg(pixels, monitor[, correctFlat]) Convert size in pixels to size in degrees for a given Mon-
itor object
Function details
8.20.8 psychopy.tools.plottools
8.20.9 psychopy.tools.rifttools
8.20.10 psychopy.tools.typetools
psychopy.tools.typetools.uint8_float(inarray)
Converts arrays, lists, tuples and UINTs ranging 0:255 into an array of floats ranging -1:1
>>> uint8_float(0)
-1.0
>>> uint8_float(128)
0.0
psychopy.tools.typetools.float_uint16(inarray)
Converts arrays, lists, tuples and floats ranging -1:1 into an array of Uint16s ranging 0:2^16
>>> float_uint16(-1)
0
>>> float_uint16(0)
32768
8.20.11 psychopy.tools.unittools
Parameters
• x (array_like) – Input array in degrees.
• out (ndarray, None, or tuple of ndarray and None, optional) – A
location into which the result is stored. If provided, it must have a shape that the inputs
broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible
only as a keyword argument) must have length equal to the number of outputs.
• where (array_like, optional) – Values of True indicate to calculate the ufunc at
that position, values of False indicate to leave the value in the output alone.
• **kwargs – For other keyword-only arguments, see the ufunc docs.
Returns y – The corresponding radian values. This is a scalar if x is a scalar.
Return type ndarray
See also:
Examples
Examples
8.20.12 psychopy.tools.viewtools
Tools for working with view projections for 2- and 3-D rendering.
Function details
• eyeOffset (float) – Half the inter-ocular separation (i.e. the horizontal distance be-
tween the nose and center of the pupil) in meters. If eyeOffset is 0.0, a symmetric frustum
is returned.
• nearClip (float) – Distance to the near clipping plane in meters from the viewer.
Should be at least less than scrDist.
• farClip (float) – Distance to the far clipping plane from the viewer in meters. Must be
>nearClip.
Returns Namedtuple with frustum parameters. Can be directly passed to glFrustum (e.g. glFrus-
tum(*f)).
Return type Frustum
Notes
• The view point must be transformed for objects to appear correctly. Offsets in the X-direction must be
applied +/- eyeOffset to account for inter-ocular separation. A transformation in the Z-direction must be
applied to accountfor screen distance. These offsets MUST be applied to the GL_MODELVIEW matrix,
not the GL_PROJECTION matrix! Doing so may break lighting calculations.
Examples
# make sure your view matrix accounts for the screen distance and eye offsets!
psychopy.tools.viewtools.generalizedPerspectiveProjection(posBottomLeft, posBot-
tomRight, posTopLeft,
eyePos, nearClip=0.01,
farClip=100.0)
Generalized derivation of projection and view matrices based on the physical configuration of the display system.
This implementation is based on Robert Kooima’s ‘Generalized Perspective Projection’ method1 .
Parameters
• posBottomLeft (list of float or ndarray) – Bottom-left 3D coordinate of
the screen in meters.
• posBottomRight (list of float or ndarray) – Bottom-right 3D coordinate
of the screen in meters.
• posTopLeft (list of float or ndarray) – Top-left 3D coordinate of the screen
in meters.
• eyePos (list of float or ndarray) – Coordinate of the eye in meters.
• nearClip (float) – Near clipping plane distance from viewer in meters.
• farClip (float) – Far clipping plane distance from viewer in meters.
Returns The 4x4 projection and view matrix.
Return type tuple
See also:
Notes
• The resulting projection frustums are off-axis relative to the center of the display.
• The returned matrices are row-major. Values are floats with 32-bits of precision stored as a contiguous
(C-order) array.
References
Examples
Notes
• The returned matrix is row-major. Values are floats with 32-bits of precision stored as a contiguous (C-
order) array.
Notes
• The returned matrix is row-major. Values are floats with 32-bits of precision stored as a contiguous (C-
order) array.
Notes
• The returned matrix is row-major. Values are floats with 32-bits of precision stored as a contiguous (C-
order) array.
Notes
• The point is not visible, falling outside of the viewing frustum, if the returned coordinates fall outside of
-1 and 1 along any dimension.
• In the rare instance the point falls directly on the eye in world space where the frustum converges to a point
(singularity), the divisor will be zero during perspective division. To avoid this, the divisor is ‘bumped’ to
1e-5.
• This function assumes the display area is rectilinear. Any distortion or warping applied in normalized
device or viewport space is not considered.
Examples
8.21.1 Overview
Hardware voice-keys are used to detect and signal acoustic properties in real time, e.g., the onset of a spoken word
in word-naming studies. PsychoPy provides two virtual voice-keys, one for detecting vocal onsets and one for vocal
offsets.
All PsychoPy voice-keys can take their input from a file or from a microphone. Event detection is typically quite
similar is both cases.
The base class is very general, and is best thought of as providing a toolkit for developing a wide range of custom
voice-keys. It would be possible to develop a set of voice-keys, each optimized for detecting different initial phonemes.
Band-pass filtered data and zero-crossing counts are computed in real-time every 2ms.
8.21.2 Voice-Keys
_set_defaults()
Set remaining defaults, initialize lists to hold summary stats
_set_signaler()
Set the signaler to be called by trip()
_set_source()
Data source: file_in, array, or microphone
_set_tables()
Set up the pyo tables (allocate memory, etc).
One source -> three pyo tables: chunk=short, whole=all, baseline. triggers fill tables from self._source;
make triggers in .start()
detect()
Trip if recent audio power is greater than the baseline.
join(sec=None)
Sleep for sec or until end-of-input, and then call stop().
save(ftype=”, dtype=’int16’)
Save new data to file, return the size of the saved file (or None).
The file format is inferred from the filename extension, e.g., flac. This will be overridden by the ftype if
one is provided; defaults to wav if nothing else seems reasonable. The optional dtype (e.g., int16) can be
any of the sample types supported by pyo.
slippage
Diagnostic – Ratio of the actual (elapsed) time to the ideal time.
Ideal ratio = 1 = sample-perfect acquisition of msPerChunk, without any gaps between or within chunks.
1. / slippage is the proportion of samples contributing to chunk stats.
start(silent=False)
Start reading and processing audio data from a file or microphone.
started
Boolean property, whether .start() has been called.
stop()
Stop a voice-key in progress.
Ends and saves the recording if using microphone input.
wait_for_event(plus=0)
Start, join, and wait until the voice-key trips, or it times out.
Optionally wait for some extra time, plus, before calling stop().
class psychopy.voicekey.OffsetVoiceKey(sec=10, file_out=”, file_in=”, delay=0.3,
**kwargs)
Class to detect the offset of a single-word utterance.
Record and ends the recording after speech offset. When the voice key trips, the best voice-offset RT estimate
is saved as self.event_offset, in seconds.
Parameters
sec: duration of recording in the absence of speech or other sounds.
delay: extra time to record after speech offset, default 0.3s.
The same methods are available as for class OnsetVoiceKey.
Several helper functions are available for converting and saving sound data from several data formats (numpy ar-
rays, pyo tables) and file formats. All file formats that pyo supports are available, including wav, flac for lossless
compression. mp3 format is not supported (but you can convert to .wav using another utility).
psychopy.voicekey.samples_from_table(table, start=0, stop=-1, rate=44100)
Return samples as a np.array read from a pyo table.
A (start, stop) selection in seconds may require a non-default rate.
psychopy.voicekey.table_from_samples(samples, start=0, stop=-1, rate=44100)
Return a pyo DataTable constructed from samples.
A (start, stop) selection in seconds may require a non-default rate.
psychopy.voicekey.table_from_file(file_in, start=0, stop=-1)
Read data from files, any pyo format, returns (rate, pyo SndTable)
psychopy.voicekey.samples_from_file(file_in, start=0, stop=-1)
Read data from files, returns tuple (rate, np.array(.float64))
psychopy.voicekey.samples_to_file(samples, rate, file_out, fmt=”, dtype=’int16’)
Write data to file, using requested format or infer from file .ext.
Only integer rate values are supported.
See http://ajaxsoundstudio.com/pyodoc/api/functions/sndfile.html
psychopy.voicekey.table_to_file(table, file_out, fmt=”, dtype=’int16’)
Write data to file, using requested format or infer from file .ext.
psychopy.web.haveInternetAccess(forceCheck=False)
Detect active internet connection or fail quickly.
If forceCheck is False, will rely on a cached value if possible.
psychopy.web.requireInternetAccess(forceCheck=False)
Checks for access to the internet, raise error if no access.
psychopy.web.setupProxy(log=True)
Set up the urllib proxy if possible.
The function will use the following methods in order to try and determine proxies:
1. standard urllib.request.urlopen (which will use any statically-defined http-proxy settings)
2. previous stored proxy address (in prefs)
3. proxy.pac files if these have been added to system settings
4. auto-detect proxy settings (WPAD technology)
Further information:
NINE
TROUBLESHOOTING
Regrettably, PsychoPy is not bug-free. Running on all possible hardware and all platforms is a big ask. That said, a
huge number of bugs have been resolved by the fact that there are literally 1000s of people using the software that
have contributed either bug reports and/or fixes.
Below are some of the more common problems and their workarounds, as well as advice on how to get further help.
You may find that you try to launch the PsychoPy application, the splash screen appears and then goes away and
nothing more happens. What this means is that an error has occurred during startup itself.
Commonly, the problem is that a preferences file is somehow corrupt. To fix that see Cleaning preferences and app
data, below.
If resetting the preferences files doesn’t help then we need to get to an error message in order to work out why the
application isn’t starting. The way to get that message depends on the platform (see below).
Windows users (starting from the Command Prompt):
1. Did you get an error message that “This application failed to start because the application configuration is
incorrect. Reinstalling the application may fix the problem”? If so that indicates you need to update your .NET
installation to SP1 .
2. open a Command Prompt (terminal):
(a) go to the Windows Start menu
(b) select Run. . . and type in cmd <Return>
3. paste the following into that window (Ctrl-V doesn’t work in Cmd.exe but you can right-click and select Paste):
4. when you hit <return> you will hopefully get a moderately useful error message that you can Contribute to the
Forum (mailing list)
Mac users:
1. open the Console app (open spotlight and type console)
2. if there are a huge number of messages there you might find it easiest to clear them (the brush icon) and
then start PsychoPy again to generate a new set of messages
409
PsychoPy - Psychology software for Python, Release 3.2.0
An error message may have appeared in a dialog box that is hidden (look to see if you have other open windows
somewhere).
An error message may have been generated that was sent to output of the Coder view:
1. go to the Coder view (from the Builder>View menu if not visible)
2. if there is no Output panel at the bottom of the window, go to the View menu and select Output
3. try running your experiment again and see if an error message appears in this Output view
If you still don’t get an error message but the application still doesn’t start then manually turn off the viewing of
the Output (as below) and try the above again.
Very occasionally an error will occur that crashes the application after the application has opened the Coder Output
window. In this case the error message is still not sent to the console or command prompt.
To turn off the Output view so that error messages are sent to the command prompt/terminal on startup, open your
appData.cfg file (see Cleaning preferences and app data), find the entry:
[coder]
showOutput = True
PsychoPy comes with all the source code included. You may not think you’re much of a programmer, but have a go at
reading the code. You might find you understand more of it than you think!
To have a look at the source code do one of the following:
• when you get an error message in the Coder click on the hyperlinked error lines to see the relevant code
• on Windows
– go to <location of PsychoPy app>\Lib\site-packages\psychopy
– have a look at some of the files there
• on Mac
– right click the PsychoPy app and select Show Package Contents
– navigate to Contents/Resources/lib/pythonX.X/psychopy
Every time you shut down PsychoPy (by normal means) your current preferences and the state of the application (the
location and state of the windows) are saved to disk. If PsychoPy is crashing during startup you may need to edit those
files or delete them completely.
The exact location of those files varies by machine but on windows it will be something like %APPDATA%psychopy3
and on Linux/MacOS it will be something like ~/.psychopy3. You can find it running this in the commandline (if you
have multiple Python installations then make sure you change python to the appropriate one for PsychoPy:
Within that folder you will find userPrefs.cfg and appData.cfg. The files are simple text, which you should be able to
edit in any text editor.
If the problem is that you have a corrupt experiment file or script that is trying and failing to load on startup, you
could simply delete the appData.cfg file. Please also Contribute to the Forum (mailing list) a copy of the file that isn’t
working so that the underlying cause of the problem can be investigated (google first to see if it’s a known issue).
TEN
RECIPES (“HOW-TO”S)
Below are various tips/tricks/recipes/how-tos for PsychoPy. They involve something that is a little more involved than
you would find in FAQs, but too specific for the manual as such (should they be there?).
You might find that you want to add some additional Python module/package to your Standalone version of PsychoPy.
To do this you need to:
• download a copy of the package (make sure it’s for Python 2.7 on your particular platform)
• unzip/open it into a folder
• add that folder to the path of PsychoPy by one of the methods below
Avoid adding the entire path (e.g. the site-packages folder) of separate installation of Python, because that may contain
conflicting copies of modules that PsychoPy is also providing.
As of version 1.70.00 you can do this using the PsychoPy preferences/general. There you will find a preference for
paths which can be set to a list of strings e.g. [‘/Users/jwp/code’, ‘~/code/thirdParty’]
These only get added to the Python path when you import psychopy (or one of the psychopy packages) in your script.
An alternative is to add a file into the site-packages folder of your application. This file should be pure text and have
the extension .pth to indicate to Python that it adds to the path.
On win32 the site-packages folder will be something like:
C:/Program Files/PsychoPy2/lib/site-packages
On macOS you need to right-click the application icon, select ‘Show Package Contents’ and then navigate down to
Contents/Resources/lib/python2.6. Put your .pth file here, next to the various libraries.
The advantage of this method is that you don’t need to do the import psychopy step. The downside is that when you
update PsychoPy to a new major release you’ll need to repeat this step (patch updates won’t affect it though).
413
PsychoPy - Psychology software for Python, Release 3.2.0
10.2 Animation
would cycle the opacity linearly from 0 to 1.0 over 2s (it will then continue incrementing but it doesn’t seem to matter
if the value exceeds 1.0).
Using a code component might allow you to do more sophisticated things (e.g. fade in for a while, hold it, then fade
out). Or more simply, you just create multiple successive Patch stimulus components, each with a different equation
or value in the opacity field depending on their place in the timeline.
A lot of people ask how they can build a standalone application from their Python script. Usually this is because they
have a collaborator and want to just send them the experiment.
In general this is not advisable - the resulting bundle of files (single file on macOS) will be on the order of 100Mb
and will not provide the end user with any of the options that they might need to control the task (for example,
Monitor Center won’t be provided so they can’t to calibrate their monitor). A better approach in general is to get your
collaborator to install the Standalone PsychoPy on their own machine, open your script and press run. (You don’t send
a copy of Microsoft Word when you send someone a document - you expect the reader to install it themself and open
the document).
Nonetheless, it is technically possible to create exe files on Windows, and Ricky Savjani (savjani at bcm.edu) has
kindly provided the following instructions for how to do it. A similar process might be possible on macOS using
py2app - if you’ve done that then feel free to contribute the necessary script or instructions.
Instructions:
1. Download and install py2exe (http://www.py2exe.org/)
2. Develop your PsychoPy script as normal
3. Copy this setup.py file into the same directory as your script
4. Change the Name of progName variable in this file to the Name of your desired executable program name
5. Use cmd (or bash, terminal, etc.) and run the following in the directory of your the two files: python
setup.py py2exe
6. Open the ‘dist’ directory and run your executable
A example setup.py script:
# Created 8-09-2011
# Ricky Savjani
# (savjani at bcm.edu)
f1 = 'C:\\Program Files\\PsychoPy2\\Lib\\site-packages\\PsychoPy-1.65.00-py2.6.
˓→egg\\psychopy\\preferences\\' + files
preference_files.append(f1)
# f1 = 'C:\\Program Files\\PsychoPy2\\Lib\\site-packages\\PsychoPy-1.65.00-py2.6.
˓→egg\\psychopy\\app\\' + files
# app_files.append(f1)
If you’re using the Builder then the way to provide feedback is with a Code Component to generate an appropriate
message (and then a text to present that message). PsychoPy will be keeping track of various aspects of the stimuli
and responses for you throughout the experiment and the key is knowing where to find those.
The following examples assume you have a Loop called trials, containing a Routine with a Keyboard Component
called key_resp. Obviously these need to be adapted in the code below to fit your experiment.
Note: The following generate strings use python ‘formatted strings’. These are very powerful and flexible but a little
strange when you aren’t used to them (they contain odd characters like %.2f). See Generating formatted strings for
more info.
This is actually demonstrated in the demo, ExtendedStroop (in the Builder>demos menu, unpack the demos and then
look in the menu again. tada!)
If you have a Keyboard Component called key_resp then, after every trial you will have the following variables:
To create your msg, insert the following into the ‘start experiment‘ section of the Code Component:
and then insert the following into the Begin Routine section (this will get run every repeat of the routine):
if not key_resp.keys :
msg="Failed to respond"
elif resp.corr:#stored on last run routine
msg="Correct! RT=%.3f" %(resp.rt)
else:
msg="Oops! That was wrong"
In this case the feedback routine would need to come after the loop (the block of trials) and the message needs to use
the stored data from the loop rather than the key_resp directly. Accessing the data from a loop is not well documented
but totally possible.
In this case, to get all the keys pressed in a numpy array:
If you used the ‘Store Correct’ feature of the Keyboard Component (and told psychopy what the correct answer was)
you will also have a variable:
#numpy array storing whether each response was correct (1) or not (0)
trials.data['resp.corr']
So, to create your msg, insert the following into the ‘start experiment‘ section of the Code Component:
and then insert the following into the Begin Routine section (this will get run every repeat of the routine):
Using one of the above methods to generate your msg in a Code Component, you then need to present it to the
participant by adding a text to your feedback Routine and setting its text to $msg.
Warning: The Text Component needs to be below the Code Component in the Routine (because it needs to be
updated after the code has been run) and it needs to set every repeat.
People often want to terminate their Loops before they reach the designated number of trials based on subjects’
responses. For example, you might want to use a Loop to repeat a sequence of images that you want to continue until
a key is pressed, or use it to continue a training period, until a criterion performance is reached.
To do this you need a Code Component inserted into your routine. All loops have an attribute called finished which is
set to True or False (in Python these are really just other names for 1 and 0). This finished property gets checked on
each pass through the loop. So the key piece of code to end a loop called trials is simply:
Of course you need to check the condition for that with some form of if statement.
Example 1: You have a change-blindness study in which a pair of images flashes on and off, with intervening blanks,
in a loop called presentationLoop. You record the key press of the subject with a Keyboard Component called resp1.
Using the ‘ForceEndTrial’ parameter of resp1 you can end the current cycle of the loop but to end the loop itself you
would need a Code Component. Insert the following two lines in the End Routine parameter for the Code Component,
which will test whether more than zero keys have been pressed:
or:
if resp1.keys :
presentationLoop.finished=1
Example 2: Sometimes you may have more possible trials than you can actually display. By default, a loop will
present all possible trials (nReps * length-of-list). If you only want to present the first 10 of all possible trials, you can
use a code component to count how many have been shown, and then finish the loop after doing 10.
This example assumes that your loop is named ‘trials’. You need to add two things, the first to initialize the count, and
the second to update and check it.
Begin Experiment:
myCount = 0
Begin Routine:
myCount = myCount + 1
if myCount > 10:
trials.finished = True
Note: In Python there is no end to finish an if statement. The content of the if or of a for-loop is determined by
the indentation of the lines. In the above example only one line was indented so that one line will be executed if the
statement evaluates to True.
For running PsychoPy in a classroom environment it is probably preferable to have a ‘partial’ network installation.
The PsychoPy library features frequent new releases, including bug fixes and you want to be able to update machines
with these new releases. But PsychoPy depends on many other python libraries (over 200Mb in total) that tend not
to change so rapidly, or at least not in ways critical to the running of experiments. If you install the whole PsychoPy
application on the network then all of this data has to pass backwards and forwards, and starting the app will take even
longer than normal.
The basic aim of this document is to get to a state whereby;
• Python and the major dependencies of PsychoPy are installed on the local machine (probably a disk image to be
copied across your lab computers)
• PsychoPy itself (only ~2Mb) is installed in a network location where it can be updated easily by the administrator
• a file is created in the installation that provides the path to the network drive location
• Start-Menu shortcuts need to be set to point to the local Python but the remote PsychoPy application launcher
Once this is done, the vast majority of updates can be performed simply by replacing the PsychoPy library on the
network drive.
Download the latest version of the Standalone PsychoPy distribution, and run as administrator. This will install a copy
of Python and many dependencies to a default location of
C:\Program Files\PsychoPy2\
You need a network location that is going to be available, with read-only access, to all users on your machines. You
will find all the contents of PsychoPy itself at something like this (version dependent obviously):
C:\Program Files\PsychoPy2\Lib\site-packages\PsychoPy-1.70.00-py2.6.egg
Move that entire folder to your network location and call it psychopyLib (or similar, getting rid of the version-specific
part of the name). Now the following should be a valid path:
<NETWORK_LOC>\psychopyLib\psychopy
The Python installation (in C:\Program Files\PsychoPy2) needs to know about the network location. If Python finds a
text file with extension .pth anywhere on its existing path then it will add to the path any valid paths it finds in the file.
So create a text file that has one line in it:
<NETWORK_LOC>\psychopyLib
You can test if this has worked. Go to C:\Program Files\PsychoPy2 and double-click on python.exe. You should get a
Python terminal window come up. Now try:
If psychopy is not found on the path then there will be an import error. Try adjusting the .pth file, restarting python.exe
and importing again.
The shortcut in the Windows Start Menu will still be pointing to the local (now non-existent) PsychoPy library. Right-
click it to change properties and set the shortcut to point to something like:
You probably spotted from this that the PsychoPy app is simply a Python script. You may want to update the file
associations too, so that .psyexp and .py are opened with:
Lastly, to make the shortcut look pretty, you might want to update the icon too. Set the icon’s location to:
"<NETWORK_LOC>\psychopyLib\psychopy\app\Resources\psychopy.ico"
Fetch the latest .zip release. Unpack it and replace the contents of <NETWORK_LOC>\psychopyLib\ with the contents
of the zip file.
A formatted string is a variable which has been converted into a string (text). In python the specifics of how this is
done is determined by what kind of variable you want to print.
Example 1: You have an experiment which generates a string variable called text. You want to insert this variable into
a string so you can print it. This would be achieved with the following code:
This will produce a variable message which if used in a text object would print the phrase ‘The result is’ followed by
the variable text. In this instance %s is used as the variable being entered is a string. This is a marker which tells the
script where the variable should be entered. %text tells the script which variable should be entered there.
Multiple formatted strings (of potentially different types) can be entered into one string object:
>>> x=5
>>> x1=5124
>>> z='someText'
>>> 'show %s' %(z)
'show someText'
>>> '%0.1f' %(x) #will show as a float to one decimal place
'5.0'
>>> '%3i' %(x) #an integer, at least 3 chars wide, padded with spaces
' 5'
>>> '%03i' %(x) #as above but pad with zeros (good for participant numbers)
'005'
Often psychophysicists using staircase procedures want to interleave multiple staircases, either with different start
points, or for different conditions.
There is now a class, psychopy.data.MultiStairHandler to allow simple access to interleaved staircases of
either basic or QUEST types. That can also be used from the Loops in the Builder. The following method allows the
same to be created in your own code, for greater options.
The method works by nesting a pair of loops, one to loop through the number of trials and another to loop across the
staircases. The staircases can be shuffled between trials, so that they do not simply alternate.
Note: Note the need to create a copy of the info. If you simply do thisInfo=info then all your staircases will end up
pointing to the same object, and when you change the info in the final one, you will be changing it for all.
win=visual.Window([400,400])
#---------------------
#create the stimuli
#---------------------
#create staircases
stairs=[]
for thisStart in info['startPoints']:
#we need a COPY of the info for each staircase
#(or the changes here will be made to all the other staircases)
thisInfo = copy.copy(info)
#now add any specific info for this staircase
thisInfo['thisStart']=thisStart #we might want to keep track of this
thisStair = data.StairHandler(startVal=thisStart,
extraInfo=thisInfo,
nTrials=50, nUp=1, nDown=3,
minVal = 0.5, maxVal=8,
stepSizes=[4,4,2,2,1,1])
stairs.append(thisStair)
#then loop through our randomised order of staircases for this repeat
(continues on next page)
#---------------------
#run your trial and get an input
#---------------------
keys = event.waitKeys() #(we can simulate by pushing left for 'correct')
if 'left' in keys: wasCorrect=True
else: wasCorrect = False
#save data (separate pickle and txt files for each staircase)
dateStr = time.strftime("%b_%d_%H%M", time.localtime())#add the current time
for thisStair in stairs:
#create a filename based on the subject and start value
filename = "%s start%.2f %s" %(thisStair.extraInfo['observer'], thisStair.
˓→extraInfo['thisStart'], dateStr)
thisStair.saveAsPickle(filename)
thisStair.saveAsText(filename)
maxR=46cd/m2
maxG=114
maxB=15
Note that, if you want a pure fully-saturated blue, then you’re limited by the monitor to how bright you can make your
stimulus. If you want brighter colours your blue will need to include some of the other guns (similarly for green if you
want to go above the max luminance for that gun).
A2.1. You should also consider that even if you set appropriate RGB values to display your pairs of chromatic stimuli
at the same luminance that they might still appear different, particularly between observers (and even if your light
measurement device says the luminance is the same, and regardless of the colour space you want to work in). To make
the pairs perceptually isoluminant, each observer should really determine their own isoluminant point. You can do this
with the minimum motion technique or with heterochromatic flicker photometry.
mywin.setMouseVisible(False)
capture = cv.CaptureFromCAM(0)
img = cv.QueryFrame(capture)
pi = Image.fromstring("RGB", cv.GetSize(img), img.tostring(), "raw", "BGR", 0, 1)
print(pi.size)
myStim = visual.GratingStim(win=mywin, tex=pi, pos=[0,0.5], size = [0.6,0.6], opacity
˓→= 1.0, units = 'norm')
myStim.setAutoDraw(True)
while True:
img = cv.QueryFrame(capture)
pi = Image.fromstring("RGB", cv.GetSize(img), img.tostring(), "raw", "BGR", 0, 1)
(continues on next page)
ELEVEN
So far PsychoPy supports bits++ only in the bits++ mode (rather than mono++ or color++). In this mode, a code (the
T-lock code) is written to the lookup table on the bits++ device by drawing a line at the top of the window. The most
likely reason that the demo isn’t working for you is that this line is not being detected by the device, and so the lookup
table is not being modified. Most of these problems are actually nothing to do with PsychoPy /per se/, but to do with
your graphics card and the CRS bits++ box itself.
There are a number of reasons why the T-lock code is not being recognised:
• the bits++ device is in the wrong mode. Open the utility that CRS supply and make sure you’re in the right
mode. Try resetting the bits++ (turn it off and on).
• the T-lock code is not fully on the screen. If you create a window that’s too big for the screen or badly positioned
then the code will be broken/not visible to the device.
• the T-lock code is on an ‘odd’ pixel.
• the graphics card is doing some additional filtering (win32). Make sure you turn off any filtering in the advanced
display properties for your graphics card
• the gamma table of the graphics card is not set to be linear (but this should normally be handled by PsychoPy,
so don’t worry so much about it).
• you’ve got a Mac that’s performing temporal dithering (new Macs, around 2009). Apple have come up with a
new, very annoying idea, where they continuously vary the pixel values coming out of the graphics card every
frame to create additional intermediate colours. This will break the T-lock code on 1/2-2/3rds of frames.
This question is common enough and complex enough to have a section of the manual all of its own. See Timing Issues
and synchronisation
425
PsychoPy - Psychology software for Python, Release 3.2.0
TWELVE
12.1 Workshops
At Nottingham we run an annual workshop on Python/PsychoPy (ie. programming, not Builder). Please see the page
on officialWorkshops for further details.
• Youtube PsychoPy tutorial showing how to build a basic experiment in the Builder interface. That’s a great way
to get started; build your own complete experiment in 15 minutes flat!
• There’s also a subtitled version of the stroop video tutorial (Thanks Kevin Cole for doing that!)
• Jason Ozubko has added a series of great PsychoPy Builder video tutorials too
• Damien Mannion added a similarly great series of PsychoPy programming videos on YouTube
• The most comprehensive guide is the book Building Experiments in PsychoPy by Peirce and MacAskill. The
book is suitable for a wide range of needs and skill sets, with 3 sections for:
– The Beginner (suitable for undergraduate teaching)
– The Professional (more detail for creating more precise studies)
– The Specialist (with info about specialist needs such as studies in fMRI, EEG, . . . )
• At School of Psychology, University of Nottingham, PsychoPy is now used for all first year practical class
teaching. The classes that comprise that first year course are provided below. They were created partially with
funding from the former Higher Education Academy Psychology Network. Note that the materials here will
be updated frequently as they are further developed (e.g. to update screenshots etc) so make sure you have the
latest version of them!
PsychoPy_pracs_2011v2.zip (21MB) (last updated: 15 Dec 2011)
• The GestaltReVision group (University of Leuven) wiki covering PsychoPy (some Builder info and great tuto-
rials for Python/PsychoPy coding of experiments).
427
PsychoPy - Psychology software for Python, Release 3.2.0
• There’s a set of tools for teaching psychophysics using PsychoPy and a PsychoPysics poster from VSS. Thanks
James Ferwerda
• Please see the page on officialWorkshops for further details on coming to an intensive residential Python work-
shop in Nottingham.
• Marco Bertamimi’s book, Programming Illusions for Everyone is a fun way to learn about stimulus rendering
in PsychoPy by learning how to create visual illusions
• Gary Lupyan runs a class on programming experiments using Python/PsychoPy and makes his lecture materials
available on this wiki
• The GestaltReVision group (University of Leuven) offers a three-day crash course to Python and PsychoPy
on a IPython Notebook, and has lots of great information taking you from basic programming to advanced
techniques.
• Radboud University, Nijmegen also has a PsychoPy programming course
• Programming for Psychology in Python - Vision Science has lessons and screencasts on PsychoPy (by Damien
Mannion, UNSW Australia).
• ECEM, August 2013 : Python for eye-tracking workshop with (Sol Simpson, Michael MacAskill and Jon
Peirce). Download Python-for-eye-tracking materials
• VSS
• Yale, 21-23 July : The first ever dedicated PsychoPy workshop/conference was at Yale, 21-23 July 2011. Thanks
Jeremy for organising!
• EPS Satellite workshop, 8 July 2011
• BPS Maths Stats and Computing Section workshop (Dec 2010):
For developers:
THIRTEEN
FOR DEVELOPERS
Note: Much of the following is explained with more detail in the nitime documentation, and then in further detail in
numerous online tutorials.
13.1.1 Workflow
The use of git and the following workflow allows people to contribute changes that can easily be incorporated back
into the project, while (hopefully) maintaining order and consistency in the code. All changes should be tracked and
reversible.
• Create a fork of the central psychopy/psychopy repository
• Create a local clone of that fork
• For small changes
– make the changes directly in the master branch
– push back to your fork
– submit a pull request to the central repository
• For substantial changes (new features)
– create a branch
– when finished run unit tests
– when the unit tests pass merge changes back into the master branch
– submit a pull request to the central repository
429
PsychoPy - Psychology software for Python, Release 3.2.0
Go to github, create an account and make a fork of the psychopy repository You can change your fork in any way you
choose without it affecting the central project. You can also share your fork with others, including the central project.
Install git on your computer. Create and upload an ssh key to your github account - this is necessary for you to push
changes back to your fork of the project at github.
Then, in a folder of your choosing fetch your fork:
The last line connects your copy (with read access) to the central server so you can easily fetch any updates to the
central repository.
Periodically it’s worth fetching any changes to the central psychopy repository (into your master branch, more on that
below):
Now that you’ve fetched the latest version of psychopy using git, you should run this version in order to try out
yours/others latest improvements. See this guide on how to permanently run your git version of psychopy instead of
the version you previously installed.
Run git version for just one session (Linux and Mac only): If you want to switch between the latest-and-greatest
development version from git and the stable version installed on your system, you can choose to only temporarily run
the git version. Open a terminal and set a temporary python path to your psychopy git folder:
$ export PYTHONPATH=/path/to/local/git/folder/
To check that worked you should open python in the terminal and try to import psychopy:
$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psychopy
PsychoPy depends on a lot of other packages and you may get a variety of failures to import them until you have them
all installed in your custom environment!
You can make minor changes directly in the master branch of your fork. After making a change you need to commit a
set of changes to your files with a message. This enables you to group together changes and you will subsequently be
able to go back to any previous commit, so your changes are reversible.
I (Jon) usually do this by opening the graphical user interface that comes with git:
$ git gui
From the GUI you can select (or stage in git terminology) the files that you want to include in this particular commit
and give it a message. Give a clear summary of the changes for the first line. You can add more details about the
changes on lower lines if needed.
If you have internet access then you could also push your changes back up to your fork (which is called your origin
by default), either by pressing the push button in the GUI or by closing that and typing:
$ git push
Informative commit messages are really useful when we have to go back through the repository finding the time that a
particular change to the code occurred. Precede your message with one or more of the following to help us spot easily
if this is a bug fix (which might need pulling into other development branches) or new feature (which we might want
to avoid pulling in if it might disrupt existing code).
• BF : bug fix
• FF : ‘feature’ fix. This is for fixes to code that hasn’t been released
• RF : refactoring
• NF : new feature
• ENH : enhancement (improvement to existing code)
• DOC: for all kinds of documentation related commits
• TEST: for adding or changing tests
When making commits that fall into several commit categories (e.g., BF and TEST), please make separate commits
for each category and avoid concatenating commit message prefixes. E.g., please do not use BF/TEST, because
this will affect how commit messages are sorted when we pull in fixes for each release.
NB: The difference between BF and FF is that BF indicates a fix that is appropriate for back-porting to earlier versions,
whereas FF indicates a fix to code that has not been released, and so cannot be back-ported.
Only a couple of people have direct write-access to the psychopy repository, but you can get your changes included
in upstream by pushing your changes back to your github fork and then submitting a pull request. Communication is
good, and hopefully you have already been in touch (via the user or dev lists) about your changes.
When adding an improvement or new feature, consider how it might impact others. Is it likely to be generally useful,
or is it something that only you or your lab would need? (It’s fun to contribute, but consider: does it actually need
to be part of PsychoPy?) Including more features has a downside in terms of complexity and bloat, so try to be sure
that there is a “business case” for including it. If there is, try at all times to be backwards compatible, e.g., by adding
a new keyword argument to a method or function (not always possible). If it’s not possible, it’s crucial to get wider
input about the possible impacts. Flag situations that would break existing user scripts in your commit messages.
Part of sharing your code means making things sensible to others, which includes good coding style and writing some
documentation. You are the expert on your feature, and so are in the best position to elaborate nuances or gotchas. Use
meaningful variable names, and include comments in the code to explain non-trivial things, especially the intention
behind specific choices. Include or edit the appropriate doc-string, because these are automatically turned into API
documentation (via sphinx). Include doc-tests if that would be meaningful. The existing code base has a comment /
code ratio of about 28%, which earns it high marks.
For larger changes and especially new features, you might need to create some usage examples, such as a new Coder
demo, or even a Builder demo. These can be invaluable for being a starting point from which people can adapt things
to the needs of their own situation. This is a good place to elaborate usage-related gotchas.
In terms of style, try to make your code blend in with and look like the existing code (e.g., using about the same level
of comments, use camelCase for var names, despite the conflict with the usual PEP – we’ll eventually move to the
underscore style, but for now keep everything consistent within the code base). In your own code, write however you
like of course. This is just about when contributing to the project.
For more substantial work, you should create a new branch in your repository. Often while working on a new feature
other aspects of the code will get broken and the master branch should always be in a working state. To create a new
branch:
You can push your new branch back to your fork (origin) with:
When you’re done run the unit tests for your feature branch. Set the debug preference setting (in the app section) to
True, and restart psychopy. This will enable access to the test-suite. In debug mode, from the Coder (not Builder) you
can now do Ctrl-T / Cmd-T (see Tools menu, Unit Testing) to bring up the unit test window. You can select a subset
of tests to run, or run them all.
It’s also possible to run just selected tests, such as doctests within a single file. From a terminal window:
If the tests pass you hopefully haven’t damaged other parts of PsychoPy (!?). If possible add a unit test for your new
feature too, so that if other people make changes they don’t break your work!
You can merge your changes back into your master branch with:
Merge conflicts happen, and need to be resolved. If you configure your git preferences (~/.gitconfig) to include:
[merge]
summary = true
log = true
tool = opendiff
then you’ll be able to use a handy GUI interface (opendiff) for reviewing differences and conflicts, just by typing:
git mergetool
from the command line after hitting a merge conflict (such as during a git pull upstream master).
Once you’ve folded your new code back into your master and pushed it back to your github fork then it’s time to Share
your improvement with others.
There are several ways to add documentation, all of them useful: doc strings, comments in the code, and demos to
show an example of actual usage. To further explain something to end-users, you can create or edit a .rst file that will
automatically become formatted for the web, and eventually appear on www.psychopy.org.
You make a new file under psychopy/docs/source/, either as a new file or folder or within an existing one.
To test that your doc source code (.rst file) does what you expect in terms of formatting for display on the web, you
can simply do something like (this is my actual path, unlikely to be yours):
$ cd /Users/jgray/code/psychopy/docs/
$ make html
Do this within your docs directory (requires sphinx to be installed, try “easy_install sphinx” if it’s not working). That
will add a build/html sub-directory.
Then you can view your new doc in a browser, e.g., for me:
file:///Users/jgray/code/psychopy/docs/build/html/
Push your changes to your github repository (using a “DOC:” commit message) and let Jon know, e.g. with a pull
request.
Builder Components are auto-detected and displayed to the experimenter as icons (in the right-most panel of the
Builder interface panel). This makes it straightforward to add new ones.
All you need to do is create a list of parameters that the Component needs to know about (that will automatically
appear in the Component’s dialog) and a few pieces of code specifying what code should be called at different points
in the script (e.g. beginning of the Routine, every frame, end of the study etc. . . ). Many of these will come simply
from subclassing the _base or _visual Components.
To get started, Add a new feature branch for the development of this component. (If this doesn’t mean anything to you
then see Using the repository )
You’ll mainly be working in the directory . . . /psychopy/experiment/components/. Take a look at several existing
Components (such as image.py), and key files including _base.py and _visual.py.
There are three main steps, the first being by far the most involved.
It’s most straightforward to model a new Component on one of the existing ones. Be prepared to specify what your
Component needs to do at several different points in time: the first trial, every frame, at the end of each routine, and at
the end of the experiment. In addition, you may need to sacrifice some complexity in order to keep things streamlined
enough for a Builder (see e.g., ratingscale.py).
Your new Component class (in your file newcomp.py) should inherit from BaseComponent (in _base.py), VisualCom-
ponent (in _visual.py), or KeyboardComponent (in keyboard.py). You may need to rewrite some or all some of these
methods, to override default behavior:
Calling super() will create the basic default set of params that almost every component will need: name, startVal, start-
Type, etc. Some of these fields may need to be overridden (e.g., durationEstim in sound.py). Inheriting from Visual-
Component (which in turn inherits from BaseComponent) adds default visual params, plus arranges for Builder scripts
to import psychopy.visual. If your component will need other libs, call self.exp.requirePsychopyLib([‘neededLib’])
(see e.g., parallelPort.py).
At the top of a component file is a dict named _localized. It contains mappings that allow a strict separation of internal string values (= used in logic, never displayed) from values used for display in the Builder interface (= for display only, possibly translated, never used in logic). The .hint and .label fields of params['someParam'] should always be set to a localized value, either by using a dict entry such as _localized['message'], or via the globally available translation function, _('message'). Localized values must not be used elsewhere in a component definition.
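For illustration, a sketch of the convention (the param names message and duration are hypothetical; _ is the translation function described in the localization section below):

_localized = {'message': _('message'),  # display-only values
              'duration': _('Duration (s)')}
# ... later, e.g. in __init__:
self.params['message'].hint = _localized['message']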
Very occasionally, you may also need to edit settings.py, which writes out the set-up code for the whole experiment
(e.g., to define the window). For example, this was necessary for the ApertureComponent, to pass allowStencil=True
to the window creation.
Your new Component writes code into a buffer that becomes an executable Python file, xxx_lastrun.py (where xxx is whatever the experimenter specifies when saving from the Builder, xxx.psyexp). You will make many calls of this kind in your newcomp.py file:
buff.writeIndented(your_python_syntax_string_here)
You have to manage the indentation level of the output code; see experiment.IndentingBuffer().
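For instance (a sketch, assuming the relative form of IndentingBuffer's setIndentLevel() method):

buff.writeIndented("if continueRoutine:\n")
buff.setIndentLevel(1, relative=True)   # generated code steps in one level
buff.writeIndented("win.flip()\n")
buff.setIndentLevel(-1, relative=True)  # and back out again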
xxx_lastrun.py is the file that gets built when you run xxx.psyexp from the Builder. So you will want to look at
xxx_lastrun.py frequently when developing your component.
Name-space
There are several internal variables (i.e. names of Python objects) that have a specific, hardcoded meaning within
xxx_lastrun.py. You can expect the following to be there, and they should only be used in the original way (or
something will break for the end-user, likely in a mysterious way):
Handling of variable names is under active development, so this list may well be out of date. (If so, you might consider
updating it or posting a note to the PsychoPy Discourse developer forum.)
Preliminary testing suggests that there are 600-ish names from numpy or numpy.random, plus the following:
['KeyResponse', '__builtins__', '__doc__', '__file__', '__name__', '__package__',
 'buttons', 'core', 'data', 'dlg', 'event', 'expInfo', 'expName', 'filename', 'gui', ...]
Yet other names get derived from user-entered names, like trials -> thisTrial.
Params
self.params is a key construct that you build up in __init__. You need name, startTime, duration, and several other params to be defined, or you will get errors. 'name' should be of type 'code'.
The Param() class is defined in psychopy.app.builder.experiment.Param(). A very useful thing that Params know is
how to create a string suitable for writing into the .py script. In particular, the __str__ representation of a Param will
format its value (.val) based on its type (.valType) appropriately. This means that you don’t need to check or handle
whether the user entered a plain string, a string with a code trigger character ($), or the field was of type code in the
first place. If you simply request the str() representation of the param, it is formatted correctly.
To indicate that a param (e.g., thisParam) should be considered an advanced feature, set its category to advanced: self.params['thisParam'].categ = 'Advanced'. The GUI shown to the experimenter will then automatically place it on the 'Advanced' tab. Other categories work similarly (Custom, etc.).
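A sketch of defining a param (the flashDur name and the _localized keys are hypothetical; check the keyword arguments against the Param class in the source):

self.params['flashDur'] = Param(
    0.5, valType='num', updates='constant',
    hint=_localized['flashDur hint'],
    label=_localized['flashDur'],
    categ='Advanced')  # shown on the 'Advanced' tab of the dialog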
During development, it can sometimes be helpful to save the params into the xxx_lastrun.py file as comments, so you
can see what is happening:
def writeInitCode(self, buff):
    # for debugging during Component development:
    buff.writeIndented("# self.params for aperture:\n")
    for p in self.params:
        try:
            buff.writeIndented("# %s: %s <type %s>\n" %
                               (p, self.params[p].val, self.params[p].valType))
        except Exception:
            pass
syntax errors in newcomp.py: The PsychoPy app will fail to start if there are syntax errors in any of the components that are auto-detected. Just correct them and start the app again.
param[].val: If you have a boolean variable (e.g., my_flag) as one of your params, note that self.params["my_flag"] is always True (the param exists -> True). So in a boolean context you almost always want the .val part, e.g., if self.params["my_flag"].val:.
However, you do not always want .val. Specifically, in a string/unicode context (= to trigger the self-formatting features of Param()s), you almost always want "%s" % self.params['my_flag'], without .val. Note that it's better to do this via "%s" than str() because str(self.params["my_flag"]) coerces things to type str (squashing unicode) whereas %s works for both str and unicode.
Travis testing: Before submitting a pull request with the new component, you should regenerate the componsTemplate.txt file. This is a text file that lists the attributes of all of the user interface settings and options in the various components. It is used during the Travis automated testing process when a pull request is submitted to GitHub, allowing the detection of errors that may have been caused in refactoring. Your new component needs to have entries added to this file if the Travis testing is going to pass successfully.
To re-generate the file, cd to the directory .../psychopy/tests/test_app/test_builder/ and run:
This will over-write the existing file so you might want to make a copy in case the process fails.
Compatibility issues: As of May 2018, that script is not yet Python 3 compatible, and on a Mac you might need to use pythonw.
Using your favorite image software, make an icon for your Component with a descriptive name, e.g., newcomp.png.
Dimensions = 48 × 48. Put it in the components directory.
In newcomp.py, have a line near the top:
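Modeled on existing components, something like (illustrative):

from os import path
thisFolder = path.abspath(path.dirname(__file__))  # the components directory
iconFile = path.join(thisFolder, 'newcomp.png')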
Just make a descriptively-named text file that ends in .rst ("reStructuredText"), and put it in psychopy/docs/source/builder/components/. It will get auto-formatted and end up at http://www.psychopy.org/builder/components/newcomp.html
Each coder demo is intended to illustrate a key PsychoPy feature (or two), especially in ways that show usage in
practice, and go beyond the description in the API. The aim is not to illustrate every aspect, but to get people up to
speed quickly, so they understand how basic usage works, and could then play around with advanced features.
As a newcomer to PsychoPy, you are in a great position to judge whether the comments and documentation are
clear enough or not. If something is not clear, you may need to ask a PsychoPy contributor for a description; email
[email protected].
Here are some style guidelines, written for the OpenHatch event(s) but hopefully useful after that too. These are
intended specifically for the coder demos, not for the internal code-base (although they are generally quite close).
The idea is to have clean code that looks and works the same way across demos, while leaving the functioning mostly
untouched. Some small changes to function might be needed (e.g., to enable the use of ‘escape’ to quit), but typically
only minor changes like this.
• Generally, when you run the demo, does it look good and help you understand the feature? Where might there
be room for improvement? You can either leave notes in the code in a comment, or include them in a commit
message.
• Standardize the top stuff to have 1) a shebang with python, 2) utf-8 encoding, and 3) a comment:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Demo name, purpose, description (1-2 sentences, although some demos need
more explanation).
"""
For the comment / description, it's a good idea to read and be informed by the relevant parts of the API (see http://psychopy.org/api/api.html), but there's no need to duplicate that text in your comment. If you are unsure, please post to the dev list [email protected].
• Follow PEP-8 mostly, some exceptions:
– current PsychoPy convention is to use camelCase for variable names, so don’t convert those to underscores
– 80 char columns can spill over a little. Try to keep things within 80 chars most of the time.
– do allow multiple imports on one line if they are thematically related (e.g., import os, sys, glob).
– inline comments are ok (because the code demos are intended to illustrate and explain usage in some detail,
more so than typical code).
• Check all imports:
– remove any unnecessary ones
– replace import time with from psychopy import core. Use core.getTime() (= seconds since the script started) or core.getAbsTime() (= seconds, unix-style) instead of time.time(), for all time-related functions and methods, not just time().
– add from __future__ import division, even if not needed. And make sure that doing so does not break the
demo!
• Fix any typos in comments; convert any lingering British spellings to US, e.g., change colour to color
• Prefer if <boolean>: as a construct instead of if <boolean> == True:. (There might not be any to change).
• If you have to choose, opt for more verbose but easier-to-understand code instead of clever or terse formulations.
This is for readability, especially for people new to python. If you are unsure, please add a note to your commit
message, or post a question to the dev list [email protected].
• Standardize variable names:
– use win for the visual.Window(), and so win.flip()
• Provide a consistent way for a user to exit a demo using the keyboard, ideally enabled on every visual frame: use if len(event.getKeys(['escape'])): core.quit(). Note: if there is a previous event.getKeys() call, it can slurp up the 'escape' keys, so check for 'escape' first.
• Time-out after 10 seconds, if there’s no user response and a timeout is appropriate for the demo (and a longer
time-out might be needed, e.g., for ratingScale.py):
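For example (a sketch; assumes a window named win and the usual core / event imports):

timer = core.CountdownTimer(10)  # time out after 10 seconds
while timer.getTime() > 0:
    if event.getKeys(['escape']):
        core.quit()
    # draw the demo's stimuli here ...
    win.flip()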
• Most demos are not full screen. For any that are full-screen, see if it can work without being full screen. If it
has to be full-screen, add some text to say that pressing ‘escape’ will quit.
• If displaying log messages to the console seems to help understand the demo, here’s how to do it:
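For example, using the psychopy.logging module:

from psychopy import logging
logging.console.setLevel(logging.DEBUG)  # send DEBUG and above to the console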
• End a script with win.close() (assuming the script used a visual.Window), and then core.quit(), even though it's not strictly necessary.
PsychoPy is used worldwide. Starting with v1.81, many parts of PsychoPy itself (the app) can be translated into any
language that has a unicode character set. A translation affects what the experimenter sees while creating and running
experiments; it is completely separate from what is shown to the subject. Translations of the online documentation
will need a completely different approach.
In the app, translation is handled by a function, _translate(), which takes a string argument. (The standard name
is _(), but unfortunately this conflicts with _ as used in some external packages that PsychoPy depends on.) The
_translate() function returns a translated, unicode version of the string in the locale / language that was selected
when starting the app. If no translation is available for that locale, the original string is returned (= English).
A locale setting (e.g., ‘ja_JP’ for Japanese) allows the end-user (= the experimenter) to control the language that will
be used for display within the app itself. (It can potentially control other display conventions as well, not just the
language.) PsychoPy will obtain the locale from the user preference (if set), or the OS.
Workflow: 1) Make a translation from English (en_US) to another language. You’ll need a strong understanding of
PsychoPy, English, and the other language. 2) In some cases it will be necessary to adjust PsychoPy’s code, but only if
new code has been added to the app and that code displays text. Then re-do step 1 to translate the newly added strings.
See notes in psychopy/app/localization/readme.txt.
As a translator, you will likely introduce many new people to PsychoPy, and your translations will greatly influence
their experience. Try to be completely accurate; it is better to leave something in English if you are unsure how
PsychoPy is supposed to work.
To translate a given language, you'll need to know the standard 5-character code (see psychopy/app/localization/mappings). E.g., for Japanese, wherever LANG appears in the documentation here, you should use the actual code, i.e., ja_JP (without quotes).
A free app called Poedit is useful for managing a translation. For a given language, the translation mappings (from en_US to LANG) are stored in a .po file (a text file with extension .po); after editing with Poedit, these are converted into a binary format (extension .mo) that is used while the app is running.
• Start translation (do these steps once):
Start a translation by opening psychopy/app/locale/LANG/LC_MESSAGE/messages.po in Poedit. If there is no
such .po file, create a new one:
– make a new directory psychopy/app/locale/LANG/LC_MESSAGE/ if needed. Your LANG will be auto-
detected within PsychoPy only if you follow this convention. You can copy metadata (such as the project
name) from another .po file.
Set your name and e-mail address in "Preferences..." of the "File" menu. Set translation properties (such as project name, language and charset) in the Catalog Properties dialog, which can be opened from "Properties..." of the "Catalog" menu.
In Poedit's properties dialog, set the "source keywords" to include '_translate'. This allows Poedit to find the strings in PsychoPy that are to be translated.
To add paths where Poedit scans .py files, open the "Sources paths" tab on the Catalog Properties dialog, and set "Base path:" to "../../../../../" (= psychopy/psychopy/). Nothing more should be needed. If you've created a new catalog, save it to psychopy/app/locale/LANG/LC_MESSAGE/messages.po.
Probably not needed, but check anyway: edit the file containing language code and name mappings, psychopy/app/localization/mappings, and fill in the name for your language. Give a name that will be familiar to people who read that language (i.e., use the name of the language as written in the language itself, not in en_US). About 25 are already done.
• Edit a translation:
Open the .po file with Poedit and press the "Update" button on the toolbar to pick up newly added / removed strings that need to be translated. Select a string you want to translate and enter your translation in the "Translation:" box. If you are unsure where a string is used, point at the string in the "Source text" box and right-click; you can see where the string is defined.
• Technical terms should not be translated: Builder, Coder, PsychoPy, Flow, Routine, and so on. (See the Japanese
translation for guidance.)
• If there are formatting arguments in the original string (%s, %(first)i), the same number of arguments must also appear in the translation (but their order is not constrained to the original order). If they are named (e.g., %(first)i), that part should not be translated; here first is a Python name.
• If you think your translation might have room for improvement, mark it as "fuzzy". (Saving Notes does not work for me on Mac; it seems like a bug in Poedit.)
• After making a new translation, saving it in poedit will save the .po file and also make an associated .mo file.
You need to update the .mo file if you want to see your changes reflected in PsychoPy.
• The start-up tips are stored in separate files, and are not translated by poedit. Instead:
• copy the default version (named psychopy/app/Resources/tips.txt) to a new file in the same directory, named
tips_LANG.txt. Then replace English-language tips with translated tips. Note that some of the humor might
not translate well, so feel free to leave out things that would be too odd, or include occasional mild humor that
would be more appropriate. Humor must be respectful and suitable for using in a classroom, laboratory, or other
professional situation. Don’t get too creative here. If you have any doubt, best leave it out. (Hopefully it goes
without saying that you should avoid any religious, political, disrespectful, or sexist material.)
• in Poedit, translate the file name: translate "tips.txt" as "tips_LANG.txt"
• Commit both the .po and .mo files to GitHub (not just one or the other), along with any other changed files (e.g., tips_LANG.txt, localization/mappings).
This is mostly complete (as of 1.81.00), but will be needed for new code that displays text to users of the app (experimenters, not study participants).
There are a few things to keep in mind when working on the app’s code to make it compatible with translations. If you
are only making a translation, you can skip this section.
• In PsychoPy’s code, the language to be used should always be English with American spellings (e.g., “color”).
• Within the app, the return value from _translate() should be used only for display purposes: in menus,
tooltips, etc. A translated value should never be used as part of the logic or internal functioning of PsychoPy. It
is purely a “skin”. Internally, everything must be in en_US.
• Basic usage is exactly what you expect: _translate("hello") will return a unicode string at run-time,
using mappings for the current locale as provided by a translator in a .mo file. (Not all translations are available
yet, see above to start a new one.) To have the app display a translated string to the experimenter, just display
the return value from the underscore translation function.
• The strings to be translated must appear somewhere in the app code base as explicit strings within _translate(). If you need to translate a variable, e.g., named str_var, using the expression _translate(str_var), then somewhere else you need to explicitly give all the possible values that str_var can take, and enclose each of them within the translate function. It is okay for that to be elsewhere, even in another file, but not in a comment. This allows poedit to discover all of the strings that need to be translated. (This is one of the purposes of the _localized dict at the top of some modules.)
• _translate() should not be given a null string to translate; if you use a variable, check that it is not '' to avoid invoking _translate('').
• Strings that contain formatting placeholders (e.g., %d, %s, %.4f) require a little more thought. Single placeholders are easy enough: _translate("hello, %s") % name.
• Strings with multiple formatting placeholders require named arguments, because positional arguments are not always sufficient to disambiguate things, depending on the phrase and the language being translated into: _translate("hello, %(first)s %(last)s") % {'first': firstname, 'last': lastname}
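Putting the two placeholder rules side by side (the names name, firstname and lastname are illustrative):

msg1 = _translate("hello, %s") % name
msg2 = _translate("hello, %(first)s %(last)s") % {'first': firstname,
                                                  'last': lastname}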
• Localizing drop-down menus is a little more involved. Such menus should display localized strings, but return selected values as integers (GetSelection() returns the position within the list). Do not use GetStringSelection(), because this will return the localized string, breaking the rule about a strict separation of display and logic. See Builder ParamDialogs for examples.
When there are more translations (and if they make the app download large) we might want to manage things differently (e.g., have translations as a separate download from the app).
Adding a new menu-item to the Builder (or Coder) is relatively straightforward, but there are several files that need to
be changed in specific ways.
13.6.1 1. makeMenus()
The code that constructs the menus for the Builder is within a method named makeMenus(), within class
builder.BuilderFrame(). Decide which submenu your new command fits under, and look for that section (e.g., File,
Edit, View, and so on). For example, to add an item for making the Routine panel items larger, I added two lines within
the View menu, by editing the makeMenus() method of class BuilderFrame within psychopy/app/builder/builder.py
(similar for Coder):
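The two lines are, roughly (a hypothetical reconstruction; the menu text and binding style are illustrative only):

self.viewMenu.Append(self.IDs.tbIncrRoutineSize,
                     _("&Larger Routine items\t%s") % self.app.keys['largerRoutine'],
                     _("Larger items in the Routine panel"))
self.Bind(wx.EVT_MENU, self.routinePanel.increaseSize,
          id=self.IDs.tbIncrRoutineSize)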
Note the use of the translation function, _(), for translating text that will be displayed to users (menu listing, hint).
13.6.2 2. wxIDs.py
A new item needs to have a (numeric) ID so that wx can keep track of it. Here, the number is self.IDs.tbIncrRoutineSize,
which I had to define within the file psychopy/app/wxIDs.py:
tbIncrRoutineSize=180
It's possible that, instead of hard-coding it like this, it would be better to call wx.NewIdRef(); wx will take care of avoiding duplicate IDs, presumably.
I also defined a key to use as a keyboard short-cut for activating the new menu item:
self.app.keys['largerRoutine']
The actual key is defined in a preference file. Because PsychoPy is multi-platform, you need to add info to four different .spec files, all of them within the psychopy/preferences/ directory, one per operating system (Darwin, FreeBSD, Linux, Windows). For Darwin.spec (meaning macOS), I added two lines. The first line is not merely a comment: it is also automatically used as a tooltip (in the preferences dialog, under key-bindings); the second is the actual short-cut key to use:
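A plausible reconstruction of those two lines (configobj .spec syntax; the comment wording is illustrative):

# increase the size of the items in the Routine panel
largerRoutine = string(default='Ctrl++')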
This means that the user has to hold down the Ctrl key and then press the + key. Note that on Macs, ‘Ctrl’ in the spec
is automatically converted into ‘Cmd’ for the actual key to use; in the .spec, you should always specify things in terms
of ‘Ctrl’ (and not ‘Cmd’). The default value is the key-binding to use unless the user defines another one in her or his
preferences (which then overrides the default). Try to pick a sensible key for each operating system, and update all
four .spec files.
The second line within makeMenus() adds the key-binding definition into wx’s internal space, so that when the key is
pressed, wx knows what to do. In the example, it will call the method self.routinePanel.increaseSize, which I had to
define to do the desired behavior when the method is called (in this case, increment an internal variable and redraw the
routine panel at the new larger size).
13.6.5 5. Documentation
To let people know that your new feature exists, add a note about your new feature in the CHANGELOG.txt, and
appropriate documentation in .rst files.
Happy Coding Folks!!
CHAPTER
FOURTEEN
THE PSYCHOPY EXPERIMENT FILE FORMAT (.PSYEXP)
The file format used to save experiments constructed in the PsychoPy Builder was created especially for the purpose, but it is an open format, using a basic XML form, that may be of use to other similar software. Indeed the Builder itself could be used to generate experiments on different backends (such as Vision Egg, PsychToolbox or PyEPL). The XML format of the file makes it extremely platform independent, as well as moderately(?!) easy for humans to read. There was a further suggestion to generate an XSD (or similar) schema against which psyexp files could be validated. That is a low-priority but welcome addition, if you wanted to work on it(!) There is a basic XSD (XML Schema Definition) available in psychopy/app/builder/experiment.xsd.
The simplest way to understand the file format is probably to create an experiment, save it, and open the file in an XML-aware editor/viewer (e.g., change the file extension from .psyexp to .xml and then open it in Firefox). An example (from the Stroop demo) is shown below.
The file format maps fairly obviously onto the structure of experiments constructed with the Builder interface, as
described here. There are general Settings for the experiment, then there is a list of Routines and a Flow that describes
how these are combined.
As with any XML file, the format contains object nodes, which can have direct properties and also child nodes. For instance, the outermost node of the .psyexp file is the experiment node, with properties that specify the version of PsychoPy that was used to save the file most recently and the encoding of text within the file (ascii, unicode etc.), and with child nodes Settings, Routines and Flow.
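In outline, a file looks like this (an illustrative skeleton, not the full Stroop example):

<PsychoPy2experiment version="3.2.0" encoding="utf-8">
  <Settings>
    <Param name="Monitor" val="testMonitor" valType="str" updates="None"/>
    ...
  </Settings>
  <Routines>
    <Routine name="trial">
      <TextComponent name="word">...</TextComponent>
    </Routine>
  </Routines>
  <Flow>
    <LoopInitiator loopType="TrialHandler" name="trials">...</LoopInitiator>
    <Routine name="trial"/>
    <LoopTerminator name="trials"/>
  </Flow>
</PsychoPy2experiment>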
14.1 Parameters
Many of the nodes within this XML description of the experiment contain Param entries, representing different parameters of that Component. Nearly all parameter nodes have a name property and a val property (the parameter node with the name "advancedParams" does not). Most also have a valType property, which can take the values 'bool', 'code', 'extendedCode', 'num', 'str', and an updates property that specifies whether this parameter changes during the experiment and, if so, whether it changes 'every frame' (of the monitor) or 'every repeat' (of the Routine).
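For example, a position parameter might appear as (illustrative):

<Param name="pos" val="[0, 0]" valType="code" updates="constant"/>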
14.2 Settings
The Settings node contains a number of parameters that, in PsychoPy, would normally be set in the Experiment settings
dialog, such as the monitor to be used. This node contains a number of Parameters that map onto the entries in that
dialog.
14.3 Routines
This node provides a sequence of xml child nodes, each of which describes a Routine. Each Routine contains a number
of children, each specifying a Component, such as a stimulus or response collecting device. In the Builder view, the
Routines obviously show up as different tabs in the main window and the Components show up as tracks within that
tab.
14.4 Components
Each Component is represented in the .psyexp file as a set of parameters, corresponding to the entries in the appropriate
component dialog box, that completely describe how and when the stimulus should be presented or how and when the
input device should be read from. Different Components have slightly different nodes in the xml representation which
give rise to different sets of parameters. For instance the TextComponent nodes has parameters such as colour and
font, whereas the KeyboardComponent node has parameters such as forceEndTrial and correctIf.
14.5 Flow
The Flow node is rather more simple. Its children simply specify objects that occur in a particular order in time. A
Routine described in this flow must exist in the list of Routines, since this is where it is fully described. One Routine
can occur once, more than once or not at all in the Flow. The other children that can occur in a Flow are LoopInitiators
and LoopTerminators which specify the start and endpoints of a loop. All loops must have exactly one initiator and
one terminator.
14.6 Names
For the experiment to generate valid PsychoPy code, the name parameters of all objects (Components, Loops and Routines) must be unique and contain no spaces. That is, an experiment cannot have two different Routines called 'trial', nor even a Routine called 'trial' and a Loop called 'trial'.
The Parameter names belonging to each Component (or the Settings node) must be unique within that Component, but can be identical to parameters of other Components, or can even match the name of a Component. A TextComponent should not, for example, have multiple 'pos' parameters, but other Components generally will, and a Routine called 'pos' would also be permissible.
(the closing portion of the Stroop demo example:)
</LoopInitiator>
<Routine name="trial"/>
<LoopTerminator name="trials"/>
<Routine name="thanks"/>
</Flow>
</PsychoPy2experiment>