Entry 5

[Figure: single-trial diagram for the auditory spatial attention task (entry5.png)]

Authors

  • Daniel McCloy

This figure illustrates the time course of a single trial in an experiment investigating auditory spatial attention. The task is oddball detection: listeners are told the category of words in each spatial stream, and press a button when they hear a word that does not match its stream’s category. Only two of the four streams are cued as “relevant” on a given trial, so oddballs in the other streams should be ignored. Part (A) illustrates the spatial origins of the four auditory streams, and the category names used to cue the listener to the locations of the relevant (dark text) and irrelevant (light text) streams. Stream locations in the azimuthal plane are indicated spatially, and also given as values along the ordinate in part (B), in degrees relative to the direction the listener is facing. Part (B) shows the trial time course, with white background bands indicating the “relevant” streams; the duration of each word is marked by the width of a small gray rectangle, which also shows the word’s text and waveform. Relevant-stream oddballs are shown in green, and oddballs in ignored streams in red.

The experiment monitors the listener’s brain activity during the task using simultaneous magneto- and electro-encephalography (M/EEG), and compares trials in which the “relevant” streams are spatially separated (as shown here) with trials in which they are spatially adjacent. The purpose is to understand whether listeners monitor multiple auditory streams by rapidly switching attention back and forth between them, or whether they can monitor spatially distinct streams in parallel. The experiment also compares using semantic categories as the basis for oddball detection (shown here) with trials in which the categories are unknown but the target words (e.g., “arm” and “leg” in this figure) are known in advance. This comparison highlights different stages of linguistic processing in the brain, so that the cortical interaction between auditory spatial attention and linguistic processing can be observed.

The image of the listener’s head in (A) is the outer skin boundary element model (BEM) from an MRI scan, rendered as a triangular mesh using the Mayavi library and passed to matplotlib as an array of RGBA values for plotting as a flat image. Normally the BEM would be loaded into Python using mne.read_bem_surfaces(), but in this case the surface has been pre-decimated to reduce file size and saved as an .npz file. The trial time course in (B) is pure matplotlib, and the two halves of the figure are combined and lettered using svgutils.
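
For completeness, the pre-decimation step mentioned above might look roughly like the following sketch. The input filename 'sample-head.fif', the assumption that the outer-skin surface comes first in the file, and the target triangle count are all hypothetical, and decimate_surface needs a working VTK installation:

import numpy as np
import mne
from mne.surface import decimate_surface

## load all BEM surfaces from a FIF file ('sample-head.fif' is hypothetical)
surfs = mne.read_bem_surfaces('sample-head.fif')
skin = surfs[0]  # assumed: the outer-skin surface comes first in this file
## decimate the mesh and save only the arrays the plotting script needs
pts, tris = decimate_surface(skin['rr'], skin['tris'], n_triangles=10000)
np.savez('head-surface.npz', pts=pts, tris=tris)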

Source

# -*- coding: utf-8 -*-
"""
===============================================================================
Script 'Psychoacoustics trial diagram'
===============================================================================

This script draws a diagram showing the timecourse of a given trial.
"""
# Author: Dan McCloy <drmccloy@uw.edu>
# License: BSD (3-clause)

import os
from subprocess import call

import numpy as np
import matplotlib.pyplot as plt
import svgutils.transform as svgt
from matplotlib import rcParams, font_manager
from mayavi import mlab

#%% # ## ## ## ## ## ## ##
## BASIC PLOTTING SETUP ##
## ## ## ## ## ## ## ## ##
rcp = {'font.sans-serif': ['Source Sans Pro'], 'font.style': 'normal',
       'font.size': 10, 'font.variant': 'normal', 'font.weight': 'medium',
       'pdf.fonttype': 42, 'lines.solid_capstyle': 'round'}
## FONT SPEC
fp_small = font_manager.FontProperties(size=7.5)
fp_hugebold = font_manager.FontProperties(size=48, weight='bold')
## COLOR SPEC
red = ['#DD7788', '#AA4455']
grn = ['#88CCAA', '#44AA77']
## RESET
plt.rcdefaults()
rcParams.update(rcp)

#%% # ## ## ## ##
## TRIAL DATA  ##
## ## ## ## ## ##
## AUDIO DATA (pre-downsampled by factor of 8 for plotting in small space)
audio_dict = np.load('audio-dict.npz')
word_durs = {key: val.size * 8 / 44100. for key, val in audio_dict.items()}
## PARAMETERS FOR A SPECIFIC TRIAL
params = np.load('trial-params.npz')
catnames = params['catnames']
wrdnames = params['wrdnames']
targs = params['targs']
foils = params['foils']
atn = params['atn']
d = params['d']
w = params['w']
x = params['x']
y = params['y']

#%% # ## ## ## ## ## ## ##
## MRI OF HEAD SURFACE  ##
## ## ## ## ## ## ## ## ##
surf = np.load('head-surface.npz')
tris = surf['tris']
pts = surf['pts']
## MAYAVI PLOT OF HEAD SURFACE
head_col = (0.75, 0.75, 0.75)
fig = mlab.figure(size=(800, 600))
mlab.triangular_mesh(pts[:, 0], pts[:, 1], pts[:, 2], tris, color=head_col,
                     figure=fig)
mlab.view(azimuth=108, elevation=17, distance=0.55, roll=272,
          focalpoint=np.array([0.0016, -0.0011, -0.0116]))
## RGBA SCREENSHOT
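## (mlab.screenshot with mode='rgba' returns a numpy pixel array that
##  matplotlib's imshow can display directly)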
imgmap = mlab.screenshot(figure=fig, mode='rgba', antialiased=True)
mlab.close(fig)

#%% # ## ## ## ## ## ## ##
## HEAD ANGLES DIAGRAM  ##
## ## ## ## ## ## ## ## ##
fig, head_ax = plt.subplots(1, 1, figsize=(12., 9.))
plt.imshow(imgmap, zorder=4)
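## the radial lines below originate from the approximate center of the head
## in screenshot pixel coordinates (xcen presumably chosen by eye)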
xcen = 350.
ycen = head_ax.get_ybound()[0] + np.diff(head_ax.get_ybound()) / 2.
## LINES AND CATEGORY NAMES
theta = np.array([-60, -15, 15, 60]) * np.pi / 180.
radii = [350, 350, 350, 350]
voff = ['bottom', 'bottom', 'top', 'top']
col = ['#000000' if q else '#CCCCCC' for q in atn]
for th, cn, rd, vo, att, cl in zip(theta, catnames, radii, voff, atn, col):
    xx = np.array([xcen, xcen + rd * np.cos(th)])
    yy = np.array([ycen, ycen + rd * np.sin(th)])
    p = head_ax.plot(xx, yy, color='k', linewidth=6)
    q = head_ax.annotate(cn, (xx[1], yy[1]), xytext=(20, 0), ha='left',
                         va=vo, textcoords='offset points',
                         fontproperties=fp_hugebold, color=cl)
head_ax.set_axis_off()
head_ax.set_xlim(160, 820)
plt.tight_layout()
plt.savefig('head-angles.svg')

#%% # ## ## ## ## ## ##
## TRIAL TIME COURSE ##
## ## ## ## ## ## ## ##
## INITIALIZE FIGURE
fig, trial_ax = plt.subplots(1, 1, figsize=(5.5, 1.75))
xlim = (-0.5, 12.5)
## HIGHLIGHT RELEVANT STREAMS
xb = (0, 12.5)
for ix, stream in enumerate(atn):
    fc = 'white' if stream else '#E6E6E6'
    _ = trial_ax.fill_between(xb, 4.5 - ix, 3.5 - ix, facecolor=fc,
                              edgecolor='none')
## DRAW GRID
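## (moving the left spine to x=0 makes the ordinate line coincide with
##  trial onset)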
trial_ax.spines['left'].set_position('zero')
trial_ax.spines['left'].set_linewidth(0.5)
trial_ax.spines['bottom'].set_color('none')
plt.hlines(0.5, 0, 12.5, color='k', linewidth=0.5, zorder=4)
trial_ax.spines['right'].set_color('none')
trial_ax.spines['top'].set_color('none')
plt.hlines(np.arange(0.5, 5, 1), 0, 12.5, colors='#CCCCCC', linewidth=0.25,
           zorder=1)
plt.vlines(np.arange(0, 12.75, 0.25), 0.5, 4.5, colors='#CCCCCC',
           linewidth=0.25, zorder=1)
trial_ax.tick_params(axis='both', which='both', bottom=False, top=False,
                     left=False, right=False)  # tick marks off
trial_ax.tick_params(axis='x', which='both', bottom=True, top=False)
trial_ax.set_axisbelow(True)
## DRAW WORD BOXES
for word, xy, dur in zip(w, zip(x, y), d):
    if word in targs:
        c = ['#FFFFFF', grn[1]]
    elif word in foils:
        c = ['#FFFFFF', red[1]]
    else:
        c = ['#777777', '#CCCCCC']
    ## DRAW DURATION RECTANGLES
    _ = trial_ax.fill_between([xy[0], xy[0] + dur], xy[1] - 0.35,
                              xy[1] + 0.35, where=None, facecolor=c[1],
                              edgecolor='none', zorder=2)
    ## DRAW WAVEFORMS
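    ## (samples are doubled in amplitude and shifted slightly below the row
    ##  midline so each trace fits inside its duration rectangle)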
    audio_x = xy[0] + np.linspace(0, dur, audio_dict[word].size)
    audio_y = xy[1] - 0.2 + audio_dict[word] / 0.5
    _ = trial_ax.plot(audio_x, audio_y, color=(0, 0, 0, 0.5), linewidth=0.1,
                      linestyle='solid')
    ## TEXT
    _ = trial_ax.annotate(word, xy, textcoords='offset points', xytext=(1, 3),
                          va='center', ha='left', color=c[0],
                          fontproperties=fp_small, zorder=3)
## GARNISH
trial_ax.tick_params(length=0)
_ = trial_ax.set_xticks(range(13))
_ = trial_ax.set_yticks([z + 0.5 for z in range(4)], minor=True)
_ = trial_ax.set_yticks(range(1, 5, 1))
angle_text = [u'−60°', u'−15°', u'15°', u'60°']
_ = trial_ax.set_yticklabels(angle_text)
_ = trial_ax.set_xlabel('time (s)')
_ = trial_ax.xaxis.set_label_coords(6.25, -0.25, transform=trial_ax.transData)
_ = trial_ax.set_xbound(xlim)
_ = trial_ax.set_ybound(0.5, 4.5)
## INTERMEDIATE FINISH
plt.tight_layout(pad=0.75)
plt.draw()
plt.savefig('trial-timecourse.svg')

#%% # ## ## ## ## ## ## ## ## ## ## ## ## ##
## create new SVG figure; load subfigures ##
## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
fig = svgt.SVGFigure('7in', '2in')
fig1 = svgt.fromfile('trial-timecourse.svg')
fig2 = svgt.fromfile('head-angles.svg')
## GET PLOT OBJECTS
plot1 = fig1.getroot()
plot2 = fig2.getroot()
## POSITIONING
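## (moveto offsets are in SVG user units; the values and scale factors were
##  presumably tuned by eye to fit the 7in-by-2in canvas. Newer svgutils
##  releases rename the 'scale' argument to 'scale_x'.)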
plot1.moveto(130, 21, scale=1.25)
plot2.moveto(0, 34, scale=0.135)
## ADD TEXT LABELS
txt1 = svgt.TextElement(4, 15, 'A)', size=14, font='Source Code Pro')
txt2 = svgt.TextElement(140, 15, 'B)', size=14, font='Source Code Pro')
## APPEND PLOTS AND LABELS TO NEW FIGURE
fig.append([plot2, plot1, txt1, txt2])
fig.save('trial-diagram.svg')
## CONVERT SVG TO PDF AND CLEAN UP INTERMEDIATE FILES
## (the -f/-A flags are for Inkscape 0.x; with Inkscape >= 1.0 use
##  'inkscape trial-diagram.svg --export-filename=trial-diagram.pdf')
call(['inkscape', '-f', 'trial-diagram.svg', '-A', 'trial-diagram.pdf'])
os.remove('head-angles.svg')
os.remove('trial-timecourse.svg')
os.remove('trial-diagram.svg')
