Supplementary Coding Blogpost
Dimension 2: Touch
Dimension 3: Gestures
Touch and gestures are coded in ELAN in several steps, from identification to annotation and labelling. This supplementary coding blogpost outlines these coding steps and should be used alongside the multimodal parent and infant behaviour coding manual (Siew et al., 2021), as well as the coded video examples (and only after you feel competent with ELAN).
The coding steps outlined in this supplementary coding blogpost include:
1. Coding file set-up: annotation mode
2. Identifying touch or gestures
3. Coding touch or gestures
3.1. Evaluating interaction windows
3.2. Labelling interaction windows
4. Nested coding
Note that gestures and touch follow the same coding format; this supplementary coding blogpost therefore does not distinguish between the coding formats for these two behaviours. However, the ELAN templates used to code gestures and touch do differ in their label menus (see section 3.2), as sketched below.
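To see how the two templates' label menus differ, you can inspect them directly. Below is a minimal Python sketch, assuming the templates store each label menu as a CONTROLLED_VOCABULARY block (as the EAF/ETF XML schema does); the template file names are hypothetical.

    # A minimal sketch for listing the label menus stored in ELAN templates.
    # Template file names are hypothetical; element names follow the EAF/ETF
    # XML schema, which stores each label menu as a CONTROLLED_VOCABULARY block.
    import xml.etree.ElementTree as ET

    def list_labels(template_path):
        """Print every controlled-vocabulary label found in a template."""
        root = ET.parse(template_path).getroot()
        for cv in root.iter("CONTROLLED_VOCABULARY"):
            print(f"Vocabulary: {cv.get('CV_ID')}")
            # Older templates store labels as CV_ENTRY text; newer (EAF 2.8+)
            # templates wrap them in CV_ENTRY_ML/CVE_VALUE elements.
            for entry in cv.iter("CV_ENTRY"):
                print(" ", entry.text)
            for value in cv.iter("CVE_VALUE"):
                print(" ", value.text)

    list_labels("touch_template.etf")    # hypothetical template file
    list_labels("gesture_template.etf")  # hypothetical template file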
1. Coding file set-up: annotation mode
First, the coding file should be opened in the correct coding mode, i.e., annotation mode. This is the mode in which the majority of coding takes place (see Figure 1, below).
Figure 1: Image depicting the annotation mode
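As an optional check that the coding file contains the expected tiers before you start annotating, here is a minimal sketch using the third-party pympi-ling Python package (pip install pympi-ling); the coding file name is hypothetical.

    # A minimal sketch, assuming the third-party pympi-ling package;
    # the coding file name is hypothetical.
    import pympi

    eaf = pympi.Elan.Eaf("dyad_01_coding.eaf")  # hypothetical coding file
    print(eaf.get_tier_names())                 # should list tier 1, tier 2, etc.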
2. Identifying touch or gestures
Within annotation mode, touch or gestures are identified by selecting and playing short intervals at a time (e.g., 10 seconds). This is important because several gestures can occur within a few seconds of the interaction; focusing on short intervals therefore increases the detection rate.
Starting from the beginning of the video, select a 10-second interval of the interaction by dragging the media crosshair from left to right, then play that interval. Next, visually identify any touch bouts or gestures that may be present. Typically, only a few target behaviours will be present in a short interval, so take a mental note of approximately where they occur, or write down the time. If no touch or gesture is identified, select the next interval (a sketch of this interval-stepping pass follows Figure 2).
Figure 2: Image depicting the crosshair area and selection media controls.
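If you prefer to prepare the interval boundaries in advance, the sketch below generates the 10-second intervals for a video of known length; the duration is hypothetical, and the identification itself still happens by eye in ELAN.

    # A minimal sketch, assuming a hypothetical 125-second video; it prints
    # the 10-second interval boundaries recommended above.
    INTERVAL_MS = 10_000  # 10-second intervals

    def intervals(duration_ms, step_ms=INTERVAL_MS):
        """Yield (start, end) boundaries covering the whole interaction."""
        for start in range(0, duration_ms, step_ms):
            yield start, min(start + step_ms, duration_ms)

    for start, end in intervals(duration_ms=125_000):  # hypothetical duration
        print(f"Play {start // 1000}-{end // 1000} s and scan for touch or gestures")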
3. Coding touch or gestures
Once a touch bout or gesture has been visually identified in the interaction, it is subsequently labelled according to specific touch or gesture types. This happens within coding windows (see Figure 3). Each coding window represents a 1-second block of the interaction. Thus, each window surrounding the touch bout or gesture needs to be evaluated individually to determine whether a touch bout or gesture is present in that window, and then labelled. These steps are outlined below, with a sketch of how a bout maps onto these windows.
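To make the window logic concrete, here is a short sketch showing how a visually identified bout maps onto the surrounding 1-second coding windows; the bout times are hypothetical.

    # A sketch of how a visually identified bout maps onto 1-second coding
    # windows; the bout times are hypothetical. Each window is then evaluated
    # and labelled individually, as described in section 3.2.
    WINDOW_MS = 1_000  # each coding window is a 1-second block

    def windows_for_bout(bout_start_ms, bout_end_ms):
        """Return the 1-second windows overlapping a noted touch bout or gesture."""
        first = (bout_start_ms // WINDOW_MS) * WINDOW_MS
        return [(s, s + WINDOW_MS) for s in range(first, bout_end_ms, WINDOW_MS)]

    # e.g. a point gesture noted at roughly 12.4-14.1 s:
    for start, end in windows_for_bout(12_400, 14_100):
        print(f"Evaluate window {start}-{end} ms")

Flooring the bout onset to the nearest whole second ensures the first partially covered window is also evaluated.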
3.2. Labelling 1-second interaction windows
Once an interaction window has been evaluated and the presence or absence of a touch bout or gesture has been established, the interaction window is subsequently labelled.
To do this, left-click on the interaction window you wish to label (under tier 1, unless the coding is nested – see section 4). This will activate the window by highlighting the interval in blue. Next, right-click on the activated window to display the ‘annotation menu’. Click on ‘new annotation here’ and then select the correct touch or gesture label (see Figure 4, below) [shortcut key: ctrl + alt + m]. Continue to the next interaction window [shortcut key: alt + left/right], until the entire interaction video has been assessed for the presence of touch or gestures and labelled accordingly.
Figure 4: Image depicting the annotation (L) and labelling menu (M) and coded example (R)
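Should you ever need to reproduce this labelling step in batch, e.g., rebuilding annotations from a spreadsheet of evaluated windows, a hedged sketch with pympi-ling follows; the file name, tier name, and labels are hypothetical and must match your template's label menu.

    # A hedged sketch, assuming the third-party pympi-ling package; the file
    # name, tier name, and labels are hypothetical, and the tier is assumed
    # to exist in the coding file.
    import pympi

    eaf = pympi.Elan.Eaf("dyad_01_coding.eaf")           # hypothetical coding file
    # Each tuple: (window start in ms, window end in ms, label from the menu).
    evaluated = [(12_000, 13_000, "point"), (13_000, 14_000, "point")]
    for start, end, label in evaluated:
        eaf.add_annotation("tier 1", start, end, label)  # times are in milliseconds
    eaf.to_file("dyad_01_coding_labelled.eaf")           # write a labelled copy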
4. Nested coding
Nested coding refers to instances in which two gesture or touch types, produced by two different hands, overlap in time, e.g., a point (right hand) and a representational gesture (left hand).
To code instances of nested coding, the first touch or gesture present is coded under tier 1, whereas the second touch or gesture present is coded under tier 2 (see Figure 5, below).
Figure 5: Image depicting nested coding
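The same batch approach extends to nested coding, with the overlapping behaviours simply placed on separate tiers; again, all file, tier, and label names in this sketch are hypothetical.

    # A minimal sketch of nested coding with pympi-ling; file, tier, and label
    # names are hypothetical, and both tiers are assumed to exist in the file.
    import pympi

    eaf = pympi.Elan.Eaf("dyad_01_coding.eaf")  # hypothetical coding file
    # The first gesture identified (right-hand point) goes under tier 1 ...
    eaf.add_annotation("tier 1", 20_000, 22_000, "point")
    # ... and the overlapping second gesture (left-hand representational)
    # goes under tier 2.
    eaf.add_annotation("tier 2", 21_000, 23_000, "representational")
    eaf.to_file("dyad_01_coding_nested.eaf")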