NSF SGER Project
Affinity Learning Authoring Tools
NSF Intellectual Merit Criteria
The project will further develop and apply
interactive, valid, and reliable software tools that assess student progress in
STEM education and provide meaningful feedback, enabling instructors to better
diagnose and resolve learning difficulties.
NSF Broader Impact Criteria
The project will contribute to the literature on
the application of innovative, software-based assessment tools and practices
and will share results and products with other researchers and STEM educators.
Project Goal
1. To continue exploratory research on Affinity as a generalized research,
learning, and assessment software tool for STEM education.
Project Objectives
1. To design and pilot-test an authoring interface that provides a graphical
representation of lesson components and allows abstraction, or information
hiding, to mitigate the complexity inherent in the Affinity Learning design
process.
2. To overcome intrinsic challenges in lesson decomposition by investigating
new conceptual approaches, such as basing decomposition on the identification
of student misconceptions.
Challenges to be Overcome
The complexity of authoring continues to be a
challenge as Affinity is applied in additional teaching and learning
environments. Establishing the structure of Affinity lessons takes two major
steps: Decomposition and Interconnection. The instructor must first decompose a
lesson into the set of learning nodes. At the same time, he or she must anticipate
potential misconceptions and add nodes to respond to the misconceptions. The
initial decomposition has proven a significant challenge for new authors. It is
very difficult for even experienced STEM educators to decompose their lessons
into the small “learning nodes” that drive Affinity.
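The decomposition step can be sketched as a simple data structure. The sketch below is only an illustration of the idea; the class and field names are hypothetical and are not Affinity's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class LearningNode:
    name: str    # the single concept or skill this node covers
    prompt: str  # what the student is asked to do
    # anticipated misconception label -> name of the remedial node that responds to it
    misconception_responses: dict = field(default_factory=dict)

@dataclass
class Lesson:
    title: str
    nodes: list = field(default_factory=list)

    def add(self, node):
        self.nodes.append(node)

# Decomposing a small lesson: each node covers one narrow step, and an
# anticipated misconception routes to a dedicated remedial node.
lesson = Lesson("Adding fractions")
lesson.add(LearningNode(
    "common-denominator",
    "Rewrite 1/2 + 1/3 with a common denominator",
    {"added-denominators": "remediate-denominator-addition"},
))
lesson.add(LearningNode("add-numerators", "Add the numerators of 3/6 + 2/6"))
```

The difficulty the paragraph describes is precisely this step: carving a full lesson into nodes this small, with a remedial response attached wherever a misconception is anticipated.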
Through experience, teachers' skills have become automatic and tacit.
Significant effort must be expended to capture the steps they take, how they
recognize misconceptions, and how they apply prescriptions.
The current authoring approach involves creating the course
in the classical order of presentation. Following the research of Minstrell, it
has been suggested that the process be inverted: the author first enumerates
misconceptions and then constructs learning nodes and structure to avoid them
(Minstrell, 1989; Kraus and Minstrell, 2003). Misconceptions could be identified
through instructor experience, focus groups, or experimentation with students.
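The inverted, misconception-first workflow can be sketched as follows; the function and labels below are hypothetical illustrations, not part of Affinity:

```python
def nodes_from_misconceptions(misconceptions):
    """For each enumerated misconception, derive a probe node that detects it
    and a remedial node that addresses it, inverting the classical
    presentation-first authoring order."""
    nodes = []
    for label in misconceptions:
        nodes.append({"name": f"probe-{label}", "kind": "probe", "targets": label})
        nodes.append({"name": f"remediate-{label}", "kind": "remedial", "targets": label})
    return nodes

# Misconceptions gathered from instructor experience, focus groups, or
# student experiments seed the lesson structure.
misconceptions = ["added-denominators", "ignored-sign", "dropped-exponent"]
nodes = nodes_from_misconceptions(misconceptions)
```

The point of the inversion is that the enumerated misconceptions, rather than the order of presentation, determine which nodes must exist before the lesson is assembled.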
The second step, establishing an interconnected
network for the nodes, also presents challenges for all but simple subjects. As
the breadth and difficulty of the topic covered increases, the network
interconnecting the learning nodes becomes increasingly complex. Keeping track
of all of the nodes, and the conditions that lead to their selection, has been
shown to be a formidable task. Instructors have difficulty maintaining a mental
image of the network of nodes, the purposes of interconnections, and the network
structure in general. An authoring interface that includes a graphical method
for representing and manipulating nodes and their interconnections must be
created for Affinity. A method of hiding information is also needed for the
system.
Hiding information is a method of abstraction: large assemblies of nodes can be
organized into groups that can be manipulated, replicated, and reused en masse.
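The two facilities described above can be sketched together: a network whose edges carry the conditions that select the next node, and node groups that hide their internal structure so assemblies can be reused as a unit. All names here are hypothetical illustrations, not Affinity's design:

```python
class NodeNetwork:
    """Interconnection: each node's outgoing edges pair a condition on the
    student's state with the node to visit when that condition holds."""
    def __init__(self):
        self.edges = {}  # node name -> list of (condition, next_node)

    def connect(self, src, condition, dst):
        self.edges.setdefault(src, []).append((condition, dst))

    def next_node(self, current, student_state):
        # Select the first outgoing edge whose condition holds.
        for condition, dst in self.edges.get(current, []):
            if condition(student_state):
                return dst
        return None

class NodeGroup:
    """Information hiding: a named group exposes only entry and exit points,
    hiding its internal nodes so the assembly can be manipulated, replicated,
    and reused as one unit."""
    def __init__(self, name, entry, exit_, internal_nodes):
        self.name = name
        self.entry = entry
        self.exit = exit_
        self._internal = internal_nodes  # hidden from the authoring view

net = NodeNetwork()
# A correct answer advances; the classic wrong answer 2/5 (adding numerators
# and denominators separately) routes to a remedial node.
net.connect("probe-fractions", lambda s: s["answer"] == "5/6", "next-topic")
net.connect("probe-fractions", lambda s: s["answer"] == "2/5",
            "remediate-denominator-addition")
```

Even this toy network hints at the bookkeeping burden: every node multiplies the conditions an author must track, which is what motivates both the graphical interface and the grouping mechanism.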
Current Status
Thus far, we have built two authoring tools: (1) the
activity builder graphical tool and (2) the activity builder web-based tool. We
are currently shifting our development focus to the activity builder graphical
tool, and it will be our primary authoring tool henceforth. Interested readers
may refer to the user manual, and please contact us if you would like a copy of
the software.
We have also conducted a classroom test in February
2006 and are now analyzing the results.
The IRB-approved test had 30 student subjects. Fifteen of the students used the
activity builder web-based tool to build concept hierarchies on polynomials,
while the other 15 used the graphical tool. Both groups
took pre- and post-tests. Our objectives were to (1) measure the difference, if
any, between the two groups in post-test performance, that is, whether the use
of different tools affected their understanding of concept hierarchies in
another topic, and (2) measure the difference, if any, between the two groups
in the quality of the concept hierarchies they built.
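One conventional way to carry out the between-group comparison of post-test scores is a two-sample t statistic. The sketch below computes Welch's t (unequal variances) with only the standard library; the score lists are fabricated placeholders for illustration, not the February 2006 data:

```python
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Illustrative post-test scores for two 15-student groups (placeholder
# numbers only, not the study's actual results).
web_tool  = [72, 68, 75, 80, 66, 71, 77, 69, 74, 70, 73, 78, 65, 76, 72]
graphical = [78, 74, 81, 70, 79, 83, 75, 77, 72, 80, 76, 79, 74, 82, 78]
t = welch_t(graphical, web_tool)
```

With samples this small, the statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom to judge significance; the study's own analysis may of course use a different method.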