February 1, 2008 FIRST Assessment Database meeting
In attendance: Gregor Novak (Air Force Academy), Diane Ebert-May, Mark Urban-Lurain, Jenni Momsen
Novak's goals: JiTT (Just-in-Time Teaching)
They are using Oncourse from Indiana University for students to upload their responses. Most faculty then manually extract material from the responses to use as the basis of class activities.
Goal: Gain insight from Gregor on our database. What are his ideas, comments, and suggestions? Mark explained the purpose of our tool to Gregor. Gregor's database holds some student responses. Our database is broader than that: it would also include some summative student data. Gregor is certainly very interested in the focus of our database.
On Gregor's hosting site, you can enter some keywords and get limited response information. His database has no way to query by types of students; it holds only course information and student responses. He would like to add student demographic data alongside the responses.
Gregor had an idea for his database (which wasn’t realized): Constructing labels for teaching techniques in the classroom that would go with specific results from pre-instruction assessments. Gregor would have metadata associated with all the pre-classroom activities (for these students, I did this activity because their pre-assessment said this, see how the post-assessments compare). Could give different activities to different students (based on research).
Would like a way to compare similar students (for example, a student who dropped out, joined the military, then returned).
From Gregor: Could there be an identifier on student responses so we could track them? Would like to anonymize students, but still keep all the demographic and metadata.
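Gregor's request (a stable identifier that tracks a student's responses over time without revealing who the student is) is commonly handled with a keyed hash. The sketch below is only illustrative, not part of either database; the salt, ID format, and field names are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be stored securely,
# never alongside the de-identified data.
SALT = b"project-secret-salt"

def pseudonymize(student_id: str) -> str:
    """Map a real student ID to a stable anonymous identifier.

    The same input always yields the same output, so responses can be
    tracked across questions and semesters, but the mapping cannot be
    reversed without the key.
    """
    return hmac.new(SALT, student_id.encode("utf-8"), hashlib.sha256).hexdigest()[:12]

# A de-identified record keeps demographics and metadata but not the real ID.
record = {
    "anon_id": pseudonymize("A1234567"),
    "demographics": {"class_year": "sophomore", "major": "biology"},
    "response": "Photosynthesis converts light energy ...",
}

# The identifier is stable: the same student always hashes to the same anon_id.
assert pseudonymize("A1234567") == record["anon_id"]
```

This keeps the demographic and metadata fields intact while the identifier alone cannot be traced back to a student, which is the property Gregor describes.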
There is some reasonable overlap in the goals between our two databases.
IRB/FERPA: not a problem for Gregor so far, since they haven't gone 'deep enough.' The database collects course name, course data, instructor, and student responses to questions (students are de-identified, so they can track a student's responses across questions). No metadata, which is a definite flaw.
Gregor asked: how are you going to get faculty to upload data? If you build it, will they come? That's something we're thinking about: what's the hook for faculty? It's easiest to get material if you give them a Word template (going to a website, even just to upload things, is a barrier). Gregor uses AppleScript to sort and massage the data from the uploaded Word files.
Gregor could provide sample data and assessments.
What do we want from Gregor? We want to make sure we are collecting all the sorts of data he would be interested in storing in the database.
How could I use selected responses to design a JiTT experience? Using response data for customized instruction.
Funding: Gregor's grant ends this March. He has private funding from an alumnus to continue this work, including funding for a programmer and for customizing the database and hosting it at the Air Force Academy. Any innovations at the local level are ported to the national level (this is what the funding supports). He has put in a new NSF grant to look at whether there is deep learning with JiTT, targeting visualization: how students visualize and interpret graphs in physics and statistics, collaborating with economics, physics, and statistics, and looking at pedagogic techniques that impact learning. He has some papers that are being put up on his wiki.
Gregor is going to send us a wiki link (which we need to link to from our own Wiki), the text of his recent REESE grant, the text of his JiTT lesson.
His metaphor is mentoring on a large scale: customize instruction for the set of students.
Many of his questions surprise other faculty ("Why would you ask that?"). They think the questions are vague, but they are vague because they are probing questions.
Mike Gerden at UM has a talk on the upside-down triangle.
Gregor suggested going for an NSF Sugar grant??
This material is based upon work supported by the National Science Foundation under award 0618501. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).