Assessment Database Meeting
Purdue University
December 6, 2007
Notes taken by Mark Urban-Lurain and Jenni Momsen
In attendance: Matt Ohland & Russell Long, Diane Ebert-May, Mark Urban-Lurain, Jenni Momsen
- Review of what Matt and Russell have been doing.
- Engineering programs are looking to improve their program scores.
- Matt says it is not necessarily the case that an IRB is needed locally; some of their participating institutions delegated to Purdue’s IRB.
- They now have 9 participating institutions. There are issues with limiting queries.
- The University of South Carolina had institutional assessments stored in a data warehouse.
- Staggered levels of data access: public views, while institutions have complete access to their own data. MIDFIELD runs reports on the data but has no reason to report at the individual level. They look at interactions (e.g., a single female vs. more females in a course). A view-based sketch of this staggered access appears after this list.
- Goal of MIDFIELD: reports on the participating institutions. Long-term service mode: provide benchmarking data to institutional members. Peer studies are very informative.
- MIDFIELD reports aggregate data, which differs from our goal of providing data at the individual level.
- The 1996 data from the Foundation Coalition included individual student data but not class-level data: GPA and total credits aggregated to the term level.
- Participation is at the institutional level, not the faculty level. They get high-quality, vetted data from registrar sources (fewer ‘touches’). Data come not just from engineering but from all students (undergraduates only) who attended since 1987. Batch data uploads happen on a semester basis.
- They have defined formats that they prefer, but get whatever the school sends. Each school has unique terms/codes. Each institution has established formats and frameworks; some are flat, some are relational.
- Demographic data are fixed when a student matriculates into an institution.
- Data validation and self-consistency are important (e.g., checking reported GPA against individual class grades; a sketch of such a check follows this list). ASEE has data-mining tools that they use to validate the data.
- Derived data: they calculate and store some values, and will compute some dynamically.
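To illustrate the staggered-access idea noted above, here is one way it could be expressed as database views, sketched with SQLite for illustration only; the table name, columns, and institution code are hypothetical and are not MIDFIELD's actual schema.

```python
# Sketch: staggered access via views (hypothetical table/column names).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (institution TEXT, student_id TEXT, gpa REAL)")

# Public view: aggregate data only, no individual records exposed.
conn.execute("""
    CREATE VIEW public_summary AS
    SELECT institution, COUNT(*) AS n_students, AVG(gpa) AS mean_gpa
    FROM records GROUP BY institution
""")

# Institution-scoped view: one institution sees only its own rows.
conn.execute("""
    CREATE VIEW purdue_records AS
    SELECT * FROM records WHERE institution = 'PURDUE'
""")
```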
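A minimal sketch of the kind of self-consistency check mentioned above: recomputing a term GPA from course-level records and comparing it to the registrar-reported value. The field names, grade-point scale, and tolerance are assumptions for illustration, not MIDFIELD's actual checks.

```python
# Sketch of a term-GPA self-consistency check (assumed field names, 4.0 scale).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def recompute_term_gpa(course_records):
    """Recompute a term GPA from individual course grades and credits."""
    points = sum(GRADE_POINTS[r["grade"]] * r["credits"] for r in course_records)
    credits = sum(r["credits"] for r in course_records)
    return points / credits if credits else 0.0

def check_gpa_consistency(reported_gpa, course_records, tolerance=0.01):
    """Return True if the reported GPA agrees with the recomputed value."""
    return abs(recompute_term_gpa(course_records) - reported_gpa) <= tolerance

# Example: one student-term with two courses.
records = [{"grade": "A", "credits": 3}, {"grade": "B", "credits": 4}]
print(check_gpa_consistency(3.43, records))  # True: (4*3 + 3*4) / 7 ≈ 3.43
```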
- Ideas and suggestions for the FIRST Assessment Database
- Suggestion: generate snapshots periodically so that reports run against frozen data (see the snapshot sketch at the end of this section).
- Keeping bad (assessment) items would be useful as long as they are flagged so people can recognize them. What about traffic-light indicators of question value/quality? (A flagging sketch appears at the end of this section.)
- Search for information on Pellegrino’s work on AP content specifications (the AP program is in the process of redoing course content and exams; aligning introductory college science courses with AP goals and assessments could be important). Matt can send the slides from Pellegrino’s recent talk at Purdue.
- Everyone who touches the database has to be on the IRB protocol. This could be an issue if we have distributed data. Everyone may need IRB training and FERPA training. IRB approval is needed for anyone downloading data.
- Need a plan for data protection, not only on our own server but on distributed servers.
- How people will use the data will influence the solution to the IRB challenge.
- When are snapshots needed? Only at the end of the semester?
- Higher-level administrators are more likely to get this than faculty.
- Think about output – focus on the use cases.
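One possible way to implement the periodic-snapshot suggestion above, sketched with SQLite for illustration; the table and column names are hypothetical and not part of any agreed database design.

```python
# Sketch: freeze a dated snapshot table so reports run against stable data.
# Table and column names are hypothetical.
import sqlite3
from datetime import date

def create_snapshot(conn: sqlite3.Connection, term: str) -> str:
    """Copy the live responses table into a frozen snapshot table and return its name."""
    snapshot_name = f"responses_snapshot_{term}_{date.today():%Y%m%d}"
    conn.execute(f"CREATE TABLE {snapshot_name} AS SELECT * FROM responses")
    conn.commit()
    return snapshot_name

# Example: reports would query the returned snapshot table instead of `responses`.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (student_id TEXT, item_id TEXT, score REAL)")
print(create_snapshot(conn, "fall2007"))
```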
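A sketch of the traffic-light idea for item quality raised above: a simple status flag on each assessment item. The classification statistic and thresholds here are assumptions chosen for illustration, not an agreed rule.

```python
# Sketch: traffic-light quality flag for assessment items (thresholds assumed).
from enum import Enum

class ItemQuality(Enum):
    GREEN = "good"      # item performs well; safe to reuse
    YELLOW = "caution"  # item has known weaknesses; use with care
    RED = "poor"        # item kept for the record but flagged as bad

def classify_item(discrimination: float) -> ItemQuality:
    """Assign a traffic-light flag from an item-discrimination statistic (assumed cutoffs)."""
    if discrimination >= 0.30:
        return ItemQuality.GREEN
    if discrimination >= 0.15:
        return ItemQuality.YELLOW
    return ItemQuality.RED

print(classify_item(0.12))  # ItemQuality.RED
```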
- For the future:
- Invite Russell to the May meeting at KBS.
- We should meet with Gregor Novak, perhaps via Marratech.