Purdue Meeting



Assessment Database Meeting, Purdue University, December 6, 2007. Notes taken by Mark Urban-Lurain and Jenni Momsen.

In attendance: Matt Ohland & Russell Long, Diane Ebert-May, Mark Urban-Lurain, Jenni Momsen

  1. Review of what Matt and Russell have been doing.
    1. Engineering programs are looking to advance their program scores.
    2. Matt says it is not necessarily the case that IRB approval is needed locally. Some of their participating institutions delegated review to Purdue’s IRB.
    3. Now have 9 institutions. Have issues with limiting queries.
    4. U of S. Carolina had institutional assessments stored in warehouse.
    5. Staggered levels of data access: public views, and institutions have complete access to their own data. MIDFIELD runs reports on the data but has no reason to report at the individual level. They look at interactions (e.g., a single female vs. more females in courses).
    6. Goal of MIDFIELD: reports on the participating institutions. Long-term service mode: benchmarking data to institutional members. Get peer studies that are very informative.
    7. MIDFIELD reports aggregate data, which differs from our goal of providing data at the individual level.
    8. 1996 data from the Foundation Coalition included individual student data but not at the class level: GPA and total credits, aggregated to the term level.
    9. Participation is at the institutional level, not the faculty level. They get high-quality, vetted data from registrar sources (fewer ‘touches’). Data come not just from engineering but from all students (undergraduates only) who attended since 1987. Batch data uploads happen on a semester basis.
    10. They have defined formats that they prefer, but they get whatever the school sends. Each school has unique terms/codes. Each institution has established formats and frameworks; some are flat, some are relational.
    11. Demographic data are fixed when student matriculates into institution.
    12. Data validation and self-consistency are important (e.g., check GPA against each class score; a sketch of such a check appears after these notes). ASEE has data-mining tools that they use to validate the data.
    13. Derived data: they calculate and store some, and will compute some dynamically.
  2. Ideas and suggestions for the FIRST Assessment Database
    1. Matt suggests generating snapshots periodically so that reports run against frozen data (see the snapshot sketch after these notes).
    2. Keeping bad (assessment) items would be useful as long as we flag them so people can recognize them. What about traffic-light indicators of question value/goodness?
    3. Search for info from Pellegrino on AP content specifications (the AP is in the process of redoing course content and exams; aligning introductory college science courses with AP goals and assessments could be important). Matt can send the slides from Pellegrino’s recent talk at Purdue.
    4. Everyone who touches the database has to be on the IRB. This could be an issue if we have distributed data. Everyone may need IRB training and FERPA training. IRB approval is needed for anyone downloading data.
    5. We need a plan for data protection, not only on our own server but also on distributed servers.
    6. How people will use the data will influence the solution to the IRB challenge.
    7. When are snapshots needed? Only at the end of the semester?
    8. Higher-level administrators are more likely to get this than faculty.
    9. Think about output – focus on the use cases.
  3. For the future:
    1. Invite Russell to the May meeting at KBS.
    2. We should meet with Gregor Novak, perhaps via Marratech.
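
A minimal sketch of the kind of self-consistency check mentioned in item 1.12, assuming hypothetical per-course records (student_id, term, credits, grade_points) and a reported term GPA from the registrar batch upload; this is an illustration only, not part of the MIDFIELD implementation:

def check_term_gpa(course_rows, reported_gpas, tolerance=0.01):
    """Recompute each student's term GPA from per-course records and flag
    rows that disagree with the GPA reported in the batch upload.

    course_rows: list of dicts with student_id, term, credits, grade_points
    reported_gpas: dict mapping (student_id, term) -> reported term GPA
    Returns a list of (student_id, term, computed, reported) mismatches."""
    totals = {}
    for row in course_rows:
        key = (row["student_id"], row["term"])
        credits, points = totals.get(key, (0.0, 0.0))
        totals[key] = (credits + row["credits"],
                       points + row["credits"] * row["grade_points"])

    mismatches = []
    for (student, term), (credits, points) in totals.items():
        if credits == 0:
            continue
        computed = points / credits
        reported = reported_gpas.get((student, term))
        if reported is not None and abs(computed - reported) > tolerance:
            mismatches.append((student, term, round(computed, 3), reported))
    return mismatches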
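
The snapshot idea in item 2.1 could look roughly like the following, assuming a SQLite store and a hypothetical assessment_results table; the point is only that reports query a frozen copy while the live tables keep changing:

import sqlite3
from datetime import date

def snapshot_table(conn, table, label=None):
    """Copy `table` into a frozen snapshot named <table>_snap_<label>,
    so reports run against data frozen at that point in time."""
    label = label or date.today().strftime("%Y%m%d")
    snap = f"{table}_snap_{label}"
    conn.execute(f"DROP TABLE IF EXISTS {snap}")
    conn.execute(f"CREATE TABLE {snap} AS SELECT * FROM {table}")
    conn.commit()
    return snap

# Example usage against an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assessment_results (student_id TEXT, item_id TEXT, score REAL)")
conn.execute("INSERT INTO assessment_results VALUES ('s1', 'q1', 0.8)")
frozen = snapshot_table(conn, "assessment_results", "fall2007")
print(conn.execute(f"SELECT COUNT(*) FROM {frozen}").fetchone()[0])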


