Hi!
I want to implement a diagnostic assessment as dynamic, adaptive questioning, and I'm now looking for tools, frameworks, and approaches for the implementation.
Concept overview:
- Question pool: I have a set of N questions, each assigned a specific difficulty level.
- Adaptive logic: The test begins with a question of medium difficulty. After each response, the system updates its estimate of the student's proficiency and selects the next question so that the student has roughly a 50% probability of answering it correctly. If the student answers correctly, the subsequent question becomes more challenging; if not, it becomes easier (a rough sketch of this selection logic follows below this list).
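
To make the selection rule concrete, here is a minimal sketch of one common way to implement it, assuming a Rasch-style (one-parameter logistic) model where question difficulty and student ability live on the same scale, with an Elo-like update of the ability estimate after each answer. The names `Question`, `p_correct`, `update_ability`, `pick_next_question`, and the learning rate of 0.5 are illustrative assumptions, not part of any specific library:

```python
import math
from dataclasses import dataclass

@dataclass
class Question:
    qid: int
    difficulty: float  # on the same logit scale as the ability estimate

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability that the student answers this question correctly."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability: float, difficulty: float, correct: bool, lr: float = 0.5) -> float:
    """Elo-like update: move the estimate toward the observed outcome."""
    expected = p_correct(ability, difficulty)
    return ability + lr * ((1.0 if correct else 0.0) - expected)

def pick_next_question(ability: float, remaining: list[Question]) -> Question:
    """Pick the unanswered question whose predicted success probability is closest to 50%."""
    return min(remaining, key=lambda q: abs(p_correct(ability, q.difficulty) - 0.5))
```

Because the next question is always the one whose predicted success probability is closest to 50%, a correct answer (which raises the ability estimate) automatically leads to a harder question and an incorrect answer to an easier one, which matches the behaviour described above.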
Key functionality:
- All questions and their difficulty levels are stored in an external database.
- The form dynamically generates the next question immediately after the current one is answered, based on the updated student level (a minimal serving sketch follows below this list).
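
For the serving side, one possible shape for the MVP is a small HTTP endpoint that receives the latest answer (for example, from a Typeform webhook or your own front end), updates the ability estimate, queries the database for unanswered questions, and returns the id of the next one. This is only a sketch under assumptions: the `questions` table, the `/next-question` route, the JSON fields, and the `adaptive` helper module (the functions from the sketch above saved as `adaptive.py`) are all hypothetical, and the payload shown does not reflect Typeform's actual webhook schema:

```python
import sqlite3

from flask import Flask, jsonify, request

from adaptive import p_correct, update_ability  # hypothetical module: helpers from the sketch above

app = Flask(__name__)
DB_PATH = "questions.db"  # hypothetical SQLite file with a questions(id, difficulty, ...) table

def load_remaining_questions(answered_ids):
    """Fetch (id, difficulty) pairs for questions not yet shown to the student."""
    with sqlite3.connect(DB_PATH) as conn:
        if answered_ids:
            placeholders = ",".join("?" * len(answered_ids))
            query = f"SELECT id, difficulty FROM questions WHERE id NOT IN ({placeholders})"
            return conn.execute(query, answered_ids).fetchall()
        return conn.execute("SELECT id, difficulty FROM questions").fetchall()

@app.post("/next-question")
def next_question():
    """Update the ability estimate from the latest answer and return the next question id."""
    payload = request.get_json()
    ability = payload["ability"]            # current estimate, kept by the caller or a session store
    answered_ids = payload["answered_ids"]  # ids of questions already shown
    correct = payload["correct"]            # whether the last answer was correct
    last_difficulty = payload["last_difficulty"]

    ability = update_ability(ability, last_difficulty, correct)
    remaining = load_remaining_questions(answered_ids)
    if not remaining:
        return jsonify({"done": True, "ability": ability})

    # Choose the question whose predicted success probability is closest to 50%.
    qid, difficulty = min(remaining, key=lambda row: abs(p_correct(ability, row[1]) - 0.5))
    return jsonify({"done": False, "ability": ability,
                    "next_question_id": qid, "next_difficulty": difficulty})

if __name__ == "__main__":
    app.run(port=5000)
```

Whether the ability estimate and the list of already-asked question ids travel in the request payload (as in this sketch) or live in a server-side session store is a design choice worth settling early, since it affects how the MVP scales later.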
Goal:
I have programming experience and aim to design a solution that functions well as an MVP and is scalable for future enhancements.
Question:
I'm planning to integrate Typeform with an external Python script to manage the adaptive logic. What tools or frameworks would you recommend for this? Are there any effective alternative approaches? If anyone has experience with similar projects or suggestions on best practices, I would really appreciate the feedback!
Thank you in advance for your help!