Quill.org joins $2.8 million AI Literacy Assessment Initiative

Quill.org, a non-profit organization whose AI-powered writing tools are used by millions of students, is partnering with Leanlab Education and Learning Commons on a $2.8 million initiative to assess whether AI-generated reading and writing content meets research-backed classroom standards.

The initiative follows comments shared on LinkedIn by Quill's CEO, who noted the inconsistent quality of AI tools currently being sold to schools.

Quill.org provides free literacy and writing tools that educators use to support student practice and feedback at scale. Leanlab Education works with schools to test educational technology in real classroom environments. Meanwhile, Learning Commons is an organization supported by the Chan Zuckerberg Initiative that is building an open infrastructure to connect learning science and AI development.

CEO says quality varies across AI classroom tools

In a post shared on LinkedIn, Peter Gault, executive director and co-founder of Quill.org, positioned the partnership as a response to the persistent gap between what AI tools promise and what they actually deliver, writing: “Quill.org partners with Leanlab Education and CZI’s Learning Commons as part of a $2.8 million initiative to improve AI-powered learning tools.”

He added that while AI has the potential to reduce workload and increase the frequency of feedback, current tools vary widely in rigor and usefulness for teachers and students. “AI-powered tools can support teachers in providing more frequent and detailed feedback, but only if those tools are rigorously evaluated against high-quality standards.”

Focus on public datasets and evaluation infrastructure

The funding will support three related projects focused on building shared assessment tools for AI literacy products. According to Learning Commons, this work will focus on creating public datasets, assessment protocols, and raters that measure AI-generated feedback and reading material against trusted educational rubrics.

“Teachers need reliable classroom tools that deliver high-quality, rigorous content,” said Sandra Liu Fan, President of Learning Commons. “The tools must deliver content at the appropriate grade level, tailored to each student's needs and based on solid learning science that supports student growth.”

This grant aims to address long-standing challenges in the classroom, such as the time required to provide detailed writing feedback and the difficulty of matching reading materials to students' developmental levels. Stakeholders argue that AI tools often produce generic or repetitive output that does not meet instructional needs.

Center for Classroom Testing and Literacy Standards

As part of this effort, Quill and Leanlab Education will develop research protocols and a large open dataset of anonymized student writing, annotated by researchers to reflect effective feedback practices. This dataset is designed to help developers test whether their AI tools are consistent with evidence-based writing instruction.

Katie Boody Adorno, founder and CEO of Leanlab Education, said: “Our proximity to schools, students, and educators, combined with our rigorous research and development approach, allows us to work with school communities to ensure we design the tools of the future.”

The third grant will expand the text complexity assessment tool developed in collaboration with Student Achievement Partners to allow AI tools to assess whether the generated reading texts meet qualitative and quantitative expectations for grades 3-12.

“Student Achievement Partners is proud to be a leader in student achievement,” said Joy DeLizzo Osborn, president and CEO of Student Achievement Partners. “This research turns the entire qualitative text complexity rubric into a transparent, machine-scorable standard, allowing AI tools to be evaluated against research-backed expectations so teachers can trust that the texts and recommendations they receive actually enhance understanding.”

All datasets, protocols, and evaluation tools created through this effort will be publicly available.
