For the past 15 years, AMMACHI Labs has been training women in several vocational fields, enhancing their skills, tools, and market reach. This research project complements that effort by providing neurocognitive, data-based insights into the performance of dexterous vocational skills, with a particular focus on tailoring.
We propose to develop a multimodal system architecture that blends the subdomains of cognitive science and artificial intelligence (AI). We obtain multimodal data from a variety of sources and analyze it to better understand which key elements characterize a particular skill and how those elements are effectively transferred. Effective transfer encompasses both the teaching and the learning of skills. To achieve this, we have identified three main components that form the structure of the proposed work.
- Capture and identify the components of human skills, both physical and cognitive, to accurately determine a learner's skill level.
- Develop a model, grounded in cognitive load theory (CLT), of the barriers to learning new skills.
- Design cognitive training (CT) strategies, based on the above understanding, that provide learners with effective instruction for continuous skill improvement.
Two main goals fuel our research and development program. The first goal is scientific: we want to use technology to understand, measure, and refine skill development with greater precision, and we hope that what we learn will serve as a platform for developing richer models of skill development. The second goal is social: we want to use these scientific insights to accelerate skill development and enable those who are taught the skills to become meaningfully employable.
Project background and motivation
A central problem is the lack of accessible and effective practical vocational training, particularly in manual dexterity-based skills, for rural, illiterate and marginalized communities. This severely limits employment opportunities for women and informal sector workers, who make up a large portion of India’s unskilled labor force. Through more than a decade of work across 21 states in India, AMMACHI Labs has observed that existing training methods are often inaccessible, inconsistent, and poorly aligned with learners’ cognitive and physical abilities.
The broader challenge is to design technology-based interventions that not only teach vocational skills but also measure and model skill development trajectories accurately. This involves two key tasks:
- Develop a multimodal system architecture to capture and classify skill levels, using target-skill performance data obtained through new technologies alongside qualitative data from ethnographic research.
- Design cognitive training interventions based on validated learning models to effectively support cross-domain skill learning for learners at different skill levels.
Addressing these challenges will enable the creation of a scalable, AI-driven training ecosystem that supports personalized instruction, drives behavior change, and accelerates the acquisition of certifiable skills, ultimately closing critical workforce development gaps.
Our solution is a multimodal, AI-driven cognitive training system that enhances job skill learning through:
- Psychometric measurement of skill performance using standardized dexterity tests.
- Machine learning-based skill classification and modeling, using wearable sensor data to enable fine-grained behavior analysis.
- Development of custom datasets and AI models to detect microactions related to skill performance.
- EEG- and psychometric-based measurement of cognitive load, attention, working memory capacity, schemas, motivation, metacognition, and emotional states, and their interaction with memory systems (sensory, working, long-term) during skill training.
- Development of cognitive training (CT) interventions tailored to skill level and learner profile.
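To make the machine-learning component concrete, the following is a minimal sketch of classifying skill level from wearable-sensor motion features. The features (movement magnitude and jitter), the nearest-centroid classifier, and the synthetic data are illustrative assumptions, not the project's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(trace):
    """Summarize a 1-D motion trace into two simple features:
    mean movement magnitude and jitter (variance of sample-to-sample changes)."""
    return np.array([np.mean(np.abs(trace)), np.var(np.diff(trace))])

# Synthetic training traces: assume experts move more smoothly (less jitter).
novice_traces = [rng.normal(0, 1.0, 200) for _ in range(20)]
expert_traces = [rng.normal(0, 0.3, 200) for _ in range(20)]

X = np.array([extract_features(t) for t in novice_traces + expert_traces])
y = np.array([0] * 20 + [1] * 20)  # 0 = novice, 1 = expert

# Nearest-centroid classifier: assign a new trace to the closer class mean.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(trace):
    f = extract_features(trace)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

print(classify(rng.normal(0, 0.3, 200)))  # a smooth trace classifies as expert (1)
```

In a real pipeline the hand-crafted features would be replaced by learned representations over multi-channel sensor streams, but the structure (feature extraction, labeled proficiency classes, classification) is the same.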
Our intervention targets both the teaching and learning components of vocational education and integrates hardware tools, data collection devices, and cloud-hosted deep learning models.
- AI models are trained to differentiate skill patterns across proficiency levels from novice to expert, allowing for targeted feedback and automatic classification of skill levels.
- A closed-loop system is designed where data captured from learners informs ML models and generates insights for instructional redesign, thereby creating a continuously improving AI-in-the-loop learning ecosystem.
- The analytical framework and model can be extended to other occupational areas that require complex psychomotor skills (e.g., surgery, manufacturing, construction), making it a cross-domain platform technology for skills assessment and development.
- The system is designed to create dynamic AI-based skill progression maps by incorporating measurements of cognitive load and learner understanding over time. This helps instructors and learners identify plateau points and risks of overload, allowing for adaptive intervention in real-time.
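The closed-loop idea above can be sketched as a simple rule over per-session history: skill scores and cognitive-load readings feed a check that flags plateau points and overload risk so instruction can adapt. The thresholds and function below are illustrative assumptions, not the system's actual logic.

```python
def review_progress(skill_scores, load_scores,
                    plateau_delta=0.02, overload_level=0.8, window=3):
    """Return adaptation flags from recent session history.

    skill_scores: per-session skill measurements (0-1)
    load_scores:  per-session cognitive-load measurements (0-1)
    """
    flags = []
    if len(skill_scores) >= window:
        recent = skill_scores[-window:]
        if max(recent) - min(recent) < plateau_delta:
            flags.append("plateau")          # progress has stalled
    if load_scores and load_scores[-1] > overload_level:
        flags.append("overload_risk")        # cognitive load too high
    return flags

# Usage: stalled scores with rising measured load trigger both flags.
print(review_progress([0.60, 0.61, 0.61, 0.605], [0.5, 0.7, 0.85, 0.9]))
# → ['plateau', 'overload_risk']
```

In the envisioned system, these flags would come from learned models rather than fixed thresholds, and would trigger instructional redesign in the AI-in-the-loop ecosystem.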
Project leaders and collaborators
Lead agency:
- AMMACHI Labs – Amrita Vishwa Vidyapeetham
Collaborating partner:
- Dr. Veena A. Nair – Professor, University of Wisconsin-Madison
Team members/roles:
- Arjun Venugopal – AMMACHI Labs Research Assistant
- Kancherla Yeswanth Chowdary – AMMACHI Labs Machine Learning/Deep Learning Junior Researcher
Field
Cognitive science, skill development
Funding amount
₹35.41 million
Project duration
From March 19, 2024 to February 19, 2028
Project implementation and main activities
Toward goal 1, we are assessing differences in psychomotor skills in a manipulative dexterity test between experts and novices.
- The tailoring profession was selected as the occupational background of interest.
- We conducted a detailed task analysis that identified 31 psychomotor and cognitive skills related to tailoring, based on the Standard Occupational Classification (SOC) framework. Of these, five major dexterity-related abilities were selected for scientific study.
- We use motion capture devices to collect objective movement data during task execution for fine-grained kinematic analysis.
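As a concrete example of the kind of fine-grained kinematic analysis described above, the sketch below numerically differentiates position samples into velocity and jerk and computes a jerk-based smoothness measure, where lower values indicate smoother (more expert-like) movement. The sampling rate, synthetic trajectory, and choice of RMS jerk are assumptions for illustration, not the project's exact pipeline.

```python
import numpy as np

fs = 100.0                       # assumed capture rate in Hz
dt = 1.0 / fs

def rms_jerk(pos):
    """RMS jerk of a 1-D position trace: third numerical derivative,
    root-mean-squared. Lower values indicate smoother movement."""
    vel = np.gradient(pos, dt)   # first derivative: velocity
    acc = np.gradient(vel, dt)   # second derivative: acceleration
    jerk = np.gradient(acc, dt)  # third derivative: jerk
    return float(np.sqrt(np.mean(jerk ** 2)))

# Synthetic 1-D fingertip trajectory, smooth vs. tremor-contaminated.
t = np.arange(0, 1, dt)
smooth = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
noisy = smooth + rng.normal(0, 0.01, smooth.shape)

# Even small positional jitter inflates jerk sharply after triple
# differentiation, separating smooth from unsteady execution.
print(rms_jerk(smooth) < rms_jerk(noisy))
```

With real motion-capture data this would run per joint and axis, typically after low-pass filtering, and the resulting metrics would feed the expert-novice comparison.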
Future planning and scalability
The next stage of this research will use EEG-based measurements to gain insight into the cognitive demands of skill performance and to further reveal how experience influences cognitive processing. We will also examine the effect of cognitive training on participants' cognitive load while performing the task.
