Thank you for visiting my research preview.
My name is Dr. Geqigula (GQ) Dlamini, and in 2025, I led the first* participatory action research study in the United States to apply empowerment theory to generative AI adoption among state government employees. I designed the study with 11 colleagues, who became co-researchers, to explore how AI could support programs serving over 1 million multilingual students.
The frame was simple: generative AI is a people story, not a technology story.
The results: 89% felt more empowered, 100% shifted to a more positive attitude toward AI, and 78% were still using the tools three months later. This research shows that how we introduce AI matters as much as what AI can do, and that when public servants hold research authority over technology affecting their work, they don't just adopt it, they champion it.
My full dissertation is now posted and available on Scholarly Commons.
The Research Preview
This research preview, based on my doctoral dissertation (available here), includes an audio overview and two applications that let you explore the findings and take a deep dive into the data.
Empowering State-Level Multilingual Education Leaders
A Participatory Action Research Study on Generative AI Adoption in State Government
The Research Problem
Understanding the constraints facing state-level multilingual education programs and the opportunity generative AI presents
Fiscal Constraints
The state's budget required significant general fund reductions across all department operational budgets, with additional elimination of vacant positions. Fiscal analysts projected continued budget pressures, limiting new investments in program capacity.
Growing Demand
Over 1 million English learner students (17% of enrollment) require specialized services, with ambitious state goals for multilingual program expansion. Program staff needed to maintain momentum despite resource limitations.
Workforce Disruption
Reports suggest that 30% of current work hours could be automated by 2030 and that 44% of jobs are expected to change due to AI. Public sector workers need proactive preparation rather than reactive responses to displacement.
How can generative AI empower program staff in implementing state-level multilingual programs?
Theoretical Framework
Empowerment theory guided this inquiry, examining both psychological and structural dimensions
Structural Empowerment
Organizational-level processes enabling collective decision-making, shared leadership, and systems that enhance members' capacity to effect change
Psychological Empowerment
Individual-level beliefs that goals can be achieved, awareness of resources, and efforts to fulfill goals through enhanced self-efficacy
Contextual Integration
Creating an organizational context that allows employees to determine whether to explore or exploit depending on their operational needs—putting decision-making power in the hands of those closest to the work.
— Selten & Klievink (2024)
Ecological Empowerment
Understanding the relationship between individuals and their community/environment. Empowerment is not a scarce resource—it grows when shared through knowledge transfer and collective learning.
— Rappaport (1987)
Public Sector Entrepreneurship
The 3Rs Framework: Renewal (testing new ideas), Resilience (persisting despite constraints), and Resourcefulness (leveraging existing resources creatively).
— Vivona et al. (2024)
Methodology
A qualitative participatory action research study with a state education agency program division
Planning & Training
Co-researchers received state-approved training on generative AI, its capabilities, and potential applications. They identified at least two use cases aligned with their work assignments, forming hypotheses about where AI could be helpful.
Action & Observation
Over one month, co-researchers tested their hypotheses, documenting observations and learnings. They experimented with multiple platforms including Microsoft Copilot, ChatGPT, Claude, NotebookLM, and MagicSchool AI.
Reflection & Sharing
Communities of practice provided space for collective learning. Co-researchers shared successful and unsuccessful use cases, with many pivoting based on peer insights. Final reflections documented enduring learnings.
Longitudinal Follow-Up
Three months post-study, a follow-up survey assessed sustained AI use, peer influence, and continued empowerment—confirming the durability of study outcomes.
Governance & Risk Mitigation
Establishing guardrails through an internal advisory workgroup
Public agencies operate within institutional frameworks that require careful navigation. This study established an Internal Departmental Advisory Workgroup to provide oversight, ensure policy compliance, and identify potential risks before they materialized.
Advisory Committee Composition
Cross-functional representation ensured comprehensive risk assessment
Committee Role & Function
- ✓ Provided departmental oversight to ensure adherence to policies
- ✓ Reviewed all proposed use cases before implementation
- ✓ Identified potential breaches of department protocol
- ✓ Guided discussions around protecting sensitive data
An intentional, thoughtful approach to protect the public interest while enabling innovation
Data Protection
Enterprise AI platforms used for sensitive data to ensure information was not used for model training
Zero Additional Cost
Study designed using only existing resources and free-tier AI tools available to all staff
Workflow Alignment
All exploration integrated into existing work assignments rather than added as separate workload
Use Case Review
Initial use cases shared with advisory committee to determine any immediate concerns before testing
Advisory Committee Outcome
After reviewing all proposed use cases, the advisory committee identified no immediate concerns, allowing co-researchers to proceed with their explorations while maintaining appropriate guardrails around data protection.
To protect each vested interest and any commitments the internal partners had made, creating an opportunity to guide the study seemed the most appropriate approach. The committee provided departmental oversight of the study to ensure it adhered to departmental policies regarding data, personnel, and technology use.
— Dissertation Methodology
Key Findings
Four major themes emerged from reflexive thematic analysis
I hadn't used AI at all. I've read a lot about it, heard people have conversations. And in my mind, I thought it would be more difficult or complex to learn how to use and so the barrier to my entry was really my imagination making it really difficult… And so this just got me going. I feel like a kid who just learned how to ride a bike.
— Co-researcher, Final Community of Practice
Pre-Study vs. Post-Study
I never really thought about what I could do in my day-to-day tasks using AI.
— Co-researcher, Post-Study Survey
I think the case use examples were the most powerful. It's hard to imagine all the possibilities of AI, especially with limited understanding, but with the various case use examples that the team brought forward, it helped me imagine and brainstorm ways in which I could use it for my specific duties.
— Co-researcher, Post-Study Survey
I was the first in my office to use AI in my workflow. I was also able to share several uses with my team that they were able to replicate. I feel that I will be part of any discussion in my team about using AI for our work because of my experience.
— Co-researcher, Follow-Up Survey
After using AI, I see that it is not an instantaneous fix, making all work more efficient. AI can be used in a few select instances for my duties, but not all, and the time investment to get an end product is a bit time-consuming. While I can see myself using it sometimes, it won't be as seamless as I originally thought.
— Co-researcher, Post-Study Survey
I still have concerns about the environmental impact of using AI (i.e. the amount of water it takes to cool the systems). I want that to be addressed.
— Co-researcher, Post-Study Survey
All of my use of AI has happened from participating in the study.
— Co-researcher, Follow-Up Survey
Discovered Use Cases
Co-researchers identified 22 use cases across 6 thematic categories
Regulatory Document Analysis
Extracting answers from complex reference documents including federal regulations and state administrative manuals for programmatic questions.
School Plan Summarization
Summarizing school-level planning documents during federal monitoring, querying for English learner-specific information with citations and page numbers.
ELD Standards Visualization
Illustrating differences between integrated and designated English Language Development with concrete examples integrated into history lessons.
Leadership Talking Points
Developing talking points for senior leadership—reducing creation time from "agonizing all day" to under 30 minutes with approximately 30% editing required.
ELD-Integrated Lesson Plans
Developing lesson plans with integrated ELD standards for migrant education grantees, breaking into unit plans with culturally sustaining pedagogy.
Grant Reporting Scoring
Generating preliminary "met/did not meet" ratings for grantee reporting using rubrics—discovered AI was more lenient than human reviewers, prompting bias reflection.
Compliance Notification Clarity
Reviewing notification of findings language to ensure explanations and expectations are clear—"The more complex the finding, the greater the potential for improvement."
Grantee Email Professional Tone
Ensuring email communications maintain clarity and a professional tone, which resulted in fewer questions from grantees, indicating greater understanding.
Desk Manual Modernization
Restructuring lengthy desk manuals into navigable formats—using AI-generated outlines to update legacy documentation.
Task Checklists from Procedures
Converting comprehensive procedure documents into actionable checklists for easier onboarding and role transition.
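Several of these use cases (regulatory document analysis, school plan summarization, desk manual restructuring) share a single prompt pattern: supply the source document, then ask for answers grounded only in that document, with citations and page numbers. The sketch below is not taken from the dissertation; it is a minimal, hypothetical illustration of that pattern in Python, and the function name and example question are invented. The co-researchers worked in chat interfaces, so the assembled text would simply be pasted into Copilot, ChatGPT, Claude, or NotebookLM.

```python
# Minimal sketch of the citation-grounded document query pattern described above.
# Hypothetical illustration only: the study itself used chat interfaces, not code.

def build_document_query(document_text: str, question: str) -> str:
    """Assemble a prompt that asks for an answer grounded only in the supplied
    document, with a section heading and page number cited for every claim."""
    return (
        "You are assisting a state education agency program analyst.\n"
        "Answer the question using ONLY the document below.\n"
        "Cite the section heading and page number for every statement.\n"
        "If the document does not answer the question, say so explicitly.\n\n"
        f"QUESTION: {question}\n\n"
        f"DOCUMENT:\n{document_text}"
    )

if __name__ == "__main__":
    sample_plan = "..."  # e.g., the text of a school-level planning document
    prompt = build_document_query(
        sample_plan,
        "What services does this plan describe for English learner students?",
    )
    print(prompt)  # paste into Copilot, ChatGPT, Claude, or NotebookLM
```

The explicit grounding and citation instructions are what make this kind of output checkable against the source document, a property that matters in compliance-facing work such as federal monitoring.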
It consistently gave me kind of exactly what I needed... I eventually started using it on other documents.
— Co-researcher 4, on document summarization use case
Instead of agonizing over talking points all day long, I was able to extract good talking points and make them better in 15 to 30 minutes.
— Co-researcher 8, on content generation
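The Grant Reporting Scoring use case follows the same pattern with one extra step: comparing the model's preliminary ratings against a human reviewer's, which is how the leniency gap became visible. The sketch below is a hypothetical illustration, not the dissertation's procedure; the rubric prompt, function names, and example ratings are all invented.

```python
# Hypothetical sketch of rubric-based preliminary scoring with a human comparison.
# The rubric prompt, ratings, and names are illustrative only.

from collections import Counter

def build_rubric_prompt(rubric: str, grantee_report: str) -> str:
    """Ask for a preliminary met / did-not-meet rating plus a quoted justification."""
    return (
        "Apply the rubric below to the grantee report.\n"
        "Return exactly one rating, 'met' or 'did not meet', followed by a short\n"
        "justification that quotes the report language you relied on.\n\n"
        f"RUBRIC:\n{rubric}\n\nREPORT:\n{grantee_report}"
    )

def leniency_check(ai_ratings: list[str], human_ratings: list[str]) -> None:
    """Compare AI and human ratings to surface systematic leniency."""
    pairs = list(zip(ai_ratings, human_ratings))
    agreement = sum(a == h for a, h in pairs) / len(pairs)
    disagreements = Counter((a, h) for a, h in pairs if a != h)
    print(f"Agreement rate: {agreement:.0%}")
    for (ai, human), count in disagreements.items():
        print(f"AI said '{ai}' where the reviewer said '{human}': {count} case(s)")

if __name__ == "__main__":
    # Invented ratings for five grantee reports, showing the AI rating more leniently.
    ai_ratings = ["met", "met", "met", "did not meet", "met"]
    human_ratings = ["met", "did not meet", "met", "did not meet", "did not meet"]
    leniency_check(ai_ratings, human_ratings)
```

Keeping the human rating as the reference point reflects the study's framing of AI as a tool for a preliminary pass, not a replacement for reviewer judgment.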
Peer Influence & Advocacy
From help-seekers to help-givers: Co-researchers became knowledge champions
Peer influence was tracked across three phases: Pre-Study, Post-Study, and 3-Month Follow-Up.
My colleagues and I often share how we have used AI to streamline our workflow. I appreciate how much more productive they have become as a result of these conversations and others willing to learn.
— Co-researcher, Follow-Up Survey
I have found that I have become a strong proponent of AI. I think that there is certainly a place for it in public agencies. I just need to help people understand what AI really is (i.e., a tool) and what it is not (i.e., a replacement for human thought).
— Co-researcher, Follow-Up Survey
Recommendations
For practice and future research based on study findings
🏛️ For Practice
- Encourage all staff to take advantage of state-provided AI training resources that are available at no cost
- Develop guidance on case studies and examples of responsible and allowable use with detailed, role-specific examples
- Create interactive communities of practice or peer learning sessions lasting several weeks, not just single training events
- Identify AI champions willing to act as facilitators and mentors on responsible AI use
- Address privacy, ethical, and environmental concerns about generative AI directly through leadership discussions
🔬 For Future Research
- Expand research to include staff from other areas such as curriculum, student achievement, and operational teams
- Explore advanced generative AI use cases requiring more substantial investment (bespoke applications, chatbots, compliance tools)
- Work with other state-level education agency departments to understand cross-organizational approaches to AI training
- Apply additional theoretical frameworks beyond empowerment theory for different organizational contexts
- Examine structural integration approaches for organizations with more unified workloads
Generative AI is a people story, not a technology story. The power to determine how AI will be helpful belongs in the hands of those closest to implementing the programs.
Deep Data Dive
Tracking the empowerment journey through survey data across Pre-Study, Post-Study, and 3-Month Follow-Up phases
Tracking sentiment shifts from skepticism to advocacy
Attitude toward generative AI, measured at Pre-Study, Post-Study, and 3-Month Follow-Up (n = 9 at each phase)
Key Insight: 100% positive attitude post-study; 89% sustained at the 3-month follow-up despite one co-researcher citing environmental concerns
Familiarity with GenAI Tools
Confidence in Using AI Tools
Note: 100% confidence (very + somewhat) maintained across all three phases
From occasional experimentation to daily workflow integration
How perceptions of AI value shifted through hands-on experience
| Benefit | Pre-Study | Post-Study | Change (percentage points) |
|---|---|---|---|
| Increased efficiency & productivity | 100% | 78% | ↓ 22 |
| Creative content generation | 78% | 89% | ↑ 11 |
| Automating repetitive tasks | 78% | 44% | ↓ 34 |
| Enhanced learning & skills development | 78% | 44% | ↓ 34 |
| Improved decision-making & insights | 44% | — | — |
Key Insight: Creative content generation emerged as the top benefit post-study (89%), while expectations for automation decreased after hands-on experience
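A note on reading these percentages: with nine respondents, every figure corresponds to a whole number of co-researchers, so the shifts in the table above are movements of one to three people. The snippet below is only a rounding illustration; the respondent counts are inferred from the published percentages, not taken from the raw survey data.

```python
# Rounding illustration only: with n = 9 co-researchers, each respondent is ~11 points.
# Counts below are inferred from the published percentages, not from the raw data.

N = 9

def pct(count: int) -> int:
    """Share of the nine respondents, rounded to a whole percentage."""
    return round(100 * count / N)

# "Increased efficiency & productivity": 9 of 9 pre-study, 7 of 9 post-study.
pre, post = pct(9), pct(7)  # 100, 78
print(f"{pre}% -> {post}%  ({post - pre:+d} points, i.e. two respondents)")

# "Creative content generation": 7 of 9 pre-study, 8 of 9 post-study (one respondent).
print(f"{pct(7)}% -> {pct(8)}%  ({pct(8) - pct(7):+d} points)")
```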
What worried co-researchers before they began exploring
Post-Study Concern Changes
How training and exploration addressed key challenges
77% reported improved workflow effectiveness and efficiency
The Empowerment Journey: By the Numbers
Summary metrics: Empowerment · Attitude Shift · AI Users · Advocates · Effectiveness
*A comprehensive search of ProQuest Dissertations & Theses, Google Scholar, Government Information Quarterly, Public Administration Review, and state government AI initiative databases conducted in January 2026 identified no comparable studies.
The full dissertation is now posted on Scholarly Commons.

