Thank you for visiting my research preview.

My name is Dr. Geqigula (GQ) Dlamini, and in 2025, I led the first* participatory action research study in the United States to combine empowerment theory with generative AI adoption among state government employees. I designed the study with 11 colleagues, who became co-researchers, to explore how AI could support programs serving over 1 million multilingual students.

The frame was simple: Generative AI was a people story, not a technology story.

The results: 89% felt more empowered, 100% shifted to a more positive attitude toward AI, and 78% were still using the tools three months later. This research demonstrates that how we introduce AI matters as much as what AI can do—and that when public servants hold research authority over technology affecting their work, they don't just adopt it, they champion it.

My full dissertation is now posted and available on Scholarly Commons.

The Research Preview

This research preview, based on my doctoral dissertation (available here), includes an audio overview and two applications that let you explore the findings and take a deep dive into the data.

Empowering State-Level Multilingual Education Leaders | Dr. Geqigula M. Dlamini
Doctoral Dissertation • 2026

Empowering State-Level Multilingual Education Leaders

A Participatory Action Research Study on Generative AI Adoption in State Government

By Dr. Geqigula M. Dlamini | University of the Pacific, Benerd College

89%
Felt More Empowered
100%
Attitude Improved
78%
Still Using AI 3 Months Later
11
Co-Researchers
01

The Research Problem

Understanding the constraints facing state-level multilingual education programs and the opportunity generative AI presents

$46.8B Deficit

Fiscal Constraints

The state's budget required significant general fund reductions across all department operational budgets, along with the elimination of vacant positions. Fiscal analysts projected continued budget pressures, limiting new investments in program capacity.

1M+ Students

Growing Demand

Over 1 million English learner students (17% of enrollment) require specialized services, with ambitious state goals for multilingual program expansion. Program staff needed to maintain momentum despite resource limitations.

30-50% Automation Risk

Workforce Disruption

Reports suggest 30% of current work hours could be automated by 2030, and 44% of jobs are expected to change due to AI. Public sector workers need proactive preparation rather than reactive responses to displacement.

Central Research Question

How can generative AI empower program staff in implementing state-level multilingual programs?

The Context: Serving a Large, Diverse Student Population
Total State Enrollment
5.8M Students
Multilingual Students
2.3M (38%)
English Learners
1M+ (17%)
02

Theoretical Framework

Empowerment theory guided this inquiry, examining both psychological and structural dimensions

🏛️

Structural Empowerment

Organizational-level processes enabling collective decision-making, shared leadership, and systems that enhance members' capacity to effect change

🧠

Psychological Empowerment

Individual-level beliefs that goals can be achieved, awareness of resources, and efforts to fulfill goals through enhanced self-efficacy

Contextual Integration

Creating an organizational context that allows employees to determine whether to explore or exploit depending on their operational needs—putting decision-making power in the hands of those closest to the work.

— Selten & Klievink (2024)

Ecological Empowerment

Understanding the relationship between individuals and their community/environment. Empowerment is not a scarce resource—it grows when shared through knowledge transfer and collective learning.

— Rappaport (1987)

Public Sector Entrepreneurship

The 3Rs Framework: Renewal (testing new ideas), Resilience (persisting despite constraints), and Resourcefulness (leveraging existing resources creatively).

— Vivona et al. (2024)

03

Methodology

A qualitative participatory action research study with a state education agency program division

11
Co-Researchers
33%
Division Participation
22
Use Cases Explored
81%
Survey Response Rate
Phase I

Planning & Training

Co-researchers received state-approved training on generative AI, its capabilities, and potential applications. They identified at least two use cases aligned with their work assignments, forming hypotheses about where AI could be helpful.

Phase II

Action & Observation

Over one month, co-researchers tested their hypotheses, documenting observations and learnings. They experimented with multiple platforms including Microsoft Copilot, ChatGPT, Claude, NotebookLM, and MagicSchool AI.

Phase III

Reflection & Sharing

Communities of practice provided space for collective learning. Co-researchers shared successful and unsuccessful use cases, with many pivoting based on peer insights. Final reflections documented enduring learnings.

Phase IV

Longitudinal Follow-Up

Three months post-study, a follow-up survey assessed sustained AI use, peer influence, and continued empowerment—confirming the durability of study outcomes.

Data Collection Methods
Surveys
4 Surveys (Pre, Use Case, Post, Follow-up)
Communities of Practice
Semi-structured group sessions
1:1 Support Sessions
Individual coaching & brainstorming
Researcher Journal
Reflexive documentation
04

Governance & Risk Mitigation

Establishing guardrails through an internal advisory workgroup

Balancing Innovation with Institutional Responsibility

Public agencies operate within institutional frameworks that require careful navigation. This study established an Internal Departmental Advisory Workgroup to provide oversight, ensure policy compliance, and identify potential risks before they materialized.

🛡️

Advisory Committee Composition

Cross-functional representation ensured comprehensive risk assessment

Human Resources • Technology Services • Information Security • Legal • Division Leadership
⚖️

Committee Role & Function

  • Provided departmental oversight to ensure adherence to policies
  • Reviewed all proposed use cases before implementation
  • Identified potential breaches of department protocol
  • Guided discussions around protecting sensitive data
Risk Mitigation Framework

An intentional, thoughtful approach to protect the public interest while enabling innovation

🔒

Data Protection

Enterprise AI platforms used for sensitive data to ensure information was not used for model training

💰

Zero Additional Cost

Study designed using only existing resources and free-tier AI tools available to all staff

⏱️

Workflow Alignment

All exploration integrated into existing work assignments rather than added as separate workload

🎯

Use Case Review

Initial use cases shared with advisory committee to determine any immediate concerns before testing

Advisory Committee Outcome

After reviewing all proposed use cases, the advisory committee identified no immediate concerns, allowing co-researchers to proceed with their explorations while maintaining appropriate guardrails around data protection.

To protect each vested interest and any commitments the internal partners had made, creating an opportunity to guide the study seemed the most appropriate approach. The committee provided departmental oversight of the study to ensure it adhered to departmental policies regarding data, personnel, and technology use.

— Dissertation Methodology

05

Key Findings

Four major themes emerged from reflexive thematic analysis

89%
Felt More Empowered
8 of 9 respondents reported feeling more empowered in their roles with generative AI
100%
Positive Attitude Shift
All 9 co-researchers held "very positive" (56%) or "somewhat positive" (44%) attitudes post-study
100%
More Prepared
All co-researchers felt more prepared to use generative AI after the study
100%
Increased AI Use
All respondents reported using generative AI more frequently in work and personal life

I hadn't used AI at all. I've read a lot about it, heard people have conversations. And in my mind, I thought it would be more difficult or complex to learn how to use and so the barrier to my entry was really my imagination making it really difficult… And so this just got me going. I feel like a kid who just learned how to ride a bike.

— Co-researcher, Final Community of Practice

Attitude Toward AI in the Workplace: Pre vs. Post Study
Very Positive: 22% → 56%
Somewhat Positive: 56% → 44%
Neutral/Negative: 22% → 0%
(Pre-Study → Post-Study)

I never really thought about what I could do in my day-to-day tasks using AI.

— Co-researcher, Post-Study Survey

78%
Became Knowledge Sharers
7 of 9 co-researchers shared their experiences with colleagues after the study
89%
See AI as More Useful
8 of 9 stated they viewed generative AI as "more useful" in their work post-study

I think the case use examples were the most powerful. It's hard to imagine all the possibilities of AI, especially with limited understanding, but with the various case use examples that the team brought forward, it helped me imagine and brainstorm ways in which I could use it for my specific duties.

— Co-researcher, Post-Study Survey

Support Needed for Effective GenAI Implementation
Access to AI Tools
89%
Hands-on Training
89%
Clear Guidelines & Policies
78%
Case Studies
44%

I was the first in my office to use AI in my workflow. I was also able to share several uses with my team that they were able to replicate. I feel that I will be part of any discussion in my team about using AI for our work because of my experience.

— Co-researcher, Follow-Up Survey

Lack of Knowledge/Understanding (↓ Improved)
Pre-Study 100% → Post-Study 67%
Data Privacy & Ethics Concerns (↓ Improved)
Pre-Study 78% → Post-Study 56%
Insufficient Training/Resources (Slight Change)
Pre-Study 67% → Post-Study 67%
Resistance to Change (↓ Improved)
Pre-Study 67% → Post-Study 56%

After using AI, I see that it is not an instantaneous fix, making all work more efficient. AI can be used in a few select instances for my duties, but not all, and the time investment to get an end product is a bit time-consuming. While I can see myself using it sometimes, it won't be as seamless as I originally thought.

— Co-researcher, Post-Study Survey

I still have concerns about the environmental impact of using AI (i.e. the amount of water it takes to cool the systems). I want that to be addressed.

— Co-researcher, Post-Study Survey

78%
Still Felt Empowered
3 months post-study, 7 of 9 respondents still felt more empowered in their role
78%
Daily/Weekly AI Use
7 of 9 continued using generative AI daily (33%) or weekly (44%)
78%
New Practices Introduced
7 of 9 introduced new generative AI practices into their workflow from the study
77%
Improved Effectiveness
33% "greatly improved" + 44% "somewhat improved" workflow effectiveness

All of my use of AI has happened from participating in the study.

— Co-researcher, Follow-Up Survey

Continued AI Use at 3-Month Follow-Up
Drafting Communications
67%
Brainstorming & Ideation
67%
Summarizing/Editing Docs
67%
Data Analysis & Viz
33%
06

Discovered Use Cases

Co-researchers identified 22 use cases across 6 thematic categories

Research & Analysis

Regulatory Document Analysis

Extracting answers from complex reference documents including federal regulations and state administrative manuals for programmatic questions.

🔧 NotebookLM, ChatGPT
Research & Analysis

School Plan Summarization

Summarizing school-level planning documents during federal monitoring, querying for English learner-specific information with citations and page numbers.

🔧 Microsoft Copilot
Content Generation

ELD Standards Visualization

Illustrating differences between integrated and designated English Language Development with concrete examples integrated into history lessons.

🔧 Copilot, ChatGPT
Content Generation

Leadership Talking Points

Developing talking points for senior leadership—reducing creation time from "agonizing all day" to under 30 minutes with approximately 30% editing required.

🔧 Multiple platforms
Educational Support

ELD-Integrated Lesson Plans

Developing lesson plans with integrated ELD standards for migrant education grantees, breaking into unit plans with culturally sustaining pedagogy.

🔧 MagicSchool AI
Document Evaluation

Grant Reporting Scoring

Generating preliminary "met/did not meet" ratings for grantee reporting using rubrics—discovered AI was more lenient than human reviewers, prompting bias reflection.

🔧 Anthropic Claude
Communication Enhancement

Compliance Notification Clarity

Reviewing notification of findings language to ensure explanations and expectations are clear—"The more complex the finding, the greater the potential for improvement."

🔧 Multiple platforms
Communication Enhancement

Grantee Email Professional Tone

Ensuring email communications maintain clarity and professionalism—resulted in fewer questions from grantees, indicating greater understanding.

🔧 ChatGPT
Process Improvement

Desk Manual Modernization

Restructuring lengthy desk manuals into navigable formats—using AI-generated outlines to update legacy documentation.

🔧 Microsoft Copilot
Process Improvement

Task Checklists from Procedures

Converting comprehensive procedure documents into actionable checklists for easier onboarding and role transition.

🔧 ChatGPT

It consistently gave me kind of exactly what I needed... I eventually started using it on other documents.

— Co-researcher 4, on document summarization use case

Instead of agonizing over talking points all day long, I was able to extract good talking points and make them better in 15 to 30 minutes.

— Co-researcher 8, on content generation

07

Peer Influence & Advocacy

From help-seekers to help-givers: Co-researchers became knowledge champions

📚

Pre-Study State

Had formal AI training: 0%
Very familiar with AI: 0%
Somewhat familiar: 78%
🚀

Post-Study State

Found training very helpful: 89%
Very familiar with AI: 22%
Somewhat familiar: 78%
🌟

3-Month Follow-Up

Very/somewhat confident: 100%
Shared learnings with peers: 78%
Very/somewhat positive attitude: 89%

My colleagues and I often share how we have used AI to streamline our workflow. I appreciate how much more productive they have become as a result of these conversations and others willing to learn.

— Co-researcher, Follow-Up Survey

I have found that I have become a strong proponent of AI. I think that there is certainly a place for it in public agencies. I just need to help people understand what AI really is (i.e., a tool) and what it is not (i.e., a replacement for human thought).

— Co-researcher, Follow-Up Survey

08

Recommendations

For practice and future research based on study findings

🏛️ For Practice

  • Encourage all staff to take advantage of state-provided AI training resources that are available at no cost
  • Develop guidance on case studies and examples of responsible and allowable use with detailed, role-specific examples
  • Create interactive communities of practice or peer learning sessions lasting several weeks, not just single training events
  • Identify AI champions willing to act as facilitators and mentors on responsible AI use
  • Address privacy, ethical, and environmental concerns about generative AI directly through leadership discussions

🔬 For Future Research

  • Expand research to include staff from other areas such as curriculum, student achievement, and operational teams
  • Explore advanced generative AI use cases requiring more substantial investment (bespoke applications, chatbots, compliance tools)
  • Work with other state-level education agency departments to understand cross-organizational approaches to AI training
  • Apply additional theoretical frameworks beyond empowerment theory for different organizational contexts
  • Examine structural integration approaches for organizations with more unified workloads
Key Insight

Generative AI is a people story, not a technology story. The power to determine how AI will be helpful belongs in the hands of those closest to implementing the programs.

Deep Data Dive | Empowering State-Level ML Leaders | Dr. Geqigula M. Dlamini
Quantitative Survey Analysis

Deep Data Dive

Tracking the empowerment journey through survey data across Pre-Study, Post-Study, and 3-Month Follow-Up phases

The Empowerment Journey: Pre-Study → Post-Study → 3-Month Follow-Up
Phase 1: Pre-Study (March 2025)
Phase 2: Post-Study (May 2025)
Phase 3: Follow-Up (August 2025)
11 Co-Researchers | 4 Surveys | 81% Response Rate | 22 Use Cases Explored
Attitude Toward Generative AI in the Workplace

Tracking sentiment shifts from skepticism to advocacy

📊 Pre-Study (n=9)

Very Positive: 22%
Somewhat Positive: 56%
Neutral: 11%
Somewhat Negative: 11%

📈 Post-Study (n=9)

Very Positive: 56%
Somewhat Positive: 44%
Neutral: 0%
Somewhat Negative: 0%

🌟 3-Month Follow-Up (n=9)

Very Positive: 33%
Somewhat Positive: 56%
Neutral: 0%
Somewhat Negative: 11%

Key Insight: 100% positive attitude post-study; 89% sustained at 3-month follow-up despite one researcher citing environmental concerns
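
For readers who want to retrace the arithmetic behind that insight, the sketch below is illustrative only; it is not taken from the dissertation and uses the rounded percentages reported above rather than the underlying responses of the nine participants. It simply tabulates the three phase distributions and recomputes the positive-attitude share per phase.

```python
# Illustrative sketch: rounded attitude percentages reported above (n = 9),
# keyed by survey phase. Not the dissertation's analysis code.
attitudes = {
    "Pre-Study": {"Very Positive": 22, "Somewhat Positive": 56,
                  "Neutral": 11, "Somewhat Negative": 11},
    "Post-Study": {"Very Positive": 56, "Somewhat Positive": 44,
                   "Neutral": 0, "Somewhat Negative": 0},
    "3-Month Follow-Up": {"Very Positive": 33, "Somewhat Positive": 56,
                          "Neutral": 0, "Somewhat Negative": 11},
}

# The "positive" share per phase is the sum of the two positive categories.
for phase, dist in attitudes.items():
    positive = dist["Very Positive"] + dist["Somewhat Positive"]
    print(f"{phase}: {positive}% positive")
# Output: Pre-Study 78%, Post-Study 100%, 3-Month Follow-Up 89%,
# matching the shift from skepticism to advocacy described above.
```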

AI Familiarity & Confidence Evolution

Familiarity with GenAI Tools

Very Familiar: Pre 0% → Post 22%
Somewhat Familiar: Pre 78% → Post 78%
Not Familiar: Pre 22% → Post 0%

Confidence in Using AI Tools

Very Confident: Pre 56% → Post 33% → Follow-Up 56%
Somewhat Confident: Pre 44% → Post 67% → Follow-Up 44%

Note: 100% confidence (very + somewhat) maintained across all three phases

Generative AI Usage Frequency

From occasional experimentation to daily workflow integration

Pre-Study
67%
Used AI frequently or occasionally
Frequently: 22%
Occasionally: 44%
Never (heard of): 33%
Post-Study
100%
Using AI more frequently
All 9 co-researchers reported increased AI usage
3-Month Follow-Up
78%
Using AI daily or weekly
Daily: 33%
Weekly: 44%
Occasionally/Rarely: 22%
Perceived Benefits of Generative AI

How perceptions of AI value shifted through hands-on experience

Benefit: Pre-Study → Post-Study (Change)
Increased efficiency & productivity: 100% → 78% (↓ 22%)
Creative content generation: 78% → 89% (↑ 11%)
Automating repetitive tasks: 78% → 44% (↓ 34%)
Enhanced learning & skills development: 78% → 44% (↓ 34%)
Improved decision-making & insights: 44% → N/A

Key Insight: Creative content generation emerged as the top benefit post-study (89%), while expectations for automation decreased after hands-on experience

Pre-Study Concerns About AI at Work

What worried co-researchers before they began exploring

Data privacy & security
78%
Ethical concerns (bias, fairness)
67%
Not enough training/support
56%
Job displacement/redundancy
33%
Lack of transparency
22%

Post-Study Concern Changes

44%
Reported fewer concerns
56%
No change in concerns
Barriers to AI Adoption: Before & After

How training and exploration addressed key challenges

Lack of Knowledge: 100% → 67% (↓ 33% reduction)
Privacy & Ethics: 78% → 56% (↓ 22% reduction)
Resistance to Change: 67% → 56% (↓ 11% reduction)
Insufficient Training: 67% → 67% (no change)
Budget Constraints: 44% → 44% (no change)
Training & Prior Experience
0%
Had formal AI training before study
All 9 entered without prior formal training
89%
Found training "very helpful"
11% found it "somewhat helpful"
100%
Felt more prepared post-study
All reported increased preparedness
Impact on Workflow Effectiveness (3-Month Follow-Up)
33%
Greatly Improved
44%
Somewhat Improved
22%
No Change

77% reported improved workflow effectiveness and efficiency

Structural Empowerment: Knowledge Sharing & Advocacy
78%
Shared learnings with colleagues
7 of 9 became knowledge sharers
78%
Introduced new AI practices
From study into daily workflow
89%
View AI as more useful
In organizational work post-study
Support Needed for Continued AI Integration (3-Month Follow-Up)
Role-specific examples
44%
Peer learning spaces
44%
Safe/compliant use guidance
22%
Follow-up hands-on sessions
11%
AI office hours/coaching
11%

The Empowerment Journey: By the Numbers

0% → 89%: Training to Empowerment
78% → 100%: Positive Attitude Shift
67% → 78%: Regular AI Users
78%: Became Peer Advocates
77%: Improved Effectiveness

Dr. Geqigula M. Dlamini | University of the Pacific, Benerd College

© Dr. Geqigula M. Dlamini. All Rights Reserved.

*A comprehensive search of ProQuest Dissertations & Theses, Google Scholar, Government Information Quarterly, Public Administration Review, and state government AI initiative databases conducted in January 2026 identified no comparable studies.

Full dissertation will be posted in early 2026.

Get in Touch