by Alicia McCraw, PhD, Director of Programming, Research & Media
AI is everywhere — drafting reports, generating presentations, and even assisting in policy and legal analysis. But as its footprint grows, so do the risks of trusting it without human oversight.
In just the past few months, we’ve seen headline-grabbing examples of AI’s fallibility. Deloitte is under fire after a federal report was found to include a fictitious quote likely fabricated by AI. A highly publicized health policy report — Make America Healthy Again (MAHA) — was released with references to non-existent studies and URLs that went nowhere. And a Stanford HAI study revealed that even specialized legal AI tools provided inaccurate or fabricated answers in nearly one out of six test cases.
Each of these stories shares a common thread: AI output that appeared authoritative but collapsed under scrutiny.
Our 2025–2026 Research: AI as a Teammate, Not the Lead
Against this backdrop, we’re excited to share the public side of our LEAD3R 2025–2026 research plan, a year-round exploration of teamwork, leadership, and the evolving workforce.
Yes, we will use AI in this research. But unlike the examples above, we won’t let AI steer the process. Our approach is grounded in human-in-the-loop methodology:
- AI will be a writing and editing assistant—helping us refine clarity and style.
- AI will not source or verify data.
- Every insight, trend, and data point will be gathered and analyzed by me personally, drawn from top-tier academic journals, trusted industry leaders, and primary sources.
This balance, with human expertise leading and AI assisting, is how we guarantee that our work remains rigorous, credible, and useful.
Why This Matters
The Deloitte, MAHA, and Stanford examples aren’t isolated. They’re symptoms of a larger issue: over-reliance on unverified AI outputs. When organizations skip the critical thinking step, errors creep in — and in contexts like policy, law, or organizational research, those errors can have real-world consequences.
At LEAD3R, we believe responsible innovation is the path forward. By pairing human judgment with AI’s efficiency, we can explore trends and insights more dynamically — while keeping integrity front and center.
Looking Ahead
Throughout the next year, we’ll share insights on leadership, teaming, and the workforce—all designed to help organizations improve how they collaborate and grow. And as we move forward, we commit to transparency: we’ll always be clear about when and how AI is part of our process, and we’ll never let it replace the human expertise that drives our work.
Because at the end of the day, AI should be a partner — not the leader.