
Exploring AI Ethics: Enhancing Human Judgment, Not Replacing It

Today, I’m giving a panel talk to NIH TL1 and T32 trainees on Exploring AI Ethics — and as I was preparing my remarks, I realized how deeply this topic extends into the daily life of research administrators too.

AI isn’t just transforming how we write, teach, and manage research — it’s reshaping how we think about integrity, authorship, and stewardship in the systems that support discovery.

So, I wanted to expand on what I’m sharing with trainees today, especially for those of us who sit at the crossroads of science and administration — those who translate policy into practice, and ideas into infrastructure.

Artificial intelligence isn’t coming; it’s already here. From drafting grant aims to summarizing evaluation reports, managing budgets, or even helping us organize our inboxes, AI has become part of our administrative bloodstream.

But as research administrators, we’re not just users of these tools — we’re guardians of research integrity. And that means how we use AI matters as much as whether we use it.


A Pragmatic Optimist’s View

I like to think of myself as a pragmatic optimist about AI — someone who believes that these tools can enhance human judgment, not replace it, if we use them transparently and thoughtfully.

The challenge isn’t whether AI can make us faster or more efficient — it’s whether we can use it responsibly, in ways that strengthen trust and accountability across our institutions.

AI can make our workflows smoother, our communication sharper, and our reports more polished. But ethics is what makes our work credible.


When the Robot Meets the Reviewer: AI in Grantwriting

Let’s be honest — most of us have already asked ChatGPT to “make this sound a little more NIH-friendly.”

That’s okay. In fact, it’s smart — AI is brilliant at refinement, not reinvention.
It can clarify complex sentences, reformat sections to align with review criteria, or help you find plain language for technical concepts.

The ethical line appears when AI starts creating your science rather than supporting its communication — when it begins drafting hypotheses, methods, or references you didn’t author.

AI should help you sound like the best version of yourself, not someone else entirely.

If you use it, disclose it. A simple note does the job:

“Portions of this proposal were refined for clarity using ChatGPT (OpenAI, October 2025 version). All scientific content and references were authored and verified by the research team.”

That one sentence communicates transparency, boundaries, and credibility — the trifecta of ethical authorship.


Program Management and the Ethics of Efficiency

Program management is one of the most valuable — and most ethically sensitive — applications of AI.

It can summarize meeting notes, analyze anonymized survey data, and even draft early versions of annual reports. But data stewardship must come first.

The rule I give my team is simple:

“If you wouldn’t email it to a stranger, don’t paste it into a public AI tool.”

AI doesn’t understand context, confidentiality, or institutional nuance — you do.
So, keep personal data, financial information, and unpublished materials in secure, internal systems only.

AI can streamline your reporting — but ethical oversight is what preserves trust.
Efficiency without integrity is just speed with better formatting.


The Great Manuscript Question: To Cite or Not to Cite?

I get this question constantly as an editor:
“Do I have to cite the AI tool I used?”

Here’s the short answer: disclose, don’t delegate.
AI can help you edit or structure text, but it should never generate ideas, interpret data, or create citations.

Use AI as an editor, not an author.
And when you disclose its use, do it transparently:

“Portions of this manuscript were refined for clarity using ChatGPT (OpenAI, October 2025 version). All text was reviewed and verified by the authors.”

If a journal requests a formal citation, you can treat it like software:

OpenAI. (2025). ChatGPT (October 2025 version) [Large language model]. https://chat.openai.com/

Ethical use of AI in publishing isn’t about compliance — it’s about reproducibility.
If readers don’t know which model, when, or how you used it, they can’t replicate your process.
Disclosure isn’t bureaucracy. It’s transparency in action.


The Ethics of Prompting: Writing with Intention

Prompting is where ethics and effectiveness meet.
A vague prompt invites vague ethics. A clear, contextual prompt invites precision and accountability.

Try this:

“You are editing for an NIH reviewer audience. Rewrite this paragraph for clarity and flow, preserving meaning and all numerical values. Do not create or modify citations.”

That’s clarity, constraint, and control — hallmarks of ethical prompting.
Good prompting doesn’t just get better results; it makes your thinking visible.


Job Searching and Authenticity in the Age of AI

Here’s one I love talking about with trainees: AI can help job seekers summarize CVs, tailor cover letters, or practice interview questions. But authenticity is non-negotiable.

I tell my scholars:

“Use AI as a coach, not a ghostwriter.”

It can help you refine your message, but it can't, and shouldn't, stand in for your passion for your science. The risk isn't deceit; it's sameness, a creeping uniformity. Overuse of AI can flatten individuality, and your authenticity is your advantage.


The Responsibility of Stewardship

Research administrators have a special role in shaping ethical AI culture.
We’re both users and architects — we design systems, policies, and expectations that define how others engage with these tools.

Ethical AI isn’t a fixed checklist. It’s a community practice built through transparency, disclosure, and critical questioning.

Every time you model that in your work — by citing AI use, checking outputs, or mentoring someone on responsible practice — you’re shaping the future of research integrity.

If you’re ready to deepen your understanding of AI in academia, I highly recommend the AAMC AI Skill Building for Medical Educators Series. It’s one of the most pragmatic and thoughtful approaches to AI fluency I’ve seen.


My True North Takeaway

AI should never replace your thinking — it should make your curiosity, collaboration, and joy of research more visible.

Use it to sharpen your writing, streamline your reporting, and illuminate your insights.
But never use it to bypass the reflective work that defines you as a research professional.

Because at the end of the day, AI doesn’t make research ethical — we do.

Article first published on 23/10/2025

Discover more from iDoGrants by Holly Zink
