Every time I walk into a room, the energy is the same: curiosity, willingness and a stack of questions nobody feels entirely comfortable asking out loud.
“Is this going to cost a fortune?” “Am I already behind?”
These aren’t signs of resistance. They’re signs that people in our industry ask hard questions because they’re protecting their staff, their budgets and their communities. So, let’s answer the tough questions.
Is AI going to cost us a fortune?
Nope. Most major tools, including ChatGPT, Gemini and Claude, offer low-cost or nonprofit tiers with the security a business needs. No procurement process required. When you’re ready to expand, paid plans run $8 to $25 per person per month, which is less than most software your team already pays for, and the return shows up fast.
The real cost isn’t the tool. It’s the time your team spends doing work AI could handle while the humans in the building wait for their attention.
What’s the difference between free and paid? When should we upgrade?
Free tiers cover the basics: chat, content drafts and Q&A, which is more than enough to get comfortable. Paid versions unlock better models, document uploads, longer memory and custom instructions that let you train the tool on your organization’s voice.
My rule: let your team play. The moment someone says, “I keep hitting the limit” or “I wish it remembered what I told it last time,” that’s your signal. Upgrade the people using it and let their results make the case for everyone else.
How do I get my team on board when some of them want nothing to do with it?
This is the question behind every AI question, and it has almost nothing to do with technology. Staff in our industry are already stretched impossibly thin. When AI appears without context, it doesn’t feel like relief. It feels like one more thing on a plate that was already full three things ago.
The fix is change management, and it doesn’t have to be complicated. Start with the why. Don’t say, “This helps the organization.” Say, “This helps you, and here’s how.” Make space for real feelings. Celebrate the early wins loudly, because when one person cuts a two-hour task to twenty minutes, that story becomes permission for everyone else to try. You don’t need unanimous buy-in. You need a few people willing to go first.
Should we be worried about AI using our members’ data?
Most major platforms offer settings or plans that don’t use your inputs to train their models, but a responsible approach starts with an internal policy that defines what’s okay to put in and what isn’t. Member names and personal health details? Out. Anonymized survey feedback or class descriptions? Fair game.
Here’s the part nobody wants to talk about: your staff are already using these tools, some openly, some quietly. A clear policy doesn’t just protect your members. It gives your team boundaries they can trust instead of guessing in the dark.
Is AI bad for the environment?
AI does consume energy, and that’s worth acknowledging honestly. But major providers are investing heavily in renewables and efficiency, and the footprint of using a tool daily is a fraction of what went into building the model itself. For a rec center, the responsible approach is to use it intentionally rather than reflexively. That’s good stewardship of resources, digital and otherwise.
We tried it and everything came out generic. What went wrong?
Nothing went wrong with the tool. Everything went wrong with the ask.
AI doesn’t know about your decades-long Friday water day tradition or the teen program born from a conversation with a kid who had nowhere else to go. You must bring that context to the table.
The difference between generic output and something that sounds like you is specificity. Tell it who you are, who you’re talking to and what matters. Paste in previous work and ask it to match the tone. The more you put in, the more you get back. That’s not a workaround. That’s how the tool was designed to work.
Our board wants an AI strategy. Where do we even start?
Start where you’d start with any strategy: with the problem you’re trying to solve.
Boards love AI in the abstract, but what they need is a grounded story about where the bottlenecks live and where staff time is being eaten alive. Name those problems specifically, then explain how AI could relieve the pressure.
Is it okay to use AI for grant writing?
Most funders don’t care how you drafted the narrative. They care that it’s accurate, compelling and grounded in your actual work. AI is excellent at the structural heavy lifting: organizing requirements, translating raw notes into polished language and maintaining consistency across long documents. You bring the story, the data and the hard decisions. The tool gets it on paper faster and more clearly. That’s not cutting corners. That’s working smarter on behalf of the mission you’re funded to serve.
One caveat that I will never stop repeating: always review AI-generated content for accuracy. These tools can sound wildly confident while being completely wrong. Your institutional knowledge is the final filter, every single time.
How do we keep from sounding like everyone else?
This fear is well-placed, and it’s one I take seriously in my own work. AI’s default voice is competent, safe and utterly forgettable. If you accept the first draft as the final draft, your voice will start to flatten, and your members will notice before they can even name what feels different.
The solution isn’t avoiding AI. It’s refusing to let it have the last word. Use it to get past the blank page, then spend your energy on the parts that make it yours: the stories, the humor, the quirks and the warmth that no algorithm can replicate.
My personal rule: if I wouldn’t say it out loud to a member, I don’t send it. I don’t care how polished it sounds.
What’s one thing we can do this week?
Pick one repetitive task that doesn’t require creative genius, just time.
Open an AI tool and give it real context — who you are, what the task is, what the tone should sound like and any examples of previous work. Let it take the first pass, and don’t judge it by the first output. Judge it by the third, after you’ve refined your ask and shaped it closer to what you need.
Then notice how you spent the time you got back. That’s the real measure. Not whether the tool wrote a perfect email, but whether you were able to walk out to the gym floor, follow up with a family, or finally tackle the project that’s been collecting dust on your desk for three weeks. That’s not efficiency for efficiency’s sake. That’s technology in service of the work we’re here to do.
The questions won’t stop, and they shouldn’t. Every good question your team asks about AI is proof they’re thinking carefully about what this tech means for the people they serve. But at some point, those questions must lead somewhere: a first step, a small experiment, a few minutes returned to the work that actually matters.
Our industry has never been about moving fast. It’s been about moving with purpose, and that’s exactly the energy needed from us in this moment.
