February’s TechSoup Connect Canada event “How to Build AI Tools That Save Time and Money” was EPIC! We had an insightful discussion with experts Angeline Robertson and Matthew Lichty from Stand.earth Research Group, exploring how nonprofits can use AI and machine learning for advocacy research.
Here are the top 10 takeaways from the session, with examples from the event transcript to help you integrate AI into your research workflows:
1. Have an AI Data Policy
- Establish clear guidelines on what data can and cannot be used in AI-assisted research.
- “We have an AI policy at Stand.earth so that we do not use any confidential, personal, or sensitive information. We’re really looking at publicly available information online for most of our data structuring work.” – Angeline Robertson
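A policy like this can be backed up with a simple automated check. The sketch below is our own illustration, not Stand.earth's actual tooling: a pre-flight filter that flags text containing likely personal information (emails, phone numbers) before it ever reaches an AI tool. The patterns are deliberately minimal; a real policy check would cover far more.

```python
import re

# Illustrative patterns only -- a real sensitive-data policy check
# would cover many more identifier types than these two.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
```

If `flag_sensitive` returns anything, the document stays out of the AI workflow until a human clears it.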
2. Start with Something You Know Well but Want to Do Faster
- Identify a repetitive, time-consuming task AI can streamline.
- “We weren’t looking for AI tools to reinvent the wheel. We just wanted to integrate AI into our methodologies in ways that let us get more done, faster.” – Angeline Robertson
3. Hire a Data Coach and Focus on Training Your Team
- Instead of outsourcing, train your team to integrate AI into research workflows.
- “Instead of hiring a firm to build a tool, we hired a data science coach to teach us how to do it ourselves. This meant we could keep learning and improving our methods rather than relying on outside consultants.” – Angeline Robertson
4. Start with Data Structuring and Use Information Available Online
- AI excels at organizing and categorizing unstructured data from PDFs, reports, and websites.
- “One of the biggest gains for us has been unlocking unstructured data stuck in PDFs—things we used to have to comb through manually. AI helps us turn that mess into structured data we can actually use.” – Matthew Lichty
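To make "turning a mess into structured data" concrete, here is a minimal sketch of the idea: pulling (organization, amount) pairs out of free-form report text. The sentence format and field names are invented for illustration; real PDF text would first need extraction with a library such as pdfplumber, and the speakers' actual pipelines are more involved.

```python
import re

# Invented sentence pattern for illustration: "<Org> provided|financed $<amount>".
LINE_PATTERN = re.compile(
    r"(?P<org>[A-Z][\w& ]+?) (?:provided|financed) \$(?P<amount>[\d,]+)"
)

def structure(text: str) -> list[dict]:
    """Turn free-form report sentences into structured rows."""
    rows = []
    for m in LINE_PATTERN.finditer(text):
        rows.append({
            "org": m.group("org").strip(),
            "amount": int(m.group("amount").replace(",", "")),
        })
    return rows
```

Once the data is in rows like these, it can go straight into a spreadsheet or database instead of being re-read by hand.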
5. Harness Your Domain Knowledge
- AI can process large datasets, but human expertise is essential for quality control.
- “We didn’t just plug in AI and hope for the best. We used years of experience analyzing bank policies to train AI on what actually matters. The human touch makes all the difference.” – Matthew Lichty
6. Use More Than One LLM
- Diversifying models reduces bias and enhances accuracy.
- “We’ve started using Perplexity, which lets us test different AI models side by side. This helps us spot inconsistencies and reduce the risk of relying too much on one tool.” – Matthew Lichty
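The cross-checking idea can be sketched in a few lines. The `answers` dict below stands in for real API responses from different providers; the point is the consensus logic, not any particular client library.

```python
from collections import Counter

def consensus(answers: dict[str, str]) -> tuple[str, bool]:
    """Return (majority answer, whether every model agreed).

    answers maps a model name to that model's answer; answers are
    normalized so trivial casing differences don't count as disagreement.
    """
    counts = Counter(a.strip().lower() for a in answers.values())
    top, _ = counts.most_common(1)[0]
    return top, len(counts) == 1
```

When the second value comes back `False`, that is a signal to have a human look more closely before trusting the result.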
7. Remember: Good Prompt Engineering Requires Good Writing
- The way you phrase prompts impacts AI output quality.
- “Writing clear, specific prompts is half the battle. If you ask AI a vague question, you’ll get a vague answer. We spend a lot of time refining our prompts to get the best results.” – Angeline Robertson
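One way to enforce specificity is to template it. The fields below (task, scope, output format) are our own framing of what a "clear, specific" prompt tends to include, not a template from the event.

```python
def build_prompt(task: str, scope: str, output_format: str) -> str:
    """Assemble a prompt that names the task, its scope, and the
    expected answer format instead of asking a vague open question."""
    return (
        "You are a research assistant for a nonprofit.\n"
        f"Task: {task}\n"
        f"Only consider: {scope}\n"
        f"Answer format: {output_format}"
    )
```

A template like this also makes prompts easy to version and refine over time, which is where much of the effort goes.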
8. LLMs Have Short Attention Spans—Break It Down for Them
- Complex tasks should be split into smaller sub-tasks for better accuracy.
- “AI models don’t do well with long, complicated tasks. We had to break things into smaller steps, like focusing on one policy classification at a time, to get better results.” – Matthew Lichty
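Breaking a long document into model-sized pieces is the mechanical half of this advice. A common pattern is overlapping chunks, so context at a chunk boundary isn't lost; the sizes below are illustrative, not recommendations.

```python
def chunk(words: list[str], size: int = 500, overlap: int = 50) -> list[list[str]]:
    """Split a word list into overlapping chunks of at most `size` words,
    where consecutive chunks share `overlap` words at the boundary."""
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]
```

Each chunk can then be sent through one small, focused sub-task, e.g. one policy classification at a time, as Matthew describes.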
9. Use Coded Approaches to Analyze at Scale
- GUI-based tools (e.g., ChatGPT, Perplexity) are good for small tasks, but scripted approaches are what make analysis at scale practical.
- “We started with the ChatGPT interface, but once we needed to analyze hundreds of documents, we wrote Python scripts to automate the process. That shift saved us an enormous amount of time.” – Matthew Lichty
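The skeleton of such a script is simple. In this sketch, the `analyze` callable is a stand-in for whatever an AI API call would return for each document; everything else is just looping over files and collecting results into a table.

```python
import csv
import io
from typing import Callable

def batch_analyze(docs: list[tuple[str, str]],
                  analyze: Callable[[str], str]) -> str:
    """Run analyze() over every (name, text) pair and return CSV output."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["document", "result"])
    for name, text in docs:
        writer.writerow([name, analyze(text)])
    return buf.getvalue()
```

Swap the placeholder `analyze` for a real model call and point the loop at a folder of extracted documents, and hundreds of files can be processed unattended.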
10. Keep the Human in the Loop!
- AI is a tool—not a replacement for human judgment.
- “We’re not trying to replace researchers. AI helps us clear out the grunt work so we can spend more time thinking critically and making decisions.” – Angeline Robertson
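In practice, "keeping the human in the loop" often means routing uncertain outputs to a person. The record shape and the 0.8 threshold below are assumptions for the sketch, not anything described at the event.

```python
def route(records: list[dict], threshold: float = 0.8) -> tuple[list[dict], list[dict]]:
    """Split AI-labeled records into (auto-accepted, needs human review)
    based on the model's reported confidence score."""
    auto, review = [], []
    for r in records:
        (auto if r["confidence"] >= threshold else review).append(r)
    return auto, review
```

The AI clears the grunt work; anything borderline lands in a queue where a researcher makes the final call.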
Final Thoughts
The session showed that AI isn’t about replacing researchers—it’s about freeing up time for meaningful work. By using AI for structuring, analyzing, and summarizing data, advocacy organizations can dig deeper into their research and push for change more effectively.
I hope to see you at our next event!