July 2025 Meetups
July 17: Coffee Hour - Preparing Content for AI
Early Observations and Use Cases
💬 Kalyn Howard (Westerra Credit Union)
- Shared that GenSearch has been live internally since April.
- Contributors now revise inaccurate articles spotted by AI completions.
- AI struggles with interpreting complex tables in financial content.
- Kalyn uses a monthly Q&A session to refresh contributor knowledge and best practices.
- Emphasized time constraints as the greatest challenge for AI-related content upkeep.
💬 Rod West (Aria Systems)
- Highlighted hallucination issues in API documentation (e.g., AI misnaming functions).
- Resolved by rewriting descriptions to include exact API names.
- Plans to use GenSearch as a quality-check tool after documentation updates.
- Suggested testing prompt-writing skills in future hiring assessments.
💬 Marc Noble (Waters Corporation)
- Observed that GenAI prefers long-form narrative over traditional KCS bullets.
- Reported difficulty measuring content usefulness without KCS metrics.
- Discovered certain articles are disproportionately cited by GenAI, even when irrelevant.
- Currently in a fact-finding stage to identify formatting standards for AI success.
Templates, Formatting, and Chunking
💬 Gray Shekkola (Thermo Fisher Scientific)
- Implemented template changes to display page summaries in all articles.
- Encouraged contributors to include summaries using training site guidance.
💬 Adam Allen (Netsmart)
- Shared excitement about launching GenSearch sitewide.
- Asked about chunking mechanics: how headers and summaries influence what GenAI selects.
- Later clarified that reused sections from “No LLM Index” articles can still be indexed if embedded elsewhere.
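The chunking question raised here can be pictured with a minimal sketch. This is an illustrative split-on-heading strategy only; GenSearch's actual chunking logic is not documented in these notes, so the function and its behavior are hypothetical.

```python
import re

def chunk_by_headers(markdown: str) -> list[dict]:
    """Split a markdown article into chunks at each heading.

    Each chunk keeps its heading as context, which is one way a
    retrieval system could use headers (and any summary text under
    them) to decide what to surface. Sketch only, not GenSearch's
    actual implementation.
    """
    chunks = []
    current_header, current_lines = "Introduction", []
    for line in markdown.splitlines():
        if re.match(r"^#{1,6}\s", line):  # a new heading starts a new chunk
            if current_lines:
                chunks.append({"header": current_header,
                               "text": "\n".join(current_lines).strip()})
            current_header, current_lines = line.lstrip("# ").strip(), []
        else:
            current_lines.append(line)
    if current_lines:
        chunks.append({"header": current_header,
                       "text": "\n".join(current_lines).strip()})
    return chunks

article = """# Resetting Your Password
Summary: How to reset a password.

## Step-by-step
1. Open Settings.
2. Choose Reset.
"""
for c in chunk_by_headers(article):
    print(c["header"], "->", c["text"][:30])
```

Under this model, a page summary placed directly under the top heading travels with that chunk, which is one plausible reason summaries influence what gets selected.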
Demonstration: Using the AI Editor
Daniel Dalessio showcased how the AI Editor Tool:
- Converts complex tables into readable articles using structured prompts.
- Summarizes articles for faster indexing and improved comprehension.
- Helps content creators test and refine GenAI-ready formatting with minimal effort.
Final Tips & Wrap-Up
Kalyn Howard
- Reiterated the value of "No LLM Index" tags for excluding non-relevant content (e.g., release notes).
- Warned against vague hyperlinks like "click here": AI won't understand or surface these properly.
Adam Allen
- Backed up Kalyn’s point with a general accessibility article discouraging ambiguous hyperlinking.
Daniel Dalessio
- Closed by emphasizing iterative testing, time investment, and the evolving nature of best practices.

July 24: Creating Prompt and Personas in GenSearch
Prompt Engineering in GenSearch
- Cody Sackett (Expert) led the session, guiding participants through:
  - What prompt engineering is and why it matters
  - Components of a strong GenSearch prompt: role, context, instructions, input data (kernels), and output indicators
  - The importance of clarity, precision, and negative prompting to improve AI output
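The five components named in the session can be sketched as a simple prompt assembler. The field names and example text below are illustrative assumptions, not GenSearch's actual prompt schema.

```python
def build_prompt(role, context, instructions, kernels, output_indicator):
    """Assemble a prompt from the components discussed in the session:
    role, context, instructions, input data (kernels), and an output
    indicator. Negative prompting ("do not ...") lives in the
    instructions component. Generic sketch, not GenSearch's schema."""
    kernel_text = "\n---\n".join(kernels)  # retrieved article snippets
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Instructions: {instructions}\n"
        f"Input data:\n{kernel_text}\n"
        f"Output: {output_indicator}"
    )

prompt = build_prompt(
    role="You are a support agent for a credit union.",
    context="Answer using only the retrieved articles below.",
    instructions="Be concise. Do not invent account features.",  # negative prompting
    kernels=["Article 1: How to reset online banking passwords..."],
    output_indicator="A short, step-by-step answer.",
)
print(prompt)
```

Keeping the components separate like this makes it easy to change one element at a time, which matches the one-change-per-iteration approach described below.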
Customer Insights and Use Cases
- Chris Blad (ETC) shared that their team:
  - Makes one change at a time and waits 1–2 weeks to evaluate outcomes
  - Relies on tweaking kernel counts and threshold settings to improve answer accuracy from highly technical documents
- Noted that some answers live in only a single article, requiring precision in retrieval
Interactive Q&A and Feature Requests
- Jessica Betterly (Sylogist) asked about a prompt library for newer users—Sharon encouraged the community to post examples in the online forum.
- Kalyn Howard (Westerra Credit Union) asked about revision history for persona settings. This sparked a round of feedback from Jessica Betterly, Frank Tagader (Sylogist), Adam Allen, and Kalyn Howard, who requested:
  - Version control for prompts
  - Ability to comment on persona edits
  - A more seamless way to tie changes to completion reports
  - Easier export and user mapping of GenSearch queries


