Introducing TryCaSIE: the documentation tool for everyone else in human services

Most AI documentation tools on the market today were built for licensed clinicians in therapy or medical settings. Eleos, Mentalyc, ICANotes, SimplePractice, and TherapyNotes all assume a 50-minute session with a billable code attached and a progress note at the end. That covers one slice of case documentation. It leaves out most of the field.

Who gets left out of "AI for notes"?

Case managers writing service coordination notes. Family services coordinators documenting child welfare check-ins. Housing navigators writing intake assessments at 10 PM after a drop-in. Shelter staff writing shift summaries. Street outreach workers logging encounters in parking lots. Probation officers writing compliance notes between appointments. Eligibility workers, re-entry specialists, youth program coordinators, community health workers.

These workers write as many notes per week as a licensed therapist, sometimes more. Their notes get audited by the same funders and subpoenaed in the same court cases, but the tooling has passed them by.

Why doesn't a generic AI tool work here?

A worker could, in theory, paste rough notes into ChatGPT and ask it to format them. In practice, that fails on four fronts.

No BAA on consumer LLMs. A case note is protected health information the moment it mentions a client. Consumer ChatGPT, consumer Claude, and consumer Gemini do not sign Business Associate Agreements, so pasting PHI into them is a HIPAA violation. Workers know this. Most of them refuse to try.

No understanding of format variety. Human services uses at least eleven distinct note formats: SOAP, BIRP, GIRP, SIRP, DAP, DAOP, BPS, PCAP, FIRP, FIRPP, and a plain standard narrative. The right one depends on the discipline, the funder, and sometimes the state. A generic tool will confidently produce something that looks like a note and then fail audit because it used "Objective" where the reviewer expected "Observation."

No speaker separation. A lot of human services documentation comes from multi-party interactions like intake interviews, family meetings, housing assessments, and case conferences, where two, three, or four people are talking. Generic transcription collapses them into a single stream. TryCaSIE uses Azure Speech diarization for up to four speakers, so "Client reported..." stays distinct from "Worker observed..." and "Collateral contact stated..." (a short sketch of what diarized transcription looks like follows these four points).

No audit trail. A payer auditor, OCR investigator, or licensure board wants more than the note itself. They want the provenance: what the original input was, when it was edited, whether it was AI-generated, who finalized it. Without that record, the note is evidence of nothing.
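For readers who want to see what speaker separation means in practice, here is a minimal sketch of diarized transcription using the Azure Speech SDK's conversation transcription API. The key, region, and file name are placeholders, and this illustrates the underlying capability, not TryCaSIE's actual pipeline.

```python
# Minimal sketch: diarized transcription with the Azure Speech SDK.
# "YOUR_KEY", "YOUR_REGION", and "intake_interview.wav" are placeholders.
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="intake_interview.wav")

# ConversationTranscriber attaches a speaker label to each recognized phrase,
# which is what keeps "Client reported..." separate from "Worker observed..."
transcriber = speechsdk.transcription.ConversationTranscriber(
    speech_config=speech_config, audio_config=audio_config
)

done = False

def on_transcribed(evt):
    if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print(f"[{evt.result.speaker_id}] {evt.result.text}")

def on_stopped(evt):
    global done
    done = True

transcriber.transcribed.connect(on_transcribed)
transcriber.session_stopped.connect(on_stopped)
transcriber.canceled.connect(on_stopped)

transcriber.start_transcribing_async().get()
while not done:
    time.sleep(0.5)
transcriber.stop_transcribing_async().get()
```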

Format breadth is the capability, not a feature list

The easiest way to see what TryCaSIE is designed for is to look at three different workers on the same Tuesday afternoon.

A housing navigator just finished a drop-in visit with a client and needs to document a conversation that ranged across eviction paperwork, benefits recertification, and a safety concern about the client's roommate. They dictate a five-minute recording and choose Standard. TryCaSIE produces a chronological narrative that clearly distinguishes what the client reported from what the worker observed.

A school counselor sees a student for a weekly check-in. Their district wants GIRP format — Goals, Intervention, Response, Plan — because that is what the state education department reviews. They type three bullet points and choose GIRP. The note comes back with each section populated and phrased for an education-setting audience.

A shelter intake coordinator just finished a 40-minute assessment with a new resident. Their agency wants DAP format so it maps cleanly to their HMIS export. They record the whole assessment and choose DAP with comprehensive detail level. The note comes back long, thorough, and ready to paste into the case file.

Same tool, three different outputs, because each professional context has its own format requirements. Users can also define custom formats with up to eight sections each. The format is snapshotted into the note's metadata at generation time, so old notes survive even if the format definition is later deleted.
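To make the snapshot idea concrete, here is a rough sketch of a note carrying a frozen copy of the format it was generated against. The class and field names are illustrative assumptions, not TryCaSIE's actual schema.

```python
# Hypothetical sketch of snapshotting a format definition into a note's
# metadata at generation time. Names are illustrative, not TryCaSIE's schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class FormatSnapshot:
    name: str                    # e.g. "GIRP"
    sections: tuple[str, ...]    # custom formats allow up to eight sections
    snapshotted_at: str

@dataclass
class Note:
    structured_text: str
    format_snapshot: FormatSnapshot  # survives even if the live format is later deleted

girp = FormatSnapshot(
    name="GIRP",
    sections=("Goals", "Intervention", "Response", "Plan"),
    snapshotted_at=datetime.now(timezone.utc).isoformat(),
)
note = Note(structured_text="Goals: ...", format_snapshot=girp)
```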

How much help do you actually want?

A common worry in social work forums runs like this: "I don't want an AI laundering my voice out of my notes. My notes sound like me for a reason." It's a fair concern. Case notes are, in part, evidence of the worker's professional judgment, and if the language reads like a generic LLM wrote it, that evidence is weaker.

TryCaSIE exposes four levels of transformation. Light preserves the worker's word choices and just cleans up grammar and structure. Moderate balances cleanup with professional structure while keeping the author's voice recognizable. Polished fully transforms the input into formal documentation. Custom lets the worker supply their own transformation prompt. The worker decides how much help to accept.
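One way to picture the four levels is as a small mapping from level to the instruction that shapes the rewrite. The wording below is purely illustrative, not the product's actual prompts.

```python
# Illustrative only: one plausible way to express the four transformation
# levels as prompt fragments. Not TryCaSIE's actual prompt text.
TRANSFORMATION_LEVELS = {
    "light": (
        "Fix grammar, spelling, and paragraph structure only. "
        "Keep the author's original word choices and sentence rhythm."
    ),
    "moderate": (
        "Organize into the requested sections and clean up phrasing, "
        "but keep the author's voice recognizable."
    ),
    "polished": (
        "Rewrite fully as formal professional documentation in the requested format."
    ),
    # "custom" is supplied by the worker at note time rather than defined here.
}

def build_prompt(level: str, custom_instructions: str | None = None) -> str:
    instruction = custom_instructions if level == "custom" else TRANSFORMATION_LEVELS[level]
    return f"{instruction}\n\nStructure the following notes accordingly."
```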

The human-in-the-loop contract

A case note is a legal document, and an AI-drafted note can't be final until a qualified human has reviewed it. TryCaSIE treats that as a hard design constraint rather than a disclaimer in the footer.

Every AI-structured note opens as a draft. Copy-to-clipboard is locked until the worker explicitly finalizes the note. Every action (capturing the original input, editing the structured text, updating the title, assigning a client, finalizing, reopening, copying) is recorded in an audit trail with twelve distinct action types and six-year retention. If a note ever needs to be defended in an audit or a hearing, the record of how it came to be is already there.
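To make the provenance idea concrete, here is a rough sketch of what one audit entry could contain. The action names and fields are assumptions for illustration; the post only states that there are twelve action types and six-year retention, not the exact schema.

```python
# Rough sketch of an append-only audit entry. Action names and fields are
# illustrative; only "twelve action types" and "six-year retention" come
# from the post itself.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

class AuditAction(Enum):
    ORIGINAL_INPUT = "original_input"
    STRUCTURED_EDIT = "structured_edit"
    TITLE_UPDATE = "title_update"
    CLIENT_ASSIGNMENT = "client_assignment"
    FINALIZE = "finalize"
    REOPEN = "reopen"
    COPY = "copy"
    # ...the remaining action types are omitted in this sketch.

RETENTION = timedelta(days=6 * 365)  # six-year retention

@dataclass(frozen=True)
class AuditEntry:
    note_id: str
    action: AuditAction
    actor_id: str
    occurred_at: datetime
    ai_generated: bool  # was the recorded content produced by the model?

    @property
    def purge_after(self) -> datetime:
        return self.occurred_at + RETENTION

entry = AuditEntry(
    note_id="note-123",
    action=AuditAction.FINALIZE,
    actor_id="worker-42",
    occurred_at=datetime.now(timezone.utc),
    ai_generated=False,
)
```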

What's next

Format breadth was the first problem to solve. The next one is state-specific compliance: a Virginia BPS, a Texas BPS, and a California BPS all have different required elements, and a note that's "correct" in one jurisdiction can fail audit in another. More on that soon.