Governed AI retrieval, not a chatbot

Your knowledge base, answering with citations.

Plug in Solr, your APIs, your websites, or upload files. IntelloWork answers user questions grounded in your sources — with citations, ACLs, and the audit trail your compliance team needs. Web widget, WhatsApp, Slack, Teams. Same answer, every channel.

Works on: Web widget · WhatsApp · Slack · Microsoft Teams


Live · Web + WhatsApp
Cited · 4 sources
intellowork.com/lab

In /admin/auth-providers, add an IdP and map your group attribute to a role. IntelloWork enforces the mapping on every login.

SSO setup guide §2.1 · Roles & permissions §4 · IdP onboarding, step 3
Why IntelloWork

Built for content owners. Not for ML researchers.

You bring the docs. We handle the chunking, embeddings, retrieval policy, citations, and the seven other things that decide whether your bot is trustworthy.

Citations, not hallucinations

Every answer is grounded in retrieved chunks. We cite the source document, the section, and the paragraph — never an opaque hand-wave.

ACLs that survive retrieval

Source-level groups travel into the index, so a guest never sees a confidential page even if their question phrases it perfectly.

Channels included

Web widget, WhatsApp, Slack — same retrieval, same governance. Configure once and route messages by phone number, channel, or pipeline.

Live demo

Click a question. Watch your bot answer.

Three real IntelloWork conversations — answers grounded in the source documents, every claim cited. No setup required, no fake content. This is what your users will see.

intellowork.com/your-workspace
 
Demo answers are scripted. Your live bot streams from your indexed sources. Try with your own content.

How it works

Three planes. One workspace. Zero glue code.

IntelloWork separates content, retrieval, and surface so each can evolve independently. The dashboard shows you exactly which plane is healthy at any given moment.

01 · Content plane

Connect your content


Solr indexes, generic JSON APIs, websites, file uploads. Each connector self-describes; we sample your schema, suggest field maps, and warn you about the fields that look like ACLs.

  • Auto-detect content type, title, URL, body fields
  • Field-level no-summary rules for compliance docs
  • Per-source ACL on `groupname_ss` (or your equivalent)
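A minimal sketch of what source-level ACL enforcement can look like at query time. The field name `groupname_ss` comes from the bullet above; the function, chunk shape, and "empty ACL means public" policy are illustrative assumptions, not IntelloWork's actual API:

```python
def filter_by_acl(chunks, user_groups, acl_field="groupname_ss"):
    """Drop any retrieved chunk whose ACL groups don't intersect the user's.

    A chunk with an empty or missing ACL field is treated as public here;
    a stricter deployment might deny by default instead.
    """
    allowed = []
    for chunk in chunks:
        acl = set(chunk.get(acl_field, []))
        if not acl or acl & set(user_groups):
            allowed.append(chunk)
    return allowed

chunks = [
    {"id": "kb-1", "groupname_ss": ["staff"]},
    {"id": "kb-2", "groupname_ss": ["guest", "staff"]},
    {"id": "kb-3"},  # no ACL field: public in this sketch
]

# A guest sees only public and guest-tagged chunks.
print([c["id"] for c in filter_by_acl(chunks, ["guest"])])  # → ['kb-2', 'kb-3']
```

The point is that the filter runs on the index side, before generation, so a confidential chunk never reaches the prompt at all.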

02 · Retrieval plane

Compose retrieval pipelines


Vector + keyword hybrid by default. Tune top-k, rerank, and confidence thresholds in the Configurations editor — or use the workspace defaults that ship from day one.

  • Vector + lexical hybrid out of the box
  • Confidence-aware: low-confidence answers carry a disclaimer
  • Named prompts let you reuse a prompt across pipelines
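A minimal sketch of confidence-aware hybrid ranking, assuming vector and lexical scores normalized to [0, 1]. The weight, threshold, and function names are invented for illustration; the real knobs live in the Configurations editor:

```python
def blend_scores(vec, lex, alpha=0.6):
    """Weighted hybrid of vector and lexical relevance, both assumed in [0, 1]."""
    return alpha * vec + (1 - alpha) * lex

def answer_with_confidence(hits, threshold=0.45, top_k=3):
    """Keep the top-k hybrid hits; flag the answer when even the best is weak."""
    ranked = sorted(hits, key=lambda h: blend_scores(h["vec"], h["lex"]), reverse=True)
    top = ranked[:top_k]
    best = blend_scores(top[0]["vec"], top[0]["lex"]) if top else 0.0
    return {
        "chunks": [h["id"] for h in top],
        "low_confidence": best < threshold,  # triggers the disclaimer
    }

result = answer_with_confidence([
    {"id": "faq-12", "vec": 0.9, "lex": 0.2},
    {"id": "kb-7", "vec": 0.4, "lex": 0.9},
])
print(result)
```

An empty hit set scores 0.0 and is flagged low-confidence, which is how "admit you don't know" falls out of the same threshold rather than needing a special case.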

03 · Surface plane

Ship to every channel


One pipeline can power a website widget, a WhatsApp number, a Slack workspace — all governed by the same retrieval policy. Per-channel branding, per-channel feature flags.

  • Embed snippet for any website, mobile-ready
  • WhatsApp Cloud API with signed-webhook delivery
  • Audit log on every conversation, every channel
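Signed-webhook delivery works the way Meta Cloud API webhooks generally do: each POST carries an `X-Hub-Signature-256` header holding an HMAC-SHA256 of the raw request body, keyed with your app secret. A minimal verification check, assuming your web framework hands you the raw body and the header value:

```python
import hashlib
import hmac

def verify_signature(app_secret: str, raw_body: bytes, header: str) -> bool:
    """Check an X-Hub-Signature-256 header: 'sha256=' + HMAC-SHA256(body)."""
    expected = "sha256=" + hmac.new(
        app_secret.encode(), raw_body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, header or "")

secret, body = "app-secret", b'{"entry": []}'
good = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, good))   # True
print(verify_signature("wrong", body, good))  # False
```

Requests that fail the check should be rejected before any retrieval happens.
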
Who IntelloWork is for

Three teams. One platform. Same governance.

IntelloWork lands fastest where there's already a lot of content and a lot of repeated questions. If that's your team, it'll feel obvious in week one.

Customer Support

Teams running Zendesk, Intercom, or in-house ticketing.

  • Self-serve refunds & cancellations
  • Deflect L1 tickets with citations
  • WhatsApp + web widget, same answers

Cuts ticket volume 40-60% on common queries.

Product Documentation

Doc teams maintaining a public knowledge base or developer hub.

  • Conversational doc search
  • Cite the section, not just the page
  • Embed on every docs page in 5 minutes

Search-to-answer in under 3 seconds, with sources.

Internal Helpdesk

IT, HR, and ops teams handling employee questions.

  • Slack / Teams native answers
  • Role-aware replies (IT sees IT)
  • ACL-respecting; HR docs stay HR-only

L1 questions resolved before they hit a person.

Channels

One pipeline. Every channel your users live in.

Configure retrieval once. Route messages by channel, phone number, or workspace. Each surface gets the same governance — ACLs, audit log, citation policy — so a bot in one place can't leak content from another.

Web Widget

Live

Embed-script chat for any site. White-label branding, ACLs, citation chips.

WhatsApp

Live

Meta Cloud API. Verified webhooks, voice-note transcription, multi-number routing.

Slack

Beta

Bot user with thread-aware replies. ACL-respecting; honors workspace roles.

Microsoft Teams

Coming soon

Native Teams app with adaptive cards. SSO via the existing tenant IdP.

Voice

Beta

Speech-in / speech-out for the widget. Whisper transcription, low-latency streaming.

API

Live

Direct chat API for custom surfaces — mobile apps, in-product help, agents.
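As an illustration only, a custom surface might assemble its chat request like this. The field names and the `build_chat_request` helper are hypothetical; consult the actual API reference for the real endpoint and payload shape:

```python
import json

def build_chat_request(workspace, pipeline, message, session_id=None):
    """Assemble a chat request body (hypothetical field names)."""
    payload = {
        "workspace": workspace,
        "pipeline": pipeline,
        "message": message,
        "stream": True,  # stream tokens so the first one lands early
    }
    if session_id:
        payload["session_id"] = session_id  # continue an existing conversation
    return json.dumps(payload)

body = build_chat_request("acme", "support-bot", "How do I reset my password?")
print(body)
```
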

Built for enterprise

The boring safety stuff, done right.

We'd rather your security team approve us in week one than have you discover the compliance gap in week six. Here's what's already in the box.

SOC 2-aligned

Tenant isolation, encryption at rest + in transit, change management.

GDPR-ready

Data residency, right-to-erasure, retention policy, DPA on request.

Bring your own LLM

OpenAI, Bedrock, Anthropic, or self-hosted Whisper. Your keys, your providers.

Source-level ACLs

Document permissions travel into the index. Guests can't read staff content.

Audit log on every action

Every config change, every reset, every conversation. Queryable, exportable.

SSO & RBAC

Keycloak, Google, Azure AD, Okta, generic OIDC. Group-to-role mapping built-in.
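A sketch of what group-to-role mapping amounts to at login time: the IdP's group claims resolve to a workspace role, highest privilege wins. The group names, role names, and precedence below are invented for illustration; the real mapping is configured per provider:

```python
# Hypothetical mapping from IdP group claims to workspace roles.
GROUP_TO_ROLE = {
    "intellowork-admins": "admin",
    "content-editors": "editor",
    "everyone": "viewer",
}

def resolve_role(idp_groups, default="viewer"):
    """Grant the most privileged role any of the user's groups maps to."""
    precedence = {"admin": 3, "editor": 2, "viewer": 1}
    roles = [GROUP_TO_ROLE[g] for g in idp_groups if g in GROUP_TO_ROLE]
    return max(roles, key=precedence.get, default=default)

print(resolve_role(["content-editors", "everyone"]))  # editor
```
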

Trusted by content & support teams at

Contiem
Acme Co
North Bay
Helix Labs
Spire
Atlas
Customers

What teams say after a month with IntelloWork.

We replaced two L1 support shifts with IntelloWork in the first month. The citation chips are what sold it to the team — every answer is auditable, no one is hand-waving.

Priya R.

Head of Customer Operations · Contiem

Setup took an afternoon. Our docs index, a few mapping rules, and the WhatsApp number we already had. Day one we were answering with citations.

Jordan F.

Engineering Manager · Helix Labs

The ACL story is the part our security team cared about. Source-level groups travel into retrieval — guests can't see internal docs even if they ask the right question.

Maya K.

InfoSec Lead · Spire

We compared 4 RAG vendors. IntelloWork was the only one that admitted when it didn't know — the others would hallucinate a confident answer with no source.

Daniel S.

Product Lead · Atlas

FAQ

Questions buyers actually ask.

We left out the marketing-y ones. If something below isn't covered, email us — we'll answer same day during business hours.

How is pricing structured?
Per-workspace, with usage-based add-ons for high-volume tenants. We invoice monthly. Custom enterprise contracts available — talk to us about volume + multi-year terms.
Where does my data live? Who can read it?
Your indexed content stays in the workspace you control. Embeddings live in your tenant's vector partition. IntelloWork staff never read tenant content; the only access path is admin tooling that's audit-logged. Data residency: ap-south-1 by default; EU/US options on request.
What's the answer latency in production?
Median is 2.1–2.8s end-to-end (retrieval + LLM). 95th percentile under 5s. The chat is streamed, so the first token typically lands in 800ms — the bot feels live before the full answer arrives.
Can I use my own LLM provider?
Yes. OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, or self-hosted models that speak the OpenAI API. Bring your own keys; we never see them after configuration. You can mix providers per pipeline.
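To illustrate mixing providers per pipeline: any backend that speaks the OpenAI API differs only in base URL, key, and model, so per-pipeline provider config reduces to a small lookup table. Every name, URL, and model string below is made up for the sketch:

```python
# Hypothetical per-pipeline provider table; real settings live in the dashboard.
PIPELINES = {
    "support-bot": {
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o-mini",
    },
    "internal-helpdesk": {
        "base_url": "http://llm.internal:8000/v1",  # self-hosted, OpenAI-compatible
        "model": "llama-3-70b",
    },
}

def client_config(pipeline: str) -> dict:
    """Return a copy of the provider settings for one pipeline."""
    return dict(PIPELINES[pipeline])

print(client_config("internal-helpdesk")["model"])  # llama-3-70b
```
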
How many channels can a single bot serve?
Unlimited on enterprise. Starter and team plans cap at 2 and 5 channels respectively. A 'channel' is one Web widget, one WhatsApp number, one Slack workspace, etc. Same pipeline can answer across all of them.
What happens if the bot gets a question wrong?
/conversations surfaces every Q&A with citations and confidence. The 'Debug in Lab' link replays the failing query with full retrieval trace so you can see whether retrieval, the prompt, or the model misfired. Most fixes are content-side: better mappings, a missing source, an out-of-date doc.

Ready to ship a bot your team actually trusts?

Access is invite-only while we onboard customers carefully. Tell us about your use case and we'll get you into a workspace within a day.