A Complete AI Chatbot & Knowledge Base Platform

AI Assistant is a chatbot management platform built into the Noova admin. Each chatbot is provisioned as a Gradient agent endpoint with a selected LLM model, configurable parameters (maxTokens, temperature, topP, topK), and optional knowledge base attachments. Chat sessions support both streaming and non-streaming modes. The billing system tracks token usage per message with separate input/output pricing, charges wallets automatically, and supports grant-based quota distribution across teams. All access is controlled through 20 granular permissions covering chatbot and knowledge base operations.
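For orientation, a chatbot record along these lines might be modeled as in the sketch below. The type and field names (ChatbotConfig, params, and so on) are illustrative assumptions rather than the platform's actual schema, though attachedKbIds, courseIds, and the parameter ranges follow the descriptions in this document.

  // Illustrative sketch only - field names are assumptions, not the platform's actual schema.
  type ChatbotScope = "public" | "enrolled" | "course";

  interface ModelParams {
    maxTokens: number;   // 128-16,384
    temperature: number; // 0-1
    topP: number;        // 0-1
    topK: number;        // 0-10
  }

  interface ChatbotConfig {
    name: string;
    type: "system" | "user";
    scope: ChatbotScope;
    courseIds: string[];      // only relevant when scope === "course"
    model: string;            // e.g. "gpt-4o" or "claude-sonnet-4"
    params: ModelParams;
    attachedKbIds: string[];  // knowledge bases used as grounding context
  }

  const hrPolicyBot: ChatbotConfig = {
    name: "HR Policy Assistant",
    type: "user",
    scope: "public",
    courseIds: [],
    model: "claude-sonnet-4",
    params: { maxTokens: 2048, temperature: 0.2, topP: 0.9, topK: 5 },
    attachedKbIds: ["kb-hr-handbook"],
  };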

Multi-Provider Model Selection

Select from three providers - OpenAI (GPT-4o, GPT-5, o1, o3), Anthropic (Claude Opus 4, Sonnet 4, Claude 3.5/3.7), and DigitalOcean Gradient (Llama 3.3 70B, DeepSeek R1, Mistral Nemo) - each with model-specific pricing.

Document Embedding Pipeline

Knowledge bases use 3 embedding models (GTE Large EN v1.5, All MiniLM L6 v2, Multi QA MPNet Base) with async indexing jobs. Upload documents from DigitalOcean Spaces, track indexing status, and manage data sources.

Token Economy

VND-denominated wallets with separate per-1K token pricing for input and output. Usage logs track costs per chatbot and per user. A minimum balance floor (default 50 VND) prevents overspending.
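As a rough sketch of how separate input/output rates turn into a wallet charge (the rate values and function names below are illustrative assumptions, not the platform's published pricing):

  // Hypothetical cost calculation - rates and names are illustrative.
  interface ModelRates {
    inputPer1k: number;  // VND per 1,000 input tokens
    outputPer1k: number; // VND per 1,000 output tokens
  }

  const MIN_BALANCE_VND = 50; // default minimum balance floor

  function messageCost(inputTokens: number, outputTokens: number, rates: ModelRates): number {
    return (inputTokens / 1000) * rates.inputPer1k + (outputTokens / 1000) * rates.outputPer1k;
  }

  function canChat(walletBalance: number): boolean {
    // Chat is blocked once the wallet drops below the configured floor.
    return walletBalance >= MIN_BALANCE_VND;
  }

  // Example: 1,200 input tokens and 800 output tokens at assumed rates.
  const cost = messageCost(1200, 800, { inputPer1k: 30, outputPer1k: 90 });
  console.log(cost); // 36 + 72 = 108 VND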

20 Granular Permissions

Chatbot permissions (View All/Shared/Own, Create, Edit, Delete, Use, Train, Analytics) and Knowledge Base permissions (View, Create, Edit, Delete, Upload, Analytics) - all role-based.

What AI Assistant Enables

Contextual Employee Support

Deploy chatbots grounded in your organization's knowledge base - company policies, HR procedures, training materials, and FAQs. Employees get answers sourced from your actual documents.

Course-Embedded AI Help

Attach chatbots to specific courses with course-scoped visibility. Learners get AI assistance within their learning context, with the chatbot trained on course-relevant knowledge bases.

Controlled AI Spending

Manage AI costs with token-based billing, wallet balance tracking, and grant quotas. Distribute budgets across teams with Equal Split or Shared Pool modes and per-member caps.

Model Flexibility

Choose the right model per chatbot - use GPT-4o for complex reasoning, Claude for nuanced responses, Llama for cost efficiency, or DeepSeek for specialized tasks. Switch models without rebuilding the chatbot.

Who Uses AI Assistant

System Administrators

Create and configure chatbots, manage knowledge bases, set up billing wallets, configure grant quotas, and control permissions. Monitor platform-wide usage and costs.

Training Managers

Deploy course-scoped chatbots that assist learners with course content. Attach relevant knowledge bases to provide contextual help within specific training programs.

HR Teams

Build chatbots trained on HR policies, benefits documentation, and procedures. Provide employees with instant answers to common HR questions from authenticated knowledge bases.

Finance & Operations

Monitor AI spending through wallet dashboards, set budget limits with grant quotas, track per-chatbot costs, and optimize model selection for cost efficiency.

How Organizations Use AI Assistant

HR Policy Knowledge Bot

An organization wanted employees to get instant answers about company policies, leave procedures, and benefits - without waiting for HR staff or searching through document folders.

  • Created knowledge base with HR handbook, leave policy, benefits guide, and onboarding documents uploaded from DigitalOcean Spaces
  • Deployed chatbot with Claude Sonnet 4 model for nuanced policy interpretation
  • Set scope to 'public' so all employees can access from the platform
  • Configured widget with company brand color and welcome message explaining available topics
  • Token usage tracked per department through grant quotas with Shared Pool Per Cap mode

Course-Specific Learning Assistant

A training team needed AI assistants embedded in specific courses to help learners with questions about course materials, without exposing unrelated content.

  • Created separate knowledge bases per course - each containing that course's lessons, reference materials, and FAQs
  • Deployed chatbots with scope 'course' and linked to specific courseIds
  • Selected GPT-4o mini for faster response times and lower per-token costs for high-volume student queries
  • Enabled streaming chat so learners see responses generated in real-time
  • Set display rules with priority ordering so course-specific bots appear before general ones

Budget-Controlled Department AI

An organization with multiple departments needed each department to have its own AI chatbot budget, preventing any single department from consuming disproportionate resources.

  • Created wallet with organization-wide token balance funded through allocation orders
  • Set up EQUAL_SPLIT grants distributing monthly token budgets equally across 5 department heads
  • Each department head creates chatbots within their allocation - wallet charges tracked per user
  • Finance monitors spending through usage logs showing cost breakdown by chatbot, model, and user
  • Minimum balance floor prevents overspending - chat blocked when wallet drops below 50 VND threshold

Building an AI Chatbot in Four Steps

1. Create Chatbot and Select Model

Use the Chatbot Create Wizard to set up a new chatbot. Choose from 20+ LLM models across three providers: OpenAI (GPT-4o, GPT-5, o1, o3), Anthropic (Claude Opus 4, Sonnet 4, Haiku), and DigitalOcean Gradient (Llama 3.3, DeepSeek R1, Mistral Nemo). Configure model parameters: maxTokens (128-16,384), temperature (0-1), topP (0-1), and topK (0-10). Set the chatbot type (system or user) and scope (public, enrolled, or course), and attach it to specific courses if needed.
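The wizard exposes the parameter ranges above; a minimal sketch of clamping user input to those ranges is shown below. The clamp-to-range policy and function names are assumptions for illustration, not the wizard's documented behavior.

  // Clamps wizard inputs to the documented parameter ranges.
  // The clamp-to-range policy itself is an assumption for illustration.
  const clamp = (value: number, min: number, max: number): number =>
    Math.min(max, Math.max(min, value));

  function normalizeParams(input: { maxTokens: number; temperature: number; topP: number; topK: number }) {
    return {
      maxTokens: clamp(Math.round(input.maxTokens), 128, 16_384),
      temperature: clamp(input.temperature, 0, 1),
      topP: clamp(input.topP, 0, 1),
      topK: clamp(Math.round(input.topK), 0, 10),
    };
  }

  console.log(normalizeParams({ maxTokens: 50_000, temperature: 1.4, topP: 0.95, topK: 12 }));
  // { maxTokens: 16384, temperature: 1, topP: 0.95, topK: 10 }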

2. Build and Attach Knowledge Base

Create knowledge bases and upload documents from DigitalOcean Spaces. The system indexes documents using embedding models (GTE Large EN v1.5, All MiniLM L6 v2, or Multi QA MPNet Base) through async indexing jobs. Track indexing status and manage data sources. Attach one or more knowledge bases to your chatbot via attachedKbIds - the chatbot will use these documents as context for generating grounded responses.
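Because indexing runs asynchronously with pending, processing, completed, and failed states, client code typically polls for completion. A minimal sketch, assuming a hypothetical job-fetching callback and response shape:

  // Hypothetical polling helper - the job-fetching callback and shapes are illustrative.
  type IndexingStatus = "pending" | "processing" | "completed" | "failed";

  interface IndexingJob {
    kbId: string;
    status: IndexingStatus;
  }

  async function waitForIndexing(
    kbId: string,
    fetchJob: (kbId: string) => Promise<IndexingJob>, // e.g. a wrapper around the admin API
    intervalMs = 5_000,
  ): Promise<IndexingJob> {
    for (;;) {
      const job = await fetchJob(kbId);
      if (job.status === "completed" || job.status === "failed") return job;
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }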

3. Configure Widget and Deploy

Customize the chatbot widget appearance: theme color, position on page, launcher shape, and welcome message. Set display rules that control when and where the chatbot appears - configure visibility conditions and priority ordering. The chatbot is provisioned as a Gradient agent endpoint and deployed as an embeddable widget within the Noova platform.
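A widget and display-rule configuration of this kind could be represented roughly as below; the property names are illustrative assumptions, not the platform's exact schema.

  // Illustrative widget/display-rule shapes - property names are assumptions.
  interface WidgetConfig {
    themeColor: string;        // e.g. the company brand color
    position: "bottom-right" | "bottom-left";
    launcherShape: "circle" | "rounded";
    welcomeMessage: string;
  }

  interface DisplayRule {
    chatbotId: string;
    priority: number;          // lower number wins when several bots match
    visibleTo: "public" | "enrolled" | "course";
    courseIds?: string[];
  }

  const courseBotRule: DisplayRule = {
    chatbotId: "bot-course-101",
    priority: 1,               // course-specific bot appears before general ones
    visibleTo: "course",
    courseIds: ["course-101"],
  };

  const widget: WidgetConfig = {
    themeColor: "#0a5cff",
    position: "bottom-right",
    launcherShape: "circle",
    welcomeMessage: "Hi! Ask me about this course's materials.",
  };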

4. Manage Billing and Monitor Usage

Set up token wallets with VND balance. Configure grant quotas to distribute budgets across teams using Equal Split, Shared Pool, or Shared Pool Per Cap modes. Monitor usage through logs showing input tokens, output tokens, and costs per chatbot and per user. Track session history through the Chat Session Viewer. Adjust model selection, parameters, and knowledge bases based on usage patterns and cost optimization.
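Usage logs carry enough detail to roll costs up by chatbot or by user. A small aggregation sketch over an assumed log shape:

  // Aggregates costs from usage-log entries. The UsageLog shape is an assumption.
  interface UsageLog {
    chatbotId: string;
    userId: string;
    inputTokens: number;
    outputTokens: number;
    costVnd: number;
  }

  function costByKey(logs: UsageLog[], key: "chatbotId" | "userId"): Record<string, number> {
    return logs.reduce<Record<string, number>>((totals, log) => {
      totals[log[key]] = (totals[log[key]] ?? 0) + log.costVnd;
      return totals;
    }, {});
  }

  const logs: UsageLog[] = [
    { chatbotId: "hr-bot", userId: "u1", inputTokens: 1200, outputTokens: 800, costVnd: 108 },
    { chatbotId: "hr-bot", userId: "u2", inputTokens: 600, outputTokens: 400, costVnd: 54 },
  ];
  console.log(costByKey(logs, "chatbotId")); // { "hr-bot": 162 }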

Complete Feature Breakdown

20+ LLM Models from Three Providers

Select the optimal model for each chatbot based on capability, speed, and cost trade-offs.

  • OpenAI - GPT-4o, GPT-4o mini, GPT-5, GPT-5 Nano, GPT-5 Mini, o1, o3, o3-mini
  • Anthropic - Claude Opus 4, Claude Sonnet 4, Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus
  • DigitalOcean Gradient - Llama 3.3 70B, Llama 3.1 8B, DeepSeek R1 Distill Llama 70B, Mistral Nemo Instruct, GPT-oss 120B/20B
  • Model parameters: maxTokens (128-16,384), temperature (0-1), topP (0-1), topK (0-10)
  • Per-model pricing with separate input/output token rates

Knowledge Base with Document Embedding

Create knowledge bases that ground chatbot responses in your organization's actual documents.

  • KB Types - public, site, course, documents - control scope and visibility
  • Document upload from DigitalOcean Spaces - add files as data sources
  • 3 embedding models - GTE Large EN v1.5, All MiniLM L6 v2, Multi QA MPNet Base Dot v1
  • Async indexing jobs - background processing with status tracking (pending, processing, completed, failed)
  • Pending ingest queue - batch document processing pipeline
  • Attach multiple KBs to a single chatbot via attachedKbIds for combined context

Streaming & Non-Streaming Chat

Two chat modes to match different interaction patterns.

  • Streaming - SSE-based real-time token generation. Messages update incrementally as the model generates
  • Non-streaming - Standard request/response. Full answer returned at once
  • Streaming flags - streaming (in progress), streamDone (completed), streamError (failed) on message documents
  • Session management - conversations tracked in chat sessions with full message history
  • Chat Session Viewer - admin interface to review past conversations
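To make the streaming flow concrete, here is a minimal browser-side sketch of consuming an SSE chat stream and tracking the message flags. The endpoint path, event names, and handler wiring are assumptions, not the platform's actual client API.

  // Minimal browser-side sketch of consuming an SSE chat stream.
  // The endpoint path, event names, and flag handling are assumptions for illustration.
  interface MessageState {
    text: string;
    streaming: boolean;
    streamDone: boolean;
    streamError: boolean;
  }

  function streamChat(sessionId: string, onUpdate: (m: MessageState) => void): void {
    const state: MessageState = { text: "", streaming: true, streamDone: false, streamError: false };
    const source = new EventSource(`/api/chat/${sessionId}/stream`); // hypothetical endpoint

    source.onmessage = (event) => {
      state.text += event.data;        // tokens arrive incrementally
      onUpdate({ ...state });
    };
    source.addEventListener("done", () => {
      state.streaming = false;
      state.streamDone = true;
      onUpdate({ ...state });
      source.close();
    });
    source.onerror = () => {
      state.streaming = false;
      state.streamError = true;
      onUpdate({ ...state });
      source.close();
    };
  }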

Token Billing & Wallet System

Metered AI usage with VND-denominated billing, wallet management, and cost tracking.

  • Per-1K token pricing - input and output priced separately per model
  • VND wallets - organization-wide token balance with debit/credit transactions
  • Usage logs - track costs per chatbot, per user, and per model with timestamps
  • Minimum balance floor - configurable minimum (default 50 VND) blocks chat when wallet is low
  • Allocation orders - purchase token credits to fund wallets
  • Pre-authorization - safety multiplier estimates max cost before streaming begins
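Pre-authorization holds an upper-bound estimate before streaming starts and settles to the actual charge afterwards. A sketch of the idea, with an assumed multiplier value and rates:

  // Sketch of pre-authorization: hold an estimated maximum cost before streaming.
  // The multiplier value, rates, and token estimates are illustrative assumptions.
  const SAFETY_MULTIPLIER = 1.5;

  function preAuthorizeVnd(
    estimatedInputTokens: number,
    maxTokens: number,                 // worst-case output length from the chatbot's params
    rates: { inputPer1k: number; outputPer1k: number },
  ): number {
    const worstCase =
      (estimatedInputTokens / 1000) * rates.inputPer1k +
      (maxTokens / 1000) * rates.outputPer1k;
    return Math.ceil(worstCase * SAFETY_MULTIPLIER);
  }

  // The hold is released and replaced by the actual charge once usage is known.
  console.log(preAuthorizeVnd(1500, 2048, { inputPer1k: 30, outputPer1k: 90 }));
  // ceil((45 + 184.32) * 1.5) = 344 VND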

Grant & Quota Management

Distribute AI budgets across teams with flexible allocation models.

  • EQUAL_SPLIT - distribute budget equally across all members
  • SHARED_POOL - centralized quota consumed by all members
  • SHARED_POOL_PER_CAP - shared pool with per-member spending caps
  • Member modes - SNAPSHOT (fixed member list at creation) or DYNAMIC (members added over time)
  • Per-member tracking - individual usage within grants
  • Grant lifecycle - create, activate, monitor, and close grants
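To make the three allocation modes concrete, the sketch below derives a member's spending limit from a grant. The shapes and the derivation are illustrative assumptions, not the platform's exact logic.

  // Illustrative sketch of deriving a member's spending limit from a grant.
  type GrantMode = "EQUAL_SPLIT" | "SHARED_POOL" | "SHARED_POOL_PER_CAP";

  interface Grant {
    mode: GrantMode;
    totalBudgetVnd: number;
    memberIds: string[];
    perMemberCapVnd?: number; // only used by SHARED_POOL_PER_CAP
  }

  function memberLimitVnd(grant: Grant): number {
    switch (grant.mode) {
      case "EQUAL_SPLIT":
        // Budget divided equally across all members.
        return Math.floor(grant.totalBudgetVnd / grant.memberIds.length);
      case "SHARED_POOL":
        // Any member may draw from the whole pool.
        return grant.totalBudgetVnd;
      case "SHARED_POOL_PER_CAP":
        // Shared pool, but no member may exceed the per-member cap.
        return Math.min(grant.totalBudgetVnd, grant.perMemberCapVnd ?? grant.totalBudgetVnd);
    }
  }

  const deptGrant: Grant = {
    mode: "EQUAL_SPLIT",
    totalBudgetVnd: 5_000_000,
    memberIds: ["head-1", "head-2", "head-3", "head-4", "head-5"],
  };
  console.log(memberLimitVnd(deptGrant)); // 1,000,000 VND per department head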

Chatbot Configuration & Display

Control chatbot identity, appearance, and visibility rules.

  • Chatbot types - System (platform-managed) and User (admin-created)
  • Scope - public (all users), enrolled (enrolled users), course (specific course context)
  • Course attachment - link chatbots to specific courses via courseIds array
  • Widget appearance - theme color, position, launcher shape, welcome message
  • Display rules - visibility conditions and priority ordering for multiple chatbots
  • 20 permissions - Chatbot (View/Create/Edit/Delete/Use/Train/Analytics) and KB (View/Create/Edit/Delete/Upload/Analytics) with All/Shared/Own granularity
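A role-based check against permissions like these could be sketched as follows; the permission identifiers (a partial list) and helper names are hypothetical, following the All/Shared/Own naming above.

  // Hypothetical permission identifiers (partial list) and guard - names are illustrative.
  type Permission =
    | "chatbot.view.all" | "chatbot.view.own" | "chatbot.create" | "chatbot.edit.own"
    | "chatbot.use" | "chatbot.train" | "chatbot.analytics.view"
    | "kb.view.own" | "kb.create" | "kb.upload" | "kb.analytics.view";

  interface Role {
    name: string;
    permissions: Permission[];
  }

  const can = (role: Role, permission: Permission): boolean =>
    role.permissions.includes(permission);

  const trainingManager: Role = {
    name: "Training Manager",
    permissions: ["chatbot.view.own", "chatbot.create", "chatbot.edit.own", "chatbot.use", "kb.view.own", "kb.upload"],
  };

  console.log(can(trainingManager, "chatbot.train")); // false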

Frequently Asked Questions

Common questions about AI Assistant

Which LLM models are supported?

AI Assistant supports 20+ models from three providers: OpenAI (GPT-4o, GPT-4o mini, GPT-5, GPT-5 Nano/Mini, o1, o3, o3-mini), Anthropic (Claude Opus 4, Claude Sonnet 4, Claude 3.7 Sonnet, Claude 3.5 Sonnet/Haiku, Claude 3 Opus), and DigitalOcean Gradient (Llama 3.3 70B, Llama 3.1 8B, DeepSeek R1 Distill Llama 70B, Mistral Nemo Instruct, GPT-oss 120B/20B). Each model has different pricing, speed, and capability trade-offs.

How do knowledge bases work?

Knowledge bases store your organization's documents for contextual AI responses. Upload documents from DigitalOcean Spaces as data sources. The system indexes them using embedding models (GTE Large EN v1.5, All MiniLM L6 v2, or Multi QA MPNet Base) through async indexing jobs. Once indexed, attach the knowledge base to chatbots via attachedKbIds. When users ask questions, the chatbot retrieves relevant document passages and uses them as context for generating grounded answers.

How does streaming chat work?

Streaming chat uses SSE (Server-Sent Events) to deliver model output in real-time. When a user sends a message, the server makes a streaming request to the LLM provider. As tokens are generated, they are incrementally appended to the message document. The client sees the response appearing word-by-word. Message flags track state: 'streaming' (in progress), 'streamDone' (complete), or 'streamError' (failed). Non-streaming mode is also available where the full response is returned at once.

How is AI usage billed?

AI usage is metered by tokens. Each model has separate per-1K token pricing for input (your message + context) and output (model response), denominated in VND. When a message is sent, the system pre-authorizes an estimated cost from the wallet, executes the chat, then charges the actual token usage. Usage logs track all costs by chatbot, user, and model. A configurable minimum balance floor (default 50 VND) prevents chat when the wallet is too low.

How do grants and quotas work?

Grants distribute AI budgets to teams or individuals. Three modes: EQUAL_SPLIT (divide budget equally across all members), SHARED_POOL (centralized quota anyone can use), and SHARED_POOL_PER_CAP (shared pool with per-member spending caps). Member lists can be SNAPSHOT (fixed at grant creation) or DYNAMIC (members added over time). Grants track per-member usage and can be created, activated, monitored, and closed through the admin interface.

Can chatbots be limited to specific courses?

Yes. Each chatbot has a scope setting: 'public' (visible to all users), 'enrolled' (visible to enrolled users), or 'course' (visible only within specific courses). For course scope, attach the chatbot to specific courses via the courseIds array. You can also create course-specific knowledge bases containing only that course's materials, then attach them to the course-scoped chatbot for highly contextual assistance.

What model parameters can be configured?

Each chatbot has 4 configurable model parameters: maxTokens (128-16,384, controls maximum response length), temperature (0-1, controls randomness - lower is more deterministic), topP (0-1, nucleus sampling threshold), and topK (0-10, limits token selection to top K candidates). These parameters let you fine-tune response behavior - use low temperature for factual Q&A, higher for creative tasks.

What permissions control access?

20 granular permissions split between Chatbot and Knowledge Base. Chatbot: View All/Shared/Own (visibility), Create, Edit All/Own, Delete All/Own, Use (interact with chatbots), Train (configure model), Analytics View. Knowledge Base: View All/Shared/Own, Create, Edit All/Own, Delete All/Own, Upload (add documents), Analytics View. All permissions are assigned through the role-based permission system.

Ready to Deploy AI Chatbots?

Start building AI chatbots with 20+ LLM models, document-grounded knowledge bases, streaming chat, token billing, and granular access control.
