
Chat Widget

Embed a voice and text chat widget on any website with a single script tag. Same AI assistant, same natural conversation - right in your web app.

Quick Start

Add these two script tags to your HTML, just before the closing </body> tag:

index.html
<!-- Add to your HTML -->
<script>
  window.voiceRailConfig = {
    assistantId: 'your-assistant-id',
    organizationId: 'your-organization-id'
  };
</script>
<script src="https://voicerail.ai/widget/voicerail-widget.js" async></script>

That's it! A chat bubble appears in the bottom-right corner. Users can click to open the widget and start a voice or text conversation with your AI assistant.

Features

Voice Mode

Tap to speak or enable voice activity detection for hands-free conversations. Uses your device microphone with real-time streaming.

Text Mode

Type messages when voice isn't practical. Toggle between modes at any time. Full chat history preserved.

Same AI Pipeline

Uses the same ConversationEngine as phone calls. Full persistence, billing, and conversation history.

Lightweight

Single 53KB (gzipped) JavaScript file with no external dependencies. CSS is inlined for zero layout shift.

Configuration Options

Customize the widget appearance and behavior through the config object:

Full Configuration
<script>
  window.voiceRailConfig = {
    // Required
    assistantId: 'asst_xxxx',
    organizationId: 'org_xxxx',

    // Appearance
    position: 'bottom-right',    // 'bottom-right' | 'bottom-left' | 'top-right' | 'top-left'
    theme: 'auto',               // 'light' | 'dark' | 'auto'
    primaryColor: '#3b82f6',     // Any hex color
    bubbleSize: 60,              // Pixels
    bubbleIcon: 'chat',          // 'chat' | 'mic' | 'headphones'

    // Behavior
    autoOpen: false,             // Open widget on page load
    defaultMode: 'voice',        // 'text' | 'voice'
    voiceActivityDetection: true, // Auto-detect speech for hands-free

    // Branding
    assistantName: 'Alex',       // Display name in widget header
    assistantAvatar: 'https://...', // URL to avatar image

    // Callbacks
    onOpen: () => console.log('Widget opened'),
    onClose: () => console.log('Widget closed'),
    onMessage: (msg) => console.log('Message:', msg),
    onError: (err) => console.error('Error:', err),
    onConnectionChange: (connected) => console.log('Connected:', connected)
  };
</script>
<script src="https://voicerail.ai/widget/voicerail-widget.js" async></script>

Configuration Reference

Required

| Option | Type | Description |
| --- | --- | --- |
| assistantId | string | Your VoiceRail assistant ID |
| organizationId | string | Your VoiceRail organization ID |

Appearance

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| position | string | 'bottom-right' | Widget position on screen |
| theme | string | 'auto' | 'light', 'dark', or 'auto' (follows system) |
| primaryColor | string | '#3b82f6' | Accent color for buttons and highlights |
| bubbleSize | number | 60 | Size of the bubble button in pixels |
| bubbleIcon | string | 'chat' | 'chat', 'mic', or 'headphones' |
| assistantName | string | | Display name in widget header |
| assistantAvatar | string | | URL to avatar image |

Behavior

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| autoOpen | boolean | false | Open widget automatically on page load |
| defaultMode | string | 'voice' | 'text' or 'voice' |
| voiceActivityDetection | boolean | true | Auto-detect speech for hands-free mode |
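Since the widget won't work without the two required IDs, you may want to fail fast before injecting the script. A minimal sketch; the `validateVoiceRailConfig` helper is our own, not part of the widget:

```javascript
// Hypothetical helper: return the names of any missing required
// options so you can catch misconfiguration before the script loads.
function validateVoiceRailConfig(config) {
  return ['assistantId', 'organizationId'].filter(
    (key) => typeof config[key] !== 'string' || config[key].length === 0
  );
}

// An empty array means both required IDs are present:
// validateVoiceRailConfig(window.voiceRailConfig)
```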

JavaScript API

Control the widget programmatically after it loads:

Control API
// Programmatic control (available after widget loads)
window.VoiceRail.open();      // Open the widget panel
window.VoiceRail.close();     // Close the widget panel
window.VoiceRail.destroy();   // Remove widget from page
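Because the script loads with `async`, `window.VoiceRail` may not exist yet when your own code runs. One way to guard for that is to poll briefly before calling its methods; this is a sketch of our own (`whenVoiceRailReady` and its timeout are not part of the widget API):

```javascript
// Poll until the async widget script has attached window.VoiceRail,
// then hand it to the callback; give up quietly after timeoutMs.
function whenVoiceRailReady(callback, intervalMs = 100, timeoutMs = 5000) {
  const start = Date.now();
  const timer = setInterval(() => {
    if (typeof window !== 'undefined' && window.VoiceRail) {
      clearInterval(timer);
      callback(window.VoiceRail);
    } else if (Date.now() - start > timeoutMs) {
      clearInterval(timer); // the script never loaded; stop polling
    }
  }, intervalMs);
}

// Example: open the panel as soon as the widget is available
whenVoiceRailReady((vr) => vr.open());
```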

Callbacks

React to widget events with callback functions:

Message Callback
// Message callback receives transcript messages
window.voiceRailConfig = {
  // ...
  onMessage: (message) => {
    console.log({
      id: message.id,           // Unique message ID
      role: message.role,       // 'user' | 'assistant'
      text: message.text,       // Message content
      timestamp: message.timestamp,  // Date object
      wasInterrupted: message.wasInterrupted  // True if assistant was interrupted
    });

    // Example: Send to your analytics
    analytics.track('voicerail_message', {
      role: message.role,
      textLength: message.text.length
    });
  }
};
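The other callbacks follow the same shape. A minimal sketch wiring up error and connection handling; the logging shown is our own example behavior, not something the widget does for you:

```javascript
window.voiceRailConfig = {
  assistantId: 'your-assistant-id',
  organizationId: 'your-organization-id',
  onError: (err) => {
    // e.g. report to your error tracker or show a fallback contact link
    console.error('VoiceRail error:', err);
  },
  onConnectionChange: (connected) => {
    console.log(connected ? 'VoiceRail connected' : 'VoiceRail disconnected');
  }
};
```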

React / Next.js Integration

For React apps, load the widget in a useEffect hook and clean up on unmount:

App.tsx
// React integration
import { useEffect } from 'react';

function App() {
  useEffect(() => {
    // Set config before script loads
    window.voiceRailConfig = {
      assistantId: process.env.NEXT_PUBLIC_ASSISTANT_ID,
      organizationId: process.env.NEXT_PUBLIC_ORG_ID,
      theme: 'auto',
      defaultMode: 'voice'
    };

    // Load widget script
    const script = document.createElement('script');
    script.src = 'https://voicerail.ai/widget/voicerail-widget.js';
    script.async = true;
    document.body.appendChild(script);

    return () => {
      window.VoiceRail?.destroy();
      script.remove();
    };
  }, []);

  return <div>Your app content</div>;
}

Browser Support

The widget works in all modern browsers that support WebSocket and the Web Audio API:

  • Chrome 80+
  • Firefox 76+
  • Safari 14.1+
  • Edge 80+

Voice mode requires microphone permission. Text mode works without any permissions.
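If you need to support older browsers, you can feature-detect before injecting the widget script. A hedged sketch, with `supportsVoiceRail` as our own helper; it checks for both APIs, so a text-only deployment could reasonably check WebSocket alone:

```javascript
// Check for the APIs the widget relies on: WebSocket for transport,
// Web Audio (AudioContext) for voice mode.
function supportsVoiceRail(global) {
  return typeof global.WebSocket === 'function' &&
    (typeof global.AudioContext === 'function' ||
     typeof global.webkitAudioContext === 'function');
}

// In the browser:
// if (supportsVoiceRail(window)) { /* inject the widget script tag */ }
```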

Security Considerations

  • The widget connects via secure WebSocket (WSS) to voice.voicerail.ai
  • Audio is streamed in real-time and not stored locally in the browser
  • Conversation transcripts are stored in your VoiceRail account
  • CORS is configured to allow embedding from any origin
  • Usage is billed to the organization ID in your config

Next Steps