SecureSpeakAI Deepfake Detection API
Professional AI voice detection with 99.7% accuracy.
Analyze audio files and URLs to detect AI-generated speech and deepfakes. Built for developers who need reliable voice authenticity verification.
Getting Started
Get up and running with SecureSpeakAI in under 5 minutes. Here's everything you need to make your first API call.
Quick Start Guide
Follow these simple steps to start detecting AI-generated speech:
1. Get Your API Key
Sign up for an account and generate your API key in the dashboard.
2. Add Funds
Purchase credits through our secure Stripe payment system to start using the API.
3. Make Your First Request
Send an audio file to our analysis endpoint:
curl -X POST https://securespeak-api-1064980124131.us-central1.run.app/analyze_file \
-H "Authorization: Bearer YOUR_API_KEY" \
-F "file=@audio-sample.wav"
import requests
# Configure your API key
API_KEY = "your_api_key_here"
headers = {"Authorization": f"Bearer {API_KEY}"}
# Analyze an audio file
with open("audio-sample.wav", "rb") as audio_file:
    response = requests.post(
        "https://securespeak-api-1064980124131.us-central1.run.app/analyze_file",
        headers=headers,
        files={"file": audio_file}
    )

result = response.json()
print(f"Authenticity Score: {result['authenticity_score']}")
print(f"Is Deepfake: {result['is_deepfake']}")
print(f"Confidence: {result['confidence']}")
const fs = require('fs');
const FormData = require('form-data');
const axios = require('axios');

const apiKey = 'your_api_key_here';
const audioPath = 'audio-sample.wav';

async function analyzeAudio() {
  try {
    const form = new FormData();
    form.append('file', fs.createReadStream(audioPath));

    const response = await axios.post(
      'https://securespeak-api-1064980124131.us-central1.run.app/analyze_file',
      form,
      {
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          ...form.getHeaders()
        }
      }
    );

    console.log('Authenticity Score:', response.data.authenticity_score);
    console.log('Is Deepfake:', response.data.is_deepfake);
    console.log('Confidence:', response.data.confidence);
  } catch (error) {
    console.error('Error:', error.response?.data || error.message);
  }
}

analyzeAudio();
import java.io.*;
import java.net.URI;
import java.net.http.*;
import java.nio.file.*;
import java.util.List;

public class SecureSpeakClient {
    private static final String API_URL = "https://securespeak-api-1064980124131.us-central1.run.app";
    private final String apiKey;
    private final HttpClient httpClient;

    public SecureSpeakClient(String apiKey) {
        this.apiKey = apiKey;
        this.httpClient = HttpClient.newHttpClient();
    }

    public String analyzeFile(String filePath) throws Exception {
        // Read the audio file
        byte[] audioData = Files.readAllBytes(Paths.get(filePath));
        String fileName = Paths.get(filePath).getFileName().toString();
        String boundary = "----SecureSpeakBoundary" + System.currentTimeMillis();

        // Build the multipart body as raw byte arrays so the audio
        // bytes are sent untouched (converting them to a String would
        // corrupt the binary data)
        byte[] header = ("--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: audio/wav\r\n\r\n").getBytes();
        byte[] footer = ("\r\n--" + boundary + "--\r\n").getBytes();

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(API_URL + "/analyze_file"))
            .header("Authorization", "Bearer " + apiKey)
            .header("Content-Type", "multipart/form-data; boundary=" + boundary)
            .POST(HttpRequest.BodyPublishers.ofByteArrays(List.of(header, audioData, footer)))
            .build();

        HttpResponse<String> response = httpClient.send(request,
            HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        SecureSpeakClient client = new SecureSpeakClient("your_api_key_here");
        String result = client.analyzeFile("audio-sample.wav");
        System.out.println("Analysis Result: " + result);
    }
}
4. Understand the Response
The API returns a detailed analysis of the audio:
{
  "request_id": "req_a1b2c3d4e5f6",
  "authenticity_score": 0.972,
  "is_deepfake": false,
  "confidence": "high",
  "confidence_score": 97.2,
  "classification": {
    "label": "Authentic",
    "raw_prediction": "Human",
    "score_explanation": "Model prediction: 97.2% Human"
  },
  "analysis_time_ms": 145,
  "audio_metadata": {
    "duration_sec": 4.2,
    "sample_rate": 44100,
    "channels": 1,
    "format": "wav",
    "file_size_bytes": 352800
  },
  "detected_technologies": [],
  "risk_factors": [],
  "source_info": {
    "endpoint": "/analyze_file",
    "filename": "audio-sample.wav",
    "source_type": "uploaded_file"
  },
  "timestamps": {
    "received_at": "2024-01-15T10:30:45Z",
    "analyzed_at": "2024-01-15T10:30:45Z"
  },
  "api_version": "1.2.0"
}
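The response fields above can be folded into a simple client-side decision. A minimal sketch, assuming the field names from the sample response; the `summarize` helper and the 0.9 threshold are illustrative, not part of the API:

```python
# Sketch: turning an analysis response into a go/no-go verdict.
# Field names (is_deepfake, authenticity_score, confidence) come from
# the sample response above; tune min_score to your risk tolerance.
def summarize(result, min_score=0.9):
    """Return a short verdict string for an analysis response dict."""
    if result["is_deepfake"]:
        return f"REJECT: deepfake detected ({result['confidence']} confidence)"
    if result["authenticity_score"] < min_score:
        return "REVIEW: authentic but low authenticity score"
    return "ACCEPT: authentic"
```

For the sample response above (authenticity score 0.972, not a deepfake), this returns the accept verdict.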
Authentication
The SecureSpeakAI API uses API keys to authenticate requests. You can view and manage your API keys in the Dashboard.
Your API keys have many privileges, so be sure to keep them secure. Do not share your API keys in publicly accessible areas such as GitHub, client-side code, etc.
Authentication Method
All API requests must include your API key in the request headers using Bearer authentication:
Authorization: Bearer YOUR_API_KEY
Getting Your API Key
- Create an account on SecureSpeakAI
- Navigate to the Dashboard
- Go to the API Keys section
- Generate a new API key
- Copy and securely store your key
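One way to keep keys out of source code, as advised above, is to read them from the environment at startup. A minimal sketch; `SECURESPEAK_API_KEY` is an assumed variable name, not an official one:

```python
# Sketch: building the Authorization header from an environment variable
# instead of hard-coding the key. The variable name is an assumption.
import os

def auth_headers(env_var="SECURESPEAK_API_KEY"):
    api_key = os.environ.get(env_var)
    if not api_key:
        raise RuntimeError(f"Set {env_var} before calling the API")
    return {"Authorization": f"Bearer {api_key}"}
```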
Analyze File
Upload and analyze an audio file to detect if it contains AI-generated or deepfaked speech.
Request
Upload an audio file using multipart/form-data. The file should be included in the "file" field.
Supported Audio Formats
- WAV (recommended)
- MP3
- FLAC
- M4A/AAC
- OGG Vorbis
- AIFF
- WMA (Windows Media Audio)
- Opus
Audio processing is powered by librosa and FFmpeg, supporting most common audio formats. Files are automatically converted to WAV format for analysis.
Example Request
curl -X POST https://securespeak-api-1064980124131.us-central1.run.app/analyze_file \
-H "Authorization: Bearer YOUR_API_KEY" \
-F "file=@audio-sample.wav"
import requests
API_KEY = "your_api_key_here"
headers = {"Authorization": f"Bearer {API_KEY}"}
with open("audio-sample.wav", "rb") as f:
    response = requests.post(
        "https://securespeak-api-1064980124131.us-central1.run.app/analyze_file",
        headers=headers,
        files={"file": f}
    )
result = response.json()
print(f"Is Deepfake: {result['is_deepfake']}")
print(f"Confidence: {result['confidence']}")
const FormData = require('form-data');
const fs = require('fs');
const axios = require('axios');

// Run inside an async function
const apiKey = 'your_api_key_here';

const form = new FormData();
form.append('file', fs.createReadStream('audio-sample.wav'));

const response = await axios.post(
  'https://securespeak-api-1064980124131.us-central1.run.app/analyze_file',
  form,
  {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      ...form.getHeaders()
    }
  }
);
console.log(response.data);
import java.io.*;
import java.net.URI;
import java.net.http.*;
import java.nio.file.*;
import java.util.List;

public class SecureSpeakAnalyzer {
    private static final String API_URL = "https://securespeak-api-1064980124131.us-central1.run.app";
    private final String apiKey;
    private final HttpClient httpClient;

    public SecureSpeakAnalyzer(String apiKey) {
        this.apiKey = apiKey;
        this.httpClient = HttpClient.newHttpClient();
    }

    public String analyzeFile(String filePath) throws Exception {
        byte[] audioData = Files.readAllBytes(Paths.get(filePath));
        String fileName = Paths.get(filePath).getFileName().toString();
        String boundary = "----SecureSpeakBoundary" + System.currentTimeMillis();

        // Build the multipart body as raw byte arrays so the audio bytes
        // are not corrupted by a String round-trip
        byte[] header = ("--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: audio/wav\r\n\r\n").getBytes();
        byte[] footer = ("\r\n--" + boundary + "--\r\n").getBytes();

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(API_URL + "/analyze_file"))
            .header("Authorization", "Bearer " + apiKey)
            .header("Content-Type", "multipart/form-data; boundary=" + boundary)
            .POST(HttpRequest.BodyPublishers.ofByteArrays(List.of(header, audioData, footer)))
            .build();

        HttpResponse<String> response = httpClient.send(request,
            HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}

// Usage
SecureSpeakAnalyzer analyzer = new SecureSpeakAnalyzer("your_api_key_here");
String result = analyzer.analyzeFile("audio-sample.wav");
System.out.println("Result: " + result);
Response
Returns a JSON object with the analysis results.
Response Fields
Field | Type | Description |
---|---|---|
request_id | string | Unique identifier for the request |
authenticity_score | float | Score between 0.0 (fake) and 1.0 (authentic) |
is_deepfake | boolean | True if the audio is detected as fake |
confidence | string | Confidence level: "low", "medium", or "high" |
confidence_score | float | Numerical confidence score (0-100) |
classification | object | Detailed classification information |
analysis_time_ms | integer | Time taken to analyze the audio in milliseconds |
audio_metadata | object | Technical details about the audio file |
source_info | object | Information about the source and endpoint used |
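For type-checked access to these fields, a small container can validate the top-level scalars at parse time. A sketch; `AnalysisResponse` is an illustrative name, and only the scalar fields from the table above are modeled:

```python
# Sketch: a typed view of the scalar response fields from the table
# above, useful for catching schema drift early. Nested objects
# (classification, audio_metadata, source_info) are not modeled here.
from dataclasses import dataclass

@dataclass
class AnalysisResponse:
    request_id: str
    authenticity_score: float
    is_deepfake: bool
    confidence: str
    confidence_score: float
    analysis_time_ms: int

    @classmethod
    def from_json(cls, data):
        # Pick only the modeled fields; ignore anything extra
        return cls(**{f: data[f] for f in cls.__dataclass_fields__})
```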
Analyze URL
Analyze audio directly from URLs including YouTube, SoundCloud, and direct audio links.
Request
Send a JSON payload with the URL to analyze. The system will download and process the audio automatically.
Example Request
curl -X POST https://securespeak-api-1064980124131.us-central1.run.app/analyze_url \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com/audio.mp3"}'
import requests
API_KEY = "your_api_key_here"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}
payload = {"url": "https://example.com/audio.mp3"}

response = requests.post(
    "https://securespeak-api-1064980124131.us-central1.run.app/analyze_url",
    headers=headers,
    json=payload
)
result = response.json()
print(f"Is Deepfake: {result['is_deepfake']}")
print(f"Confidence: {result['confidence']}")
const axios = require('axios');

// Run inside an async function
const apiKey = 'your_api_key_here';

const response = await axios.post(
  'https://securespeak-api-1064980124131.us-central1.run.app/analyze_url',
  { url: 'https://example.com/audio.mp3' },
  {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    }
  }
);
console.log(response.data);
import java.net.http.*;
import java.net.URI;

public class SecureSpeakURLAnalyzer {
    private static final String API_URL = "https://securespeak-api-1064980124131.us-central1.run.app";
    private final String apiKey;
    private final HttpClient httpClient;

    public SecureSpeakURLAnalyzer(String apiKey) {
        this.apiKey = apiKey;
        this.httpClient = HttpClient.newHttpClient();
    }

    public String analyzeURL(String audioUrl) throws Exception {
        String jsonPayload = String.format("{\"url\": \"%s\"}", audioUrl);

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(API_URL + "/analyze_url"))
            .header("Authorization", "Bearer " + apiKey)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
            .build();

        HttpResponse<String> response = httpClient.send(request,
            HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}

// Usage
SecureSpeakURLAnalyzer analyzer = new SecureSpeakURLAnalyzer("your_api_key_here");
String result = analyzer.analyzeURL("https://example.com/audio.mp3");
System.out.println("Result: " + result);
Supported URL Types
- X (Twitter): audio and video posts
- Instagram: video posts and stories
- TikTok: video posts (audio extracted)
- SoundCloud: track URLs
- Direct audio links: MP3, WAV, OGG, FLAC
- Other platforms: many other social media and audio platforms
Analyze Live Audio
Upload audio files for real-time analysis with per-second billing. Perfect for live streaming, real-time verification, and short audio chunks.
Request
Upload an audio file using multipart/form-data. Billing is calculated based on the duration of the audio file at $0.032 per second.
Unlike other endpoints, live analysis charges based on the actual duration of your audio. A 10-second audio file costs $0.32, while a 1-second file costs $0.032.
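The per-second billing above is easy to estimate up front. A minimal sketch using the quoted $0.032/second rate; `live_cost` is an illustrative helper, not part of the API:

```python
# Sketch: estimating the charge for a live analysis call at the
# $0.032/second rate quoted above.
def live_cost(duration_sec, rate=0.032):
    """Estimated cost in dollars for a clip of the given duration."""
    return round(duration_sec * rate, 3)
```

This reproduces the document's own examples: a 10-second file costs $0.32 and a 1-second file costs $0.032.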
Example Request
curl -X POST https://securespeak-api-1064980124131.us-central1.run.app/analyze_live \
-H "Authorization: Bearer YOUR_API_KEY" \
-F "file=@audio-sample.wav"
import requests
API_KEY = "your_api_key_here"
headers = {"Authorization": f"Bearer {API_KEY}"}
with open("audio-sample.wav", "rb") as audio_file:
    response = requests.post(
        "https://securespeak-api-1064980124131.us-central1.run.app/analyze_live",
        headers=headers,
        files={"file": audio_file}
    )
result = response.json()
print(f"Is Deepfake: {result['is_deepfake']}")
print(f"Confidence: {result['confidence']}")
print(f"Duration: {result['audio_metadata']['duration_sec']} seconds")
print(f"Cost: ${result['audio_metadata']['duration_sec'] * 0.032:.3f}")
const fs = require('fs');
const FormData = require('form-data');
const axios = require('axios');

// Run inside an async function
const apiKey = 'your_api_key_here';

const form = new FormData();
form.append('file', fs.createReadStream('audio-sample.wav'));

const response = await axios.post(
  'https://securespeak-api-1064980124131.us-central1.run.app/analyze_live',
  form,
  {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      ...form.getHeaders()
    }
  }
);

console.log('Result:', response.data);
console.log(`Duration: ${response.data.audio_metadata.duration_sec} seconds`);
console.log(`Cost: $${(response.data.audio_metadata.duration_sec * 0.032).toFixed(3)}`);
import java.io.*;
import java.net.URI;
import java.net.http.*;
import java.nio.file.*;

public class SecureSpeakLiveAnalyzer {
    private static final String API_URL = "https://securespeak-api-1064980124131.us-central1.run.app";
    private final String apiKey;
    private final HttpClient httpClient;

    public SecureSpeakLiveAnalyzer(String apiKey) {
        this.apiKey = apiKey;
        this.httpClient = HttpClient.newHttpClient();
    }

    public String analyzeLiveAudio(byte[] audioData) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(API_URL + "/analyze_live"))
            .header("Authorization", "Bearer " + apiKey)
            .header("Content-Type", "audio/wav")
            .POST(HttpRequest.BodyPublishers.ofByteArray(audioData))
            .build();

        HttpResponse<String> response = httpClient.send(request,
            HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public String analyzeLiveAudioFromFile(String filePath) throws Exception {
        byte[] audioData = Files.readAllBytes(Paths.get(filePath));
        return analyzeLiveAudio(audioData);
    }
}

// Usage
SecureSpeakLiveAnalyzer analyzer = new SecureSpeakLiveAnalyzer("your_api_key_here");
String result = analyzer.analyzeLiveAudioFromFile("audio-sample.wav");
System.out.println("Result: " + result);
WebSocket Streaming
Real-time audio analysis using WebSocket connections for continuous streaming and live audio processing.
WebSocket Events
The WebSocket connection supports the following events for real-time audio streaming:
1. Connect
Establish WebSocket connection:
const socket = io('https://securespeak-api-1064980124131.us-central1.run.app');

socket.on('connect', () => {
  console.log('Connected to SecureSpeakAI WebSocket');
});
2. Authenticate
Authenticate your WebSocket connection with your API key:
// Send authentication
socket.emit('authenticate', {
  api_key: 'your_api_key_here'
});

// Handle authentication result
socket.on('auth_result', (data) => {
  if (data.status === 'success') {
    console.log('WebSocket authenticated successfully');
  } else {
    console.error('Authentication failed:', data.message);
  }
});
3. Send Audio Frames
Send audio data for real-time analysis. Each frame is billed based on its duration:
// Send audio frame (raw WAV bytes)
const audioBuffer = new ArrayBuffer(/* your audio data */);
socket.emit('audio_frame', audioBuffer);

// Receive prediction results
socket.on('prediction', (result) => {
  if (result.error) {
    console.error('Analysis error:', result.error);
  } else {
    console.log('Analysis result:', result);
    console.log(`Is Deepfake: ${result.is_deepfake}`);
    console.log(`Confidence: ${result.confidence_score}%`);
    console.log(`Duration: ${result.audio_metadata.duration_sec}s`);
  }
});
Each audio frame sent via WebSocket is billed individually based on its duration at $0.032 per second. This ensures you only pay for the audio you actually process.
Complete WebSocket Example
const io = require('socket.io-client');
const fs = require('fs');

const socket = io('https://securespeak-api-1064980124131.us-central1.run.app');

socket.on('connect', () => {
  console.log('Connected to SecureSpeakAI');
  // Authenticate with API key
  socket.emit('authenticate', {
    api_key: 'your_api_key_here'
  });
});

socket.on('auth_result', (data) => {
  if (data.status === 'success') {
    console.log('Authenticated successfully');
    // Start sending audio frames
    const audioData = fs.readFileSync('audio-chunk.wav');
    socket.emit('audio_frame', audioData);
  } else {
    console.error('Authentication failed:', data.message);
  }
});

socket.on('prediction', (result) => {
  if (result.error) {
    console.error('Analysis error:', result.error);
  } else {
    console.log(`Result: ${result.is_deepfake ? 'DEEPFAKE' : 'AUTHENTIC'}`);
    console.log(`Confidence: ${result.confidence_score}%`);
    console.log(`Cost: $${(result.audio_metadata.duration_sec * 0.032).toFixed(3)}`);
  }
});

socket.on('disconnect', () => {
  console.log('Disconnected from SecureSpeakAI');
});
Management Endpoints
Administrative endpoints for managing API keys, viewing usage statistics, and accessing billing information.
Get User Balance
Retrieve your current account balance and usage statistics. Requires Firebase authentication.
curl -X GET https://securespeak-api-1064980124131.us-central1.run.app/api/billing/balance \
-H "Authorization: Bearer YOUR_FIREBASE_ID_TOKEN" \
-H "X-Firebase-Auth: true"
Response
{
  "balance": 45.32,
  "usage": {
    "file_calls": 150,
    "url_calls": 75,
    "live_seconds": 1250.5
  },
  "pricing": {
    "analyze_file": 0.018,
    "analyze_url": 0.025,
    "analyze_live": 0.032
  },
  "pricing_notes": {
    "analyze_file": "Per file analysis",
    "analyze_url": "Per URL analysis",
    "analyze_live": "Per second of audio for live analysis"
  }
}
Get API Keys
List all API keys associated with your account.
curl -X GET https://securespeak-api-1064980124131.us-central1.run.app/api/keys \
-H "Authorization: Bearer YOUR_API_KEY"
Get API Key Statistics
Get detailed usage statistics for a specific API key, including daily breakdowns and endpoint usage.
curl -X GET https://securespeak-api-1064980124131.us-central1.run.app/api/key-stats/your_key_id \
-H "Authorization: Bearer YOUR_API_KEY"
Get Analysis History
Retrieve detailed analysis history for an API key with pagination support.
curl -X GET "https://securespeak-api-1064980124131.us-central1.run.app/api/analysis-history/your_key_id?limit=20&offset=0" \
-H "Authorization: Bearer YOUR_API_KEY"
Query Parameters
Parameter | Type | Description |
---|---|---|
limit | integer | Number of records to return (max 100, default 20) |
offset | integer | Number of records to skip (default 0) |
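The limit/offset parameters above support a standard pagination loop. A sketch in Python; `fetch_page` stands in for the authenticated HTTP call and is not part of the API:

```python
# Sketch: paging through analysis history with the limit/offset
# parameters from the table above. fetch_page is a stand-in for the
# authenticated HTTP request; the offset arithmetic is the point here.
def iter_history(fetch_page, limit=20):
    """Yield all records by advancing offset until a page comes back empty."""
    offset = 0
    while True:
        records = fetch_page(limit=limit, offset=offset)
        if not records:
            break
        yield from records
        offset += limit
```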
Live Calls & KYC Integration
SecureSpeakAI's live analysis endpoint is perfect for real-time voice verification in banking, fintech, and high-security applications where caller authenticity is critical.
Banking KYC
Verify customer identity during phone banking sessions and high-value transactions.
Fraud Prevention
Detect voice cloning and deepfake attacks in real-time during customer service calls.
Call Center Security
Protect against social engineering attacks with continuous voice authenticity monitoring.
Real-Time Voice Verification
Implement continuous voice verification during live calls using 7-second audio chunks for optimal accuracy and performance.
import pyaudio
import wave
import requests
import threading
from io import BytesIO

class LiveVoiceVerifier:
    def __init__(self, api_key):
        self.api_key = api_key
        self.api_url = "https://securespeak-api-1064980124131.us-central1.run.app"
        self.audio = pyaudio.PyAudio()
        self.is_recording = False

    def start_verification(self):
        self.is_recording = True
        thread = threading.Thread(target=self._continuous_analysis)
        thread.daemon = True
        thread.start()

    def _continuous_analysis(self):
        while self.is_recording:
            # Record 7-second chunk
            audio_data = self._record_chunk(duration=7)
            wav_data = self._to_wav(audio_data)

            # Analyze with API
            result = self._analyze_audio(wav_data)

            # Handle result
            if result['is_deepfake']:
                print(f"🚨 DEEPFAKE DETECTED - Confidence: {result['confidence_score']}%")
                # Trigger security alert
            else:
                print(f"✅ Voice Authentic - Confidence: {result['confidence_score']}%")

    def _record_chunk(self, duration):
        stream = self.audio.open(format=pyaudio.paInt16, channels=1,
                                 rate=44100, input=True, frames_per_buffer=1024)
        frames = []
        for _ in range(int(44100 / 1024 * duration)):
            frames.append(stream.read(1024))
        stream.stop_stream()
        stream.close()
        return b''.join(frames)

    def _to_wav(self, audio_data):
        buffer = BytesIO()
        with wave.open(buffer, 'wb') as wav:
            wav.setnchannels(1)
            wav.setsampwidth(self.audio.get_sample_size(pyaudio.paInt16))
            wav.setframerate(44100)
            wav.writeframes(audio_data)
        return buffer.getvalue()

    def _analyze_audio(self, wav_data):
        response = requests.post(
            f"{self.api_url}/analyze_live",
            headers={'Authorization': f'Bearer {self.api_key}', 'Content-Type': 'audio/wav'},
            data=wav_data
        )
        return response.json()

# Usage
verifier = LiveVoiceVerifier('your_api_key')
verifier.start_verification()
const mic = require('mic');
const axios = require('axios');

class LiveVoiceVerifier {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.apiUrl = 'https://securespeak-api-1064980124131.us-central1.run.app';
    this.isRecording = false;
  }

  startVerification() {
    this.isRecording = true;
    this.continuousAnalysis();
  }

  continuousAnalysis() {
    if (!this.isRecording) return;
    // Record 7-second chunk
    this.recordChunk(7000).then(audioBuffer => {
      // Analyze with API
      return this.analyzeAudio(audioBuffer);
    }).then(result => {
      // Handle result
      if (result.is_deepfake) {
        console.log(`🚨 DEEPFAKE DETECTED - Confidence: ${result.confidence_score}%`);
      } else {
        console.log(`✅ Voice Authentic - Confidence: ${result.confidence_score}%`);
      }
      // Continue analysis
      setTimeout(() => this.continuousAnalysis(), 1000);
    }).catch(console.error);
  }

  recordChunk(duration) {
    return new Promise((resolve) => {
      const micInstance = mic({ rate: '44100', channels: '1', debug: false });
      const micInputStream = micInstance.getAudioStream();
      const chunks = [];
      micInputStream.on('data', data => chunks.push(data));
      micInstance.start();
      setTimeout(() => {
        micInstance.stop();
        resolve(Buffer.concat(chunks));
      }, duration);
    });
  }

  async analyzeAudio(audioBuffer) {
    const response = await axios.post(
      `${this.apiUrl}/analyze_live`,
      audioBuffer,
      {
        headers: {
          'Authorization': `Bearer ${this.apiKey}`,
          'Content-Type': 'audio/wav'
        }
      }
    );
    return response.data;
  }
}

// Usage
const verifier = new LiveVoiceVerifier('your_api_key');
verifier.startVerification();
#!/bin/bash
# Record and analyze in real-time

API_KEY="your_api_key"
API_URL="https://securespeak-api-1064980124131.us-central1.run.app/analyze_live"

while true; do
  # Record 7-second audio chunk (avfoundation is macOS; use -f alsa -i default on Linux)
  ffmpeg -f avfoundation -i ":0" -t 7 -y temp_audio.wav

  # Send to API for analysis
  response=$(curl -s -X POST "$API_URL" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: audio/wav" \
    --data-binary @temp_audio.wav)

  # Parse and display result
  is_deepfake=$(echo "$response" | jq -r '.is_deepfake')
  confidence=$(echo "$response" | jq -r '.confidence_score')

  if [ "$is_deepfake" = "true" ]; then
    echo "🚨 DEEPFAKE DETECTED - Confidence: ${confidence}%"
  else
    echo "✅ Voice Authentic - Confidence: ${confidence}%"
  fi

  # Clean up temp file
  rm temp_audio.wav

  # Wait before next chunk
  sleep 1
done
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

type LiveVoiceVerifier struct {
    APIKey string
    APIURL string
}

type AnalysisResult struct {
    IsDeepfake      bool    `json:"is_deepfake"`
    ConfidenceScore float64 `json:"confidence_score"`
    RequestID       string  `json:"request_id"`
}

func NewLiveVoiceVerifier(apiKey string) *LiveVoiceVerifier {
    return &LiveVoiceVerifier{
        APIKey: apiKey,
        APIURL: "https://securespeak-api-1064980124131.us-central1.run.app",
    }
}

func (v *LiveVoiceVerifier) StartVerification() {
    for {
        // Record 7-second audio chunk (implementation depends on audio library)
        audioData := v.recordChunk(7 * time.Second)

        // Analyze with API
        result, err := v.analyzeAudio(audioData)
        if err != nil {
            fmt.Printf("Error: %v\n", err)
            continue
        }

        // Handle result
        if result.IsDeepfake {
            fmt.Printf("🚨 DEEPFAKE DETECTED - Confidence: %.1f%%\n", result.ConfidenceScore)
        } else {
            fmt.Printf("✅ Voice Authentic - Confidence: %.1f%%\n", result.ConfidenceScore)
        }

        time.Sleep(1 * time.Second)
    }
}

func (v *LiveVoiceVerifier) analyzeAudio(audioData []byte) (*AnalysisResult, error) {
    req, err := http.NewRequest("POST", v.APIURL+"/analyze_live", bytes.NewReader(audioData))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+v.APIKey)
    req.Header.Set("Content-Type", "audio/wav")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var result AnalysisResult
    err = json.NewDecoder(resp.Body).Decode(&result)
    return &result, err
}

func (v *LiveVoiceVerifier) recordChunk(duration time.Duration) []byte {
    // Implementation depends on your audio capture library
    // This is a placeholder - use libraries like PortAudio bindings
    return []byte{}
}

func main() {
    verifier := NewLiveVoiceVerifier("your_api_key")
    verifier.StartVerification()
}
import java.net.URI;
import java.net.http.*;
import java.util.concurrent.*;
import java.time.Duration;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LiveVoiceVerifier {
    private static final String API_URL = "https://securespeak-api-1064980124131.us-central1.run.app";
    private final String apiKey;
    private final HttpClient httpClient;
    private final ObjectMapper objectMapper;
    private volatile boolean isRecording = false;

    public LiveVoiceVerifier(String apiKey) {
        this.apiKey = apiKey;
        this.httpClient = HttpClient.newHttpClient();
        this.objectMapper = new ObjectMapper();
    }

    public void startVerification() {
        this.isRecording = true;
        CompletableFuture.runAsync(this::continuousAnalysis);
    }

    private void continuousAnalysis() {
        while (isRecording) {
            try {
                // Record 7-second chunk (implementation depends on audio library)
                byte[] audioData = recordChunk(Duration.ofSeconds(7));

                // Analyze with API
                AnalysisResult result = analyzeAudio(audioData);

                // Handle result
                if (result.isDeepfake) {
                    System.out.printf("🚨 DEEPFAKE DETECTED - Confidence: %.1f%%\n",
                        result.confidenceScore);
                } else {
                    System.out.printf("✅ Voice Authentic - Confidence: %.1f%%\n",
                        result.confidenceScore);
                }

                Thread.sleep(1000); // Wait 1 second before next chunk
            } catch (Exception e) {
                System.err.println("Error during verification: " + e.getMessage());
            }
        }
    }

    private AnalysisResult analyzeAudio(byte[] audioData) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(API_URL + "/analyze_live"))
            .header("Authorization", "Bearer " + apiKey)
            .header("Content-Type", "audio/wav")
            .POST(HttpRequest.BodyPublishers.ofByteArray(audioData))
            .build();
        HttpResponse<String> response = httpClient.send(request,
            HttpResponse.BodyHandlers.ofString());
        return objectMapper.readValue(response.body(), AnalysisResult.class);
    }

    private byte[] recordChunk(Duration duration) {
        // Implementation depends on your audio capture library
        // This is a placeholder - use libraries like Java Sound API
        return new byte[0];
    }

    public void stopVerification() {
        this.isRecording = false;
    }

    // Map the API's snake_case JSON fields onto Java field names
    @JsonIgnoreProperties(ignoreUnknown = true)
    public static class AnalysisResult {
        @JsonProperty("is_deepfake")
        public boolean isDeepfake;
        @JsonProperty("confidence_score")
        public double confidenceScore;
        @JsonProperty("request_id")
        public String requestId;
    }
}
// Usage
LiveVoiceVerifier verifier = new LiveVoiceVerifier("your_api_key");
verifier.startVerification();
Deepfake Detection Use Cases
Social Engineering Defense
Protect against voice cloning attacks targeting customer service representatives and executives.
Fraud Prevention
Detect deepfake voices attempting unauthorized access to accounts or sensitive information.
Content Verification
Verify authenticity of audio content for media platforms and news organizations.
Real-World Applications
Scenario | Threat Detection | Protection Level |
---|---|---|
CEO Impersonation Attacks | Voice cloning for financial fraud | Real-time detection during calls |
Customer Service Scams | Deepfake voices bypassing verification | Continuous monitoring of support calls |
Media Manipulation | Fake news and disinformation campaigns | Content authenticity verification |
Identity Theft | Synthetic voices for account takeover | Multi-factor authentication enhancement |
Political Deepfakes | Fake speeches and statements | Public figure voice verification |
Romance Scams | AI-generated voices in dating fraud | Social platform safety features |
Billing & Usage
SecureSpeakAI uses a pay-per-use model where you purchase credits and consume them with each API call.
Pricing
Each successful API call consumes credits from your account balance. Failed requests are not charged.
You only pay for successful analyses. Failed requests due to invalid audio or system errors are not charged to your account.
Managing Your Balance
You can check your current balance through the dashboard or via the billing API:
curl -X GET https://securespeak-api-1064980124131.us-central1.run.app/api/billing/balance \
-H "Authorization: Bearer YOUR_FIREBASE_ID_TOKEN" \
-H "X-Firebase-Auth: true"
Response Example
{
  "balance": 45.32,
  "usage": {
    "file_calls": 150,
    "url_calls": 75,
    "live_seconds": 1250.5
  },
  "pricing": {
    "analyze_file": 0.018,
    "analyze_url": 0.025,
    "analyze_live": 0.032
  },
  "pricing_notes": {
    "analyze_file": "Per file analysis",
    "analyze_url": "Per URL analysis",
    "analyze_live": "Per second of audio for live analysis"
  }
}
Live analysis usage is now tracked in seconds rather than calls. The live_seconds field shows the total duration of audio processed through live analysis.
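The usage and pricing objects in the balance response are enough to recompute spend client-side. A sketch that multiplies each usage counter by its rate, using the field names from the sample response; `total_spend` is an illustrative helper:

```python
# Sketch: recomputing spend from the usage and pricing objects of the
# balance response. Live analysis is priced per second; the other two
# endpoints are priced per call.
def total_spend(usage, pricing):
    return round(
        usage["file_calls"] * pricing["analyze_file"]
        + usage["url_calls"] * pricing["analyze_url"]
        + usage["live_seconds"] * pricing["analyze_live"],
        3,
    )
```

With the sample values above (150 file calls, 75 URL calls, 1250.5 live seconds), this comes to $44.591.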
Adding Funds
Purchase additional credits through our secure Stripe integration in the dashboard.
Pricing & Quotas
SecureSpeakAI offers transparent pay-per-use pricing. You're only charged for successful API calls.
API Call Pricing
Each successful analysis is charged based on the endpoint used:
Endpoint | Cost per Request | Average Processing Time | Best For |
---|---|---|---|
/analyze_file | $0.018 | 2-5 seconds | Batch processing, file uploads |
/analyze_url | $0.025 | 3-8 seconds | Social media monitoring, content verification |
/analyze_live | $0.032 per second | 1-3 seconds | Real-time analysis, live calls, streaming audio |
Cost Examples
1,000 File Analyses
$18
Perfect for batch processing audio files
1,000 URL Analyses
$25
Ideal for social media monitoring
1,000 Seconds Live Analysis
$32
Real-time voice verification (about 16.7 minutes)
You're only charged for successful analyses. Failed requests due to invalid audio, network errors, or server issues are not charged to your account.
Payment & Billing
Secure Payments
All payments processed through Stripe with enterprise-grade security and PCI compliance.
Pay-Per-Use
Simple pricing model - you only pay for what you use with no monthly fees or commitments.
Detailed Invoicing
Comprehensive usage reports and invoices for easy expense tracking and compliance.
Usage Monitoring
Track your usage and API performance through the dashboard or the billing API:
curl -X GET https://securespeak-api-1064980124131.us-central1.run.app/api/billing/usage \
-H "Authorization: Bearer YOUR_FIREBASE_ID_TOKEN"
import requests

firebase_token = "your_firebase_id_token"
headers = {"Authorization": f"Bearer {firebase_token}"}

response = requests.get(
    "https://securespeak-api-1064980124131.us-central1.run.app/api/billing/usage",
    headers=headers
)
usage_data = response.json()
print(f"Account balance: ${usage_data['account_balance']}")
print(f"Total requests this month: {usage_data['monthly_requests']}")
print(f"Total cost this month: ${usage_data['monthly_cost']}")
const axios = require('axios');

// Run inside an async function
const firebaseToken = 'your_firebase_id_token';

const response = await axios.get(
  'https://securespeak-api-1064980124131.us-central1.run.app/api/billing/usage',
  {
    headers: {
      'Authorization': `Bearer ${firebaseToken}`
    }
  }
);
console.log('Account balance:', response.data.account_balance);
console.log('Monthly requests:', response.data.monthly_requests);
console.log('Monthly cost:', response.data.monthly_cost);
Getting Started
Ready to start using SecureSpeakAI? Follow these steps:
- Create Account: Sign up for a free SecureSpeakAI account
- Add Funds: Add money to your account through the dashboard
- Generate API Key: Create your API key in the dashboard
- Start Analyzing: Make your first API call and detect deepfakes!
Start using our pay-per-use pricing model immediately. Credit card required for setup and billing.
Error Handling
The SecureSpeakAI API uses conventional HTTP response codes to indicate the success or failure of an API request.
Code | Description |
---|---|
200 - OK | Everything worked as expected |
400 - Bad Request | The request was unacceptable, often due to invalid parameters |
401 - Unauthorized | No valid API key provided |
402 - Payment Required | Insufficient balance to process the request |
404 - Not Found | The requested resource doesn't exist |
429 - Too Many Requests | Rate limit exceeded |
500+ - Server Errors | Something went wrong on our end |
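A client can branch on these codes mechanically. A sketch; the suggested actions (backoff on 429, top-up on 402) are conventions, not API requirements:

```python
# Sketch: mapping the status codes in the table above to a suggested
# client-side action. The action names are illustrative.
def next_action(status_code):
    if status_code == 200:
        return "ok"
    if status_code == 401:
        return "check_api_key"
    if status_code == 402:
        return "add_funds"
    if status_code == 429:
        return "retry_with_backoff"
    if status_code >= 500:
        return "retry_later"
    return "fix_request"  # 400, 404 and other client errors
```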
Error Response Format
{
  "error": "Invalid audio format or processing failed",
  "code": 400,
  "timestamp": "2024-01-15T10:30:45Z"
}
Payment Required Error (402)
For live analysis endpoints, insufficient balance errors include detailed cost breakdown:
{
  "error": "Insufficient balance. Required: $0.320 (10.00s × $0.032/s), Available: $0.150",
  "code": 402,
  "details": {
    "required_amount": 0.320,
    "current_balance": 0.150,
    "deficit": 0.170,
    "duration_seconds": 10.0,
    "per_second_cost": 0.032
  }
}
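The details object above makes the shortfall machine-readable. A sketch that derives the minimum top-up from it; `required_topup` is an illustrative helper using the field names from the example:

```python
# Sketch: reading the details object of a 402 response to work out the
# minimum top-up needed to retry the request.
def required_topup(error_body):
    details = error_body["details"]
    return round(details["required_amount"] - details["current_balance"], 3)
```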
Common Errors
- Invalid API Key: Ensure your API key is correct and active
- Insufficient Balance: Add funds to your account through the dashboard
- Invalid Audio Format: Check that your audio file is in a supported format
- File Too Large: Ensure your audio file is under the size limit
- Invalid URL: Verify the URL is accessible and contains audio content