Documentation

Comprehensive guides, tutorials, and API references for NeuraNote

Quickstart

Get started in 5 minutes

API Reference

Complete endpoint docs

Guides

In-depth tutorials

Getting Started

Quick Start Guide

Get up and running in 5 minutes

Account Setup

Configure your workspace and preferences

Uploading Documents

Learn how to add and process files

Understanding AI Features

Overview of notes, flashcards, and quizzes

Features

AI Editor

Advanced editing with AI assistance

Document Processing

How NeuraNote extracts and indexes content

AI-Generated Notes

Creating comprehensive study notes

Flashcards & SRS

Spaced repetition system explained

Quiz Generation

Automatic quiz creation from your materials

AI Tutor

Interactive chat with context from your docs

Audio Transcription

Convert lectures to searchable text

Text-to-Speech

Listen to your notes on the go

AI & Technology

How RAG Works

Understanding Retrieval-Augmented Generation

Citation System

How we ensure accuracy with sources

Vector Search

Semantic search across your documents

Embedding Generation

Technical details on content indexing

AI Models Used

GPT-4, GPT-4o-mini, and other model integrations

API Reference

Authentication

API key management and security

REST Endpoints

Complete API endpoint documentation

Webhooks

Event-driven integrations

Rate Limits

Understanding API quotas

Code Examples

Sample implementations in multiple languages

Integrations

Anki Export

Sync flashcards with Anki

Notion Integration

Export notes to Notion

Google Drive

Import files from Drive

Zapier

Automate workflows (coming soon)

Security & Privacy

Data Encryption

How we protect your content

Privacy Policy

What data we collect and why

Compliance

GDPR, CCPA, and education standards

Data Deletion

Removing your account and data

System Architecture

NeuraNote uses a hybrid architecture that combines a serverless control plane (Next.js on Vercel) with containerized workers for heavy compute tasks. This keeps user interactions low-latency while AI processing scales cost-effectively.

Control Plane (Serverless)
    ↓
Queue Layer (Redis)
    ↓
Data Plane (Docker Workers)
    ↓
Storage (MongoDB + R2)
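
As a rough illustration of the hand-off between the control plane and the data plane, the sketch below enqueues a document-processing job from a serverless route and consumes it in a containerized worker. It assumes a BullMQ queue on top of Redis; the queue name, job name, and payload shape are hypothetical examples, not NeuraNote's documented interface.

// Minimal sketch, assuming BullMQ + Redis. Names and payloads are illustrative only.
import { Queue, Worker } from 'bullmq';

const connection = { host: process.env.REDIS_HOST ?? 'localhost', port: 6379 };

// Control plane (e.g. a Next.js route handler): record the request and return fast.
const docQueue = new Queue('document-processing', { connection });

export async function enqueueDocument(documentId: string, userId: string) {
  // No heavy compute here; the serverless function only enqueues the job.
  await docQueue.add('process-document', { documentId, userId });
}

// Data plane (Docker worker): pull jobs and run the expensive pipeline steps.
const worker = new Worker(
  'document-processing',
  async (job) => {
    const { documentId } = job.data as { documentId: string };
    // Placeholder for the real pipeline: extract text, chunk it,
    // generate embeddings, and persist results to MongoDB / R2.
    console.log(`processing document ${documentId}`);
  },
  { connection }
);

worker.on('failed', (job, err) => {
  console.error(`job ${job?.id} failed`, err);
});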

All AI responses use Retrieval-Augmented Generation (RAG) to ground answers in your uploaded documents. We combine vector embeddings with metadata filtering for hybrid search, ensuring accurate and cited responses.
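
The sketch below shows one way the hybrid retrieval step could look: embed the user's question, then run a vector search constrained by metadata so results stay scoped to that user's documents. It assumes OpenAI embeddings and MongoDB Atlas Vector Search; the database, collection, index, and field names are hypothetical, not NeuraNote's actual schema.

// Minimal sketch, assuming OpenAI embeddings and Atlas Vector Search. Schema names are assumptions.
import { MongoClient } from 'mongodb';
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const mongo = new MongoClient(process.env.MONGODB_URI ?? 'mongodb://localhost:27017');

export async function retrieveChunks(question: string, userId: string) {
  // 1. Turn the question into a query vector.
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: question,
  });
  const queryVector = data[0].embedding;

  // 2. Vector search with a metadata filter (hybrid retrieval).
  await mongo.connect();
  const chunks = mongo.db('neuranote').collection('chunks');
  return chunks
    .aggregate([
      {
        $vectorSearch: {
          index: 'chunk_embeddings',
          path: 'embedding',
          queryVector,
          numCandidates: 200,
          limit: 8,
          filter: { userId: { $eq: userId } }, // metadata filter scopes results to the user's docs
        },
      },
      // 3. Keep only what the generator needs: chunk text plus citation metadata.
      { $project: { text: 1, documentId: 1, page: 1, score: { $meta: 'vectorSearchScore' } } },
    ])
    .toArray();
}

The retrieved chunks (with their document IDs and page numbers) are what allow generated answers to carry citations back to the source material.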

Documentation Under Development

We're actively building comprehensive documentation for all features and APIs. Check back soon for detailed guides, code examples, and best practices.