# TuneSalon AI

> Fine-tune your own AI model without writing code. Train on cloud GPUs or run locally for full privacy.

TuneSalon AI (https://tunesalonai.com) is a web-based fine-tuning platform for non-technical users. Upload your data, pick a model, and train a custom AI: no coding, no hardware setup. Your fine-tuned model is saved as a lightweight adapter file (~50MB) that sits on top of the base model. The platform also offers a free desktop app for complete privacy: train and chat locally, and nothing leaves your computer.

For a full introduction, see: https://tunesalonai.com/resource/about
For common questions, see: https://tunesalonai.com/resource/faq

## How It Works

1. Upload a training dataset (JSONL format)
2. Choose a base model from 10 curated open-source models (3B to 72B parameters)
3. Train on cloud GPUs (NVIDIA A100 80GB or B200 180GB)
4. Chat with your fine-tuned model, save it to your library, or publish it on the marketplace

## Features

### Train

Fine-tune open-source language models on cloud GPUs. Settings use plain English: "Creativity" instead of "Temperature", "Training Thoroughness" instead of "Epochs". Live training progress with loss curves, step tracking, and logs. Save your adapter to the cloud library or export it as GGUF for local use.

### Chat

Load a base model and attach up to 5 trained adapters at once. Upload reference documents for RAG context (PDF, DOCX, TXT) with smart chunking that understands tables and page layouts. Streaming responses, adjustable system prompt, and full session history. Export conversations as TXT, PDF, or JSONL.

### Dataset Generator

Convert documents (PDF, DOCX, TXT) into training datasets. Six structure formats: Q&A, Dialogue, FAQ, Chapters, Paragraphs, and Custom. Choose preset sizes from 50 to 1,000 examples with quality scoring. Free to use; no credits required.

### Library

Cloud storage for trained adapters. Organise with folders, download anytime. View GGUF exports.
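Step 1 of How It Works above asks for a JSONL dataset. As a minimal sketch, a valid JSONL file is just one JSON object per line; the `prompt`/`completion` field names here are an assumption for illustration, not TuneSalon's confirmed schema.

```python
# Write and validate a tiny JSONL training dataset.
# NOTE: the "prompt"/"completion" field names are an assumption;
# check TuneSalon's documentation for the exact schema it expects.
import json

examples = [
    {"prompt": "What is an adapter?",
     "completion": "A small file of fine-tuned weights that sits on top of a base model."},
    {"prompt": "Where does training run?",
     "completion": "On cloud GPUs, or locally via the desktop app."},
]

# JSONL = one JSON object per line.
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Quick validation pass: every line must parse and contain both fields.
with open("dataset.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
assert all("prompt" in r and "completion" in r for r in rows)
print(len(rows), "examples ready")
```

A malformed line (trailing comma, multi-line object) is the most common upload failure for JSONL, so a parse-every-line check like this catches problems before training starts.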
Track marketplace listings and earnings from published adapters.

### Marketplace

Browse and buy community adapters by category: Coding, Legal, Medical, Creative Writing, Finance, Education, Customer Service, Marketing, Research, Translation, Data Analysis, and General. Each listing has ratings, reviews, and a Q&A section. Publish your own adapters and earn 85% of each sale.

### Users and Profiles

Public profiles with avatar, banner, and bio. Share your published adapters and write blog posts with rich text, images, and file attachments. Search and browse other users.

### Store

Credit packs: Starter (500 credits, $5), Standard (1,500 credits, $12), Plus (3,750 credits, $30), Power (7,500 credits, $60).

Subscriptions: Pro ($25/month, 4,000 credits/month, 5GB storage), Max ($59/month, 10,000 credits/month, 20GB storage).

### Desktop App

Free standalone application for Windows. Train and chat entirely on your own hardware with complete privacy; no data leaves your computer. Download from the website or GitHub.

## Supported Models

All models are open-source with permissive licences (Apache 2.0 or MIT). Text-only language models; no multimodal support.

- Phi-4-mini (3.8B) — MIT
- Qwen3-4B — Apache 2.0
- Mistral-7B-v0.3 — Apache 2.0
- Qwen3-8B — Apache 2.0
- Ministral-3-8B — Apache 2.0
- Phi-4 (14B) — MIT
- Qwen3-14B — Apache 2.0
- Mistral-Small-24B — Apache 2.0
- Qwen3-32B — Apache 2.0 (B200 GPU only)
- Qwen2.5-72B — Apache 2.0 (B200 GPU only)

## Benchmark Results

TuneSalon fine-tuned models were tested against frontier AI (Claude Opus 4.6, Sonnet 4.6, GPT-5, GPT-5.4) across 5 real-world tasks. Each task used ~500 training examples, 3 epochs, and LoRA rank 16. Evaluation: 250 prompts (50 per task), scored by Perplexity as an independent judge on a 0-5 scale.
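For intuition on the rank-16 LoRA setting mentioned above, and on why adapters stay around the ~50MB quoted earlier, here is a back-of-envelope parameter count. The model dimensions below (hidden size, layer count, adapted projections) are assumptions for a generic 7B-class transformer, not TuneSalon's published configuration.

```python
# Back-of-envelope LoRA adapter size under assumed dimensions:
# a 7B-class model with 32 layers, hidden size 4096, and LoRA rank 16
# applied to the 4 attention projections per layer (q, k, v, o).
hidden = 4096
layers = 32
rank = 16
targets_per_layer = 4  # assumption: attention projections only

# Each adapted d x d weight gains two low-rank factors:
# A (rank x d) and B (d x rank), so 2 * rank * d new parameters.
params_per_weight = 2 * rank * hidden
total_params = layers * targets_per_layer * params_per_weight

size_mb = total_params * 2 / 1024**2  # fp16 stores 2 bytes per parameter
print(total_params, "adapter parameters")
print(round(size_mb), "MB")
```

Under these assumptions the adapter holds about 16.8M parameters, roughly 32 MB in fp16: the same order of magnitude as the ~50MB adapter files described above, and a tiny fraction of the billions of parameters in the frozen base model.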
Results (fine-tuned score / frontier best):

- Customer Support (4B model): 94% accuracy vs 100% frontier; consistent tone, fewer off-script responses
- Invoice Extraction (8B): 4.1/5 vs 4.8; eliminated reasoning text in JSON output, reduced hallucination
- E-Commerce Copy (14B): 4.1/5 vs 4.7; fixed repetition, improved feature coverage to 85-90%, most factually conservative
- Medical Extraction (32B): 4.05/5 vs 4.4; closed half the gap to frontier, fixed empty treatment fields
- Legal Rewrite (32B): 4.45/5 vs 4.7; matched GPT-5.4, learned clause restatement and temporal detail preservation

Key findings: Fine-tuning improves base models by ~0.3 points on average. Frontier models lead by 0.3-0.7 points. The gap is smallest on domain-specific tasks (medical, legal) and largest on general tasks (invoices, copywriting). Fine-tuning's strength is consistency, factual discipline, and style-learning, not raw capability. For tasks where you need reliable, repeatable output in a specific format or voice, fine-tuning closes most of the gap to frontier AI at a fraction of the cost.

## TuneSalon vs Other Platforms

- **vs OpenAI fine-tuning**: Of the platforms compared here, OpenAI is the only one where you cannot download your fine-tuned model. TuneSalon lets you export and own your adapter, supports open-source models only (no vendor lock-in), and includes built-in dataset generation and a marketplace; OpenAI has neither.
- **vs Unsloth**: Unsloth requires coding (Python/CLI). TuneSalon is fully no-code. Both support LoRA. Unsloth runs on your own GPU; TuneSalon provides cloud GPUs or a desktop app for local use.
- **vs Hugging Face AutoTrain**: AutoTrain requires some technical knowledge and configuration. TuneSalon is designed for non-technical users with plain-English settings, and includes chat, RAG, dataset generation, and a marketplace in one platform.
- **vs DIY (transformers + PEFT)**: Full flexibility, but requires ML expertise, GPU setup, and significant development time.
TuneSalon wraps the same underlying technology (LoRA via PEFT) in a no-code interface.

## Public Content

Visitors can explore TuneSalon without creating an account. The tutorial, resource articles, system info, training interface, chat, dataset generator, and store pages are all browsable in view-only mode. Educational resources at /resource cover: what fine-tuning is, real-world use cases, how LoRA works, benchmark results with specific numbers, and a detailed platform comparison.

## Technical Details

- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Adapters are lightweight (~50MB) and tied to their base model
- GGUF export available for running models locally with llama.cpp
- Cloud GPUs: NVIDIA A100 (80GB VRAM) and B200 (180GB VRAM)
- RAG: Docling document processing with section-aware chunking, FAISS vector search
- Built with React, FastAPI, Supabase, Stripe, and Modal (serverless GPU)

## Resource Pages

- About TuneSalon AI: https://tunesalonai.com/resource/about
- FAQ: https://tunesalonai.com/resource/faq
- What Is Fine-Tuning?: https://tunesalonai.com/resource/what-is-finetuning
- Where Fine-Tuning Matters: https://tunesalonai.com/resource/where-finetuning-matters
- Our Method: https://tunesalonai.com/resource/our-method
- Benchmark Results: https://tunesalonai.com/resource/benchmark
- Platform Comparison: https://tunesalonai.com/resource/platform-comparison
- Model Guide: https://tunesalonai.com/resource/model-guide
- GPU Guide: https://tunesalonai.com/resource/gpu-guide
- After You Export: https://tunesalonai.com/resource/post-export-guide
- Tutorial: https://tunesalonai.com/tutorial

## Links

- Website: https://tunesalonai.com
- Sitemap: https://tunesalonai.com/sitemap.xml
- Desktop app: https://github.com/Amblablah/tunesalon-ai-desktop/releases/latest
- Support: support@tunesalonai.com
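As a closing aside on the RAG pipeline listed under Technical Details (chunk the document, index it, retrieve relevant chunks per query): the sketch below shows that flow in miniature. TuneSalon uses Docling for section-aware chunking and FAISS for vector search; this stand-in scores chunks by word overlap purely to illustrate the chunk-then-retrieve idea, and is not the platform's implementation.

```python
# Toy chunk-and-retrieve sketch of a RAG flow.
# Real systems embed chunks into vectors and search with FAISS;
# word-overlap scoring here is a deliberate simplification.

def chunk(text: str, size: int = 12) -> list[str]:
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the query."""
    q = set(query.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

doc = ("Adapters are lightweight files that sit on top of a base model. "
       "Training runs on cloud GPUs such as the A100 and B200. "
       "Exported GGUF models can run locally with llama.cpp.")
chunks = chunk(doc)
print(retrieve("which GPUs are used for training", chunks))  # picks the GPU chunk
```

The retrieved chunk is what gets placed into the model's context before answering, which is why chunking quality (the table- and layout-aware splitting mentioned under Chat) matters so much for answer quality.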