Private LLM runs powerful models like Mixtral 8x7B, Mistral, and Gemma natively on Apple Silicon. Created with privacy at its core.
🔗 https://privatellm.app
- Macs can run larger models (with 32k-token context windows) through Private LLM, no Nvidia RTX card or discrete GPU required.
- Adding AI to your workflow becomes easier! No code needed: just your creativity and Apple Shortcuts to do the prompt engineering.
- No Internet? Private LLM gives you offline, system-wide text tools such as summarization and grammar correction.
- The story of Private LLM begins with two brothers from India working to create a locally running chatbot with privacy at its core.
Windows users have Chat with RTX; now Apple users have Private LLM. The future of AI is on-device.
Video: