Former OpenAI CTO Launches Tinker, a New Training API for Language Models
Former OpenAI CTO Mira Murati has launched Tinker, a new training API for language models, through her company Thinking Machines. The move signals that Murati is not betting on the short-term dominance of OpenAI's proprietary models, nor on an imminent super-AI breakthrough.
Tinker supports a range of open-weight models, including Meta's Llama and Alibaba's Qwen families, up to the large mixture-of-experts model Qwen3-235B-A22B. It aims to lower the barrier to building custom models on open weights, especially for teams that lack substantial computational resources.
Tinker is supplemented by the 'Tinker Cookbook', a library of common post-training methods intended to help users avoid typical fine-tuning mistakes. The underlying technology is based on LoRA (Low-Rank Adaptation): because each job trains only a small set of adapter weights while the base model stays frozen, multiple fine-tuning runs can share the same hardware in parallel. This lets researchers and developers train or fine-tune their own models on open weights without worrying about the underlying infrastructure; a brief sketch of the LoRA mechanism follows below.
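To make the mechanism concrete, here is a minimal LoRA sketch using the open-source Hugging Face PEFT library rather than Tinker's own API, which this article does not show; the model ID and hyperparameters are illustrative assumptions. The key point is that only the small adapter matrices are trainable, which is why several fine-tuning jobs can share one frozen copy of the base model.

```python
# A minimal LoRA fine-tuning sketch using Hugging Face PEFT,
# not Tinker's own API; model ID and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load an open-weight base model; its weights will stay frozen.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# LoRA injects small trainable low-rank matrices into selected
# projection layers; the base weights themselves are never updated.
config = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling applied to adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of base params
```

Because each adapter is only a few hundred megabytes at most, a provider can keep one copy of the base model in memory and swap or batch many customers' adapters against it, which is the economic basis for running parallel fine-tuning jobs on shared hardware.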
Murati left OpenAI in the fall of 2024 amid reported internal tensions. She and other former OpenAI employees appear to be betting that fine-tuned open-weight models will offer more flexibility and better economics than proprietary models like GPT-5. Tinker itself runs as a managed service on Thinking Machines' internal compute clusters, which handle resource management, fault tolerance, and scheduling.
By supporting open-weight models and providing a user-friendly platform, Tinker aims to democratize language model development, and it could reshape the AI landscape by encouraging more innovation and collaboration.