We’re investing in Fastino via Deepwater Venture Fund II based on the company’s vision to deliver task-optimized language models with industry-leading accuracy across language-related tasks.
We believe Fastino will enable developers to do more with AI, through a family of models that are more accurate, faster, and safer than language models that exist today.
Deepwater participated in a $7M pre-seed round led by M12, Microsoft’s venture fund, and Insight Partners. Additional participants in the round include NEA, CRV, and various notable angels (GitHub’s CEO, Google executives, and others).
Fastino has created a novel architecture for language models, delivering accurate and controllable task-optimized LLMs that run on significantly less compute than other models currently on the market:
- Reduced compute requirements and improved speed and accuracy enable enterprises to deploy AI at a more rapid pace.
- Models developed by Fastino run up to 1,000 times faster than traditional generalist models on less compute (e.g., they can run on CPUs in edge devices).
- Fastino combines cutting-edge model architecture with intuitive developer tools for fine-tuning and validation.
- The initial Fastino team includes AI researchers and developers from Stanford, Oxford, Berkeley, Microsoft, and Google.
The rising tide of AI cannot be driven by data and compute alone; ongoing advancements in model architecture and related tools will be critical to achieving widespread adoption. Given Fastino’s work on industry-leading models, we are confident the company will play an important role in the broader advancement and enterprise deployment of AI.