llama.go is like llama.cpp in pure Golang. The code of the project is based on the legendary ggml.cpp framework by Georgi Gerganov, written in C++ with the same attitude to performance and elegance. The models store FP32 weights, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B, and double that, 64 GB, for LLaMA-13B.
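As a rough sanity check of those numbers (an illustrative sketch, not code from the project), the arithmetic is simply 4 bytes per FP32 parameter, so the weights alone for 7B and 13B parameters already approach those RAM figures before any runtime overhead:

```go
package main

import "fmt"

func main() {
	// Back-of-the-envelope estimate: each FP32 weight takes 4 bytes.
	const bytesPerFP32 = 4
	for _, m := range []struct {
		name   string
		params float64 // parameter count in billions
	}{
		{"LLaMA-7B", 7},
		{"LLaMA-13B", 13},
	} {
		gib := m.params * 1e9 * bytesPerFP32 / (1 << 30)
		fmt.Printf("%s: ~%.0f GiB of RAM just for the weights\n", m.name, gib)
	}
}
```

This prints roughly 26 GiB for LLaMA-7B and 48 GiB for LLaMA-13B, which is why the practical minimums above are 32 GB and 64 GB once runtime buffers are included.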
Features
- Tensor math in pure Golang
- Implement LLaMA neural net architecture and model loading
- Test with smaller LLaMA-7B model
- Be sure Go inference works exactly the same way as C++
- Let Go shine! Enable multi-threading and messaging to boost performance (see the sketch after this list)
- Cross-platform compatibility with Mac, Linux and Windows
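As a rough illustration of the multi-threading point above (a hypothetical sketch under assumed names and memory layout, not llama.go's actual code), a matrix-vector multiply can be split across goroutines by rows and joined with a sync.WaitGroup:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// matVec computes dst = mat * vec for a row-major rows x cols matrix,
// splitting the rows across one goroutine per CPU core.
func matVec(dst, mat, vec []float32, rows, cols int) {
	workers := runtime.NumCPU()
	chunk := (rows + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		start, end := w*chunk, (w+1)*chunk
		if end > rows {
			end = rows
		}
		if start >= end {
			break
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			for r := start; r < end; r++ {
				var sum float32
				row := mat[r*cols : (r+1)*cols]
				for c, x := range vec {
					sum += row[c] * x
				}
				dst[r] = sum
			}
		}(start, end)
	}
	wg.Wait()
}

func main() {
	const rows, cols = 4, 3
	mat := []float32{
		1, 0, 0,
		0, 1, 0,
		0, 0, 1,
		1, 1, 1,
	}
	vec := []float32{2, 3, 4}
	dst := make([]float32, rows)
	matVec(dst, mat, vec, rows, cols)
	fmt.Println(dst) // [2 3 4 9]
}
```

Splitting by rows keeps each goroutine writing to a disjoint range of dst, so the workers need no locking beyond the final WaitGroup join.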
License
MIT License