Mention rwkv v6 in the readmes. (#1784)

Laurent Mazare 2024-03-01 08:58:30 +01:00 committed by GitHub
parent 979deaca07
commit 64d4038e4f
2 changed files with 4 additions and 4 deletions

diff --git a/README.md b/README.md

@@ -79,7 +79,7 @@ We also provide a some command line based examples using state of the art models
 - [StarCoder](./candle-examples/examples/bigcode/) and
   [StarCoder2](./candle-examples/examples/starcoder2/): LLM specialized to code generation.
 - [Qwen1.5](./candle-examples/examples/qwen/): Bilingual (English/Chinese) LLMs.
-- [RWKV v5](./candle-examples/examples/rwkv/): An RNN with transformer level LLM
+- [RWKV v5 and v6](./candle-examples/examples/rwkv/): An RNN with transformer level LLM
   performance.
 - [Replit-code-v1.5](./candle-examples/examples/replit-code/): a 3.3b LLM specialized for code completion.
 - [Yi-6B / Yi-34B](./candle-examples/examples/yi/): two bilingual
@@ -203,7 +203,7 @@ If you have an addition to this list, please submit a pull request.
 - Bert.
 - Yi-6B and Yi-34B.
 - Qwen1.5.
-- RWKV.
+- RWKV v5 and v6.
 - Quantized LLMs.
   - Llama 7b, 13b, 70b, as well as the chat and code variants.
   - Mistral 7b, and 7b instruct.

diff --git a/candle-examples/examples/rwkv/README.md b/candle-examples/examples/rwkv/README.md

@@ -2,8 +2,8 @@
 The [RWKV model](https://wiki.rwkv.com/) is a recurrent neural network model
 with performance on par with transformer architectures. Several variants are
-available, candle implements the v5 version and can be used with Eagle 7B([blog
-post](https://blog.rwkv.com/p/eagle-7b-soaring-past-transformers)).
+available, candle implements the v5 and v6 versions and can be used with
+Eagle 7B([blog post](https://blog.rwkv.com/p/eagle-7b-soaring-past-transformers)).
 ```bash
 $ cargo run --example rwkv --release -- --prompt "The smallest prime is "
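
The diff only touches prose, so the run command itself is unchanged. As a minimal sketch of how the newly mentioned v6 support might be exercised: the `--which` flag and the `world1b6` variant name below are assumptions, not shown anywhere in this commit; run the example with `--help` to see the actual options.

```bash
# Default invocation from the README (unchanged by this commit).
cargo run --example rwkv --release -- --prompt "The smallest prime is "

# Hypothetical v6 invocation: the `--which` flag and the `world1b6`
# value are assumptions, not confirmed by this diff. Check the real
# options with:
#   cargo run --example rwkv --release -- --help
cargo run --example rwkv --release -- --which world1b6 --prompt "The smallest prime is "
```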