Instructions to use AmazonScience/qanlu with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use AmazonScience/qanlu with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="AmazonScience/qanlu")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu")
model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu")
```
- Notebooks
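QANLU casts natural-language understanding (e.g. slot filling) as extractive question answering: each slot becomes a question, and the utterance serves as the context. A minimal sketch of preparing such inputs for the pipeline is shown below; the helper name and the question templates are illustrative assumptions, not part of the model card.

```python
# Hedged sketch: pair each slot question with the utterance as QA context.
# build_qa_inputs and the example questions are hypothetical illustrations.
def build_qa_inputs(utterance, slot_questions):
    """Return one {question, context} dict per slot question."""
    return [{"question": q, "context": utterance} for q in slot_questions]

inputs = build_qa_inputs(
    "Book a flight to Boston tomorrow",
    ["What is the destination?", "When is the flight?"],
)
print(len(inputs))  # one QA input per slot question
```

Each resulting dict can be passed to the question-answering pipeline, e.g. `pipe(**inputs[0])`, which returns a span extracted from the utterance.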
- Google Colab
- Kaggle