Text Generation
Transformers
PyTorch
code
gpt2
custom_code
Eval Results (legacy)
text-generation-inference
Instructions for using bigcode/santacoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bigcode/santacoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigcode/santacoder", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder", trust_remote_code=True)
```
- Notebooks
- Google Colab
- Kaggle
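As a quick check of the Transformers snippet above, the pipeline can be called directly on a code prompt. The prompt and generation settings below are illustrative examples, not values from the model card.

```python
from transformers import pipeline

# Custom modeling code is fetched from the Hub, hence trust_remote_code=True.
pipe = pipeline("text-generation", model="bigcode/santacoder", trust_remote_code=True)

# Illustrative prompt: SantaCoder is a code model, so prompt it with the start of a function.
prompt = "def fibonacci(n):"
outputs = pipe(prompt, max_new_tokens=64, do_sample=True, temperature=0.2)
print(outputs[0]["generated_text"])
```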
- Local Apps
- vLLM
How to use bigcode/santacoder with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigcode/santacoder"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigcode/santacoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```sh
docker model run hf.co/bigcode/santacoder
```
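Because the vLLM server exposes an OpenAI-compatible API, the curl call above can also be issued from Python. This is a minimal sketch assuming the `openai` client package is installed and the server from the previous step is listening on port 8000; the same code works against the SGLang server below if the base URL is changed to port 30000.

```python
from openai import OpenAI

# Point the client at the local vLLM server; the API key is not checked locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="bigcode/santacoder",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```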
- SGLang
How to use bigcode/santacoder with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigcode/santacoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigcode/santacoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bigcode/santacoder" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigcode/santacoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use bigcode/santacoder with Docker Model Runner:
```sh
docker model run hf.co/bigcode/santacoder
```
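SantaCoder is trained with fill-in-the-middle (FIM), so it can also complete code between a given prefix and suffix rather than only left to right. The sketch below assumes the hyphenated special tokens `<fim-prefix>`, `<fim-suffix>`, and `<fim-middle>`; as discussion #41 in the list below notes, this spelling differs from StarCoder's `fim_` tokens, so verify it against the model's tokenizer before relying on it.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Fill-in-the-middle prompt: the model generates the code that belongs between
# the prefix and the suffix. Token spelling is an assumption (see discussion #41 below).
prefix = "def fibonacci(n):\n    "
suffix = "\n    return fibonacci(n - 1) + fibonacci(n - 2)"
prompt = f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```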
Community discussions and pull requests on the model repository:

- Adding `safetensors` variant of this model (#45, opened about 2 years ago by SFconvertbot)
- Adding Evaluation Results (#44, opened over 2 years ago by leaderboard-pr-bot)
- Dataset used to train SantaCoder (#43, opened over 2 years ago by nihaljn)
- Token inconsistency with Starcoder: fim_ or fim- (#41, opened over 2 years ago by yuryya, 4 comments)
- Question about model architecture (#40, opened almost 3 years ago by sh0416, 1 comment)
- Stopping criteria (#39, opened almost 3 years ago by Mariusbrm)
- Deployment from tarball in sagemaker gives error requiring trust_remote_code (#37, opened almost 3 years ago by grohj)
- How to reproduce Table 7? (#36, opened almost 3 years ago by sh0416)
- Adding `safetensors` variant of this model (#35, opened almost 3 years ago by andyh1469)
- I want add a program language of santacoder, use pretrain_gpt_1B_santacoder.sh scipt file (#34, opened almost 3 years ago by iawen, 👍 2, 2 comments)
- how to download fim benchmark data? (#33, opened almost 3 years ago by zxyscz)
- SantaCoder Hyperparameter Insights (#30, opened about 3 years ago by nandovallec, 👍 1)
- Saving and loading bigcode/santacoder model on a mac results in "ModuleNotFoundError: No module named 'transformers_modules'" (#29, opened about 3 years ago by kanandk, 1 comment)
- Deploying bigcode/santacoder on sagemaker gives an error (#28, opened about 3 years ago by kanandk)
- Question: MHA to MQA Conversion (#27, opened about 3 years ago by kirazT)
- fix sequence length in santacoder and introduce new model type (#23, opened about 3 years ago by mayank-mishra)
- num_return_sequences and num_beams > 1 (#17, opened over 3 years ago by tlphams, 1 comment)