Tags: Image-to-Text · Transformers · PyTorch · Safetensors · English · git · image-text-to-text · vision · image-captioning
Instructions to use microsoft/git-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use microsoft/git-base with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: the "image-to-text" pipeline type is no longer supported in Transformers v5.
# Load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="microsoft/git-base")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForImageTextToText.from_pretrained("microsoft/git-base")
```
- Notebooks
- Google Colab
- Kaggle
How to increase the max length of the output?
#7
by AeroDEmi - opened
I want to fine-tune this model, but some of my targets have more than 1024 tokens.
I thought that setting processor.tokenizer.model_max_length = 2048 and model.config.max_position_embeddings = 2048 would do it...
But I'm facing a dimension error: The size of tensor a (2048) must match the size of tensor b (1024) at non-singleton dimension 1
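A likely cause (an inference from how learned position embeddings generally work in Transformers, not a confirmed answer from this thread): the checkpoint ships a position-embedding table with exactly 1024 rows, so raising config.max_position_embeddings after loading does not resize the trained weight, producing the 2048-vs-1024 shape mismatch. Below is a minimal, generic sketch of extending such a table with stand-in sizes; for the real model you would apply this to its position-embedding module, whose attribute path varies by Transformers version, and then fine-tune so the new positions learn useful values:

```python
import torch

# Generic sketch: grow a learned position-embedding table from 1024 to 2048
# rows while keeping the trained weights for the first 1024 positions.
# Sizes (1024, 2048, 768) are stand-ins for the checkpoint's actual shapes.
old_emb = torch.nn.Embedding(1024, 768)  # stands in for the pretrained table
new_emb = torch.nn.Embedding(2048, 768)  # new, larger table (random init)

with torch.no_grad():
    # Preserve the trained positions; rows 1024..2047 keep their random
    # initialization and must be learned during fine-tuning.
    new_emb.weight[:1024] = old_emb.weight
```

Note that any registered `position_ids` buffer sized to the old maximum would also need to be re-created at the new length.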