TransformersVisualQandA
Visual Q&A task
Module: implementation.tasks
Default predictor
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "dandelin/vilt-b32-finetuned-vqa"
)
predictor = TransformersModel(
    TransformersModelConfig(
        model=model
    ),
    input_class=TransformersImageModelInput,
    output_class=TransformersLogitsOutput,
)
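For reference, the underlying checkpoint can also be exercised directly with the Hugging Face transformers API. The sketch below is illustrative only: the image URL and question are placeholders, and it bypasses this module's input and output classes.

import requests
import torch
from PIL import Image
from transformers import ViltForQuestionAnswering, ViltProcessor

# Placeholder inputs for illustration only.
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
question = "How many cats are on the couch?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
vqa_model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Encode the image/question pair and run a forward pass.
image = Image.open(requests.get(image_url, stream=True).raw)
encoding = processor(image, question, return_tensors="pt")
with torch.no_grad():
    logits = vqa_model(**encoding).logits

# Map the highest-scoring logit back to its answer label.
best = logits.argmax(-1).item()
print(vqa_model.config.id2label[best])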
Methods and properties
__init__
Arguments:
TransformersVisualQandAOutput
__init__
Arguments:
TransformersVisualQandAMultianswerOutput
__init__
Arguments:
VisualQandAPreprocessor
__init__
Arguments:
execute
Arguments:
Returns:
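As a rough illustration of the step a visual Q&A preprocessor performs (pairing a question with an image and producing model-ready tensors), a plain transformers-based sketch might look as follows. The function name and arguments are hypothetical and are not part of this module's API.

from PIL import Image
from transformers import ViltProcessor

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

def preprocess(image_path, question):
    # Illustrative only: open the image and encode it together with the
    # question into the tensors the ViLT VQA model expects.
    image = Image.open(image_path).convert("RGB")
    return processor(image, question, return_tensors="pt")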
VisualQandASingleAnswerPostprocessor
execute
Arguments:
Returns:
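As an illustration of single-answer postprocessing, the highest-scoring logit is mapped back to its answer label. The helper below is hypothetical and operates on raw transformers outputs rather than this module's output classes.

import torch

def postprocess_single_answer(logits, id2label):
    # Illustrative only: take the argmax over the answer vocabulary
    # and return the corresponding label string.
    best = logits.argmax(-1).item()
    return id2label[best]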
VisualQandAMultianswerPostprocessor
__init__
Arguments:
execute
Arguments:
Returns:
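A multi-answer postprocessor typically returns the top-k answers together with their scores. The sketch below is hypothetical: it applies a sigmoid (the ViLT VQA head is trained with a multi-label objective) and keeps the k highest-scoring labels; the top_k parameter is an assumption.

import torch

def postprocess_multi_answer(logits, id2label, top_k=5):
    # Illustrative only: score every candidate answer, then keep the
    # top_k labels together with their scores.
    scores = torch.sigmoid(logits).squeeze(0)
    values, indices = scores.topk(top_k)
    return [(id2label[i.item()], v.item()) for v, i in zip(values, indices)]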