# TransformersChat

Basic chat task.

Module: `implementation.tasks`

## Default predictor
```python
TransformersPipeline(
    TransformersPipelineConfig(
        task="text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        kwargs={
            "max_new_tokens": 256,
            "do_sample": True,
            "temperature": 0.3,
            "top_k": 50,
            "top_p": 0.95,
        },
    ),
    input_class=TransformersBasicInput,
    output_class=TransformersBasicOutput,
)
```
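The default predictor wraps a Hugging Face `text-generation` pipeline with the sampling settings shown above. As a minimal sketch, the snippet below collects those generation kwargs and renders a chat as a Zephyr-style prompt, which is the template TinyLlama-Chat was trained with; the `build_prompt` helper is hypothetical (the real task's preprocessor may instead rely on the tokenizer's built-in chat template):

```python
# Generation settings the default predictor passes to the Hugging Face
# text-generation pipeline (values copied from the config above).
GENERATION_KWARGS = {
    "max_new_tokens": 256,
    "do_sample": True,
    "temperature": 0.3,   # low temperature -> mostly deterministic replies
    "top_k": 50,
    "top_p": 0.95,
}

def build_prompt(messages):
    """Render a list of {"role", "content"} messages as a Zephyr-style
    prompt (an assumption about the chat format; the actual preprocessor
    may use tokenizer.apply_chat_template instead)."""
    parts = [f"<|{m['role']}|>\n{m['content']}</s>" for m in messages]
    parts.append("<|assistant|>")  # cue the model to answer next
    return "\n".join(parts)

prompt = build_prompt([{"role": "user", "content": "Hello!"}])
# prompt == "<|user|>\nHello!</s>\n<|assistant|>"
```

With a loaded pipeline, the call would then be along the lines of `pipe(prompt, **GENERATION_KWARGS)`, with the postprocessor stripping the prompt prefix from the generated text.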
## Methods and properties

### `__init__`

Arguments:

### `ChatPreprocessor`

#### `execute`

Arguments:

Returns:

### `ChatPostprocessor`

#### `execute`

Arguments:

Returns: