Transformers schemas

Predefined inputs and outputs of transformers predictors


Module: implementation.predictors



TransformersBasicInput

Subclass of IOModel.


__init__

Arguments:

  • inputs (Any)




TransformersBasicOutput

Subclass of IOModel.


__init__

Arguments:

  • output (Any)
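
Below is a minimal usage sketch for these two schemas. The import path is assumed from the module name shown above and may differ in your installation (for example, it may need a utca. prefix), and the payloads are placeholders.

```python
# Import path mirrors the documented module name; adjust it to your install.
from implementation.predictors import (
    TransformersBasicInput,
    TransformersBasicOutput,
)

# Wrap raw data for a predictor call; "inputs" accepts whatever payload the
# underlying pipeline expects (a string, dict, list, tensors, ...).
model_input = TransformersBasicInput(inputs="UTCA makes pipelines composable.")

# Predictors return their result wrapped the same way.
model_output = TransformersBasicOutput(output=[{"label": "POSITIVE", "score": 0.98}])

print(model_input.inputs)
print(model_output.output)
```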




TransformersLogitsOutput

Subclass of IOModel.


__init__

Arguments:

  • logits (Any)




TransformersImageClassificationModelInput

Subclass of IOModel.


__init__

Arguments:

  • pixel_values (Any): Image representation.
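
The pixel_values tensor is normally produced by a Hugging Face image processor. The sketch below uses the standard transformers preprocessing API; the checkpoint name and image file are only examples, and the resulting tensor is what this schema's pixel_values field expects.

```python
from PIL import Image
from transformers import AutoImageProcessor

# Standard Hugging Face image preprocessing; the checkpoint is an example.
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
image = Image.open("cat.jpg")

pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]
print(pixel_values.shape)  # typically torch.Size([1, 3, 224, 224])
```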




TransformersTextToSpeechInput

Subclass of IOModel.


__init__

Arguments:

  • text_inputs (str): Text to process.




TransformersTextToSpeechOutput

Subclass of IOModel.


__init__

Arguments:

  • audio (Any): Audio data.

  • sampling_rate (int): Sampling rate.
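
A hedged sketch of consuming this output: it assumes audio is a NumPy waveform array (as Hugging Face text-to-speech pipelines usually return) and uses the external soundfile package to write a WAV file.

```python
import soundfile as sf

def save_speech(result, path: str = "speech.wav") -> None:
    """Write a TransformersTextToSpeechOutput-like result to a WAV file."""
    audio = result.audio
    # Some pipelines nest the waveform, e.g. {"audio": array}; unwrap if so.
    if isinstance(audio, dict) and "audio" in audio:
        audio = audio["audio"]
    sf.write(path, audio.squeeze(), result.sampling_rate)
```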




TransformersChartsAndPlotsModelInput

Subclass of IOModel.


__init__

Arguments:

  • flattened_patches (Any)

  • attention_mask (Any)




TransformersVisualQandAInput

Subclass of IOModel.


__init__

Arguments:

  • image (Image.Image): Input image.

  • question (str)
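
A minimal construction sketch, assuming the import path from the module name above (it may need a package prefix) and using Pillow to load the image; the file name and question are placeholders.

```python
from PIL import Image

# Import path mirrors the documented module name; adjust it to your install.
from implementation.predictors import TransformersVisualQandAInput

vqa_input = TransformersVisualQandAInput(
    image=Image.open("invoice.png"),  # any PIL image
    question="What is the total amount due?",
)
```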




TransformersImageModelInput

Subclass of IOModel.


__init__

Arguments:

  • input_ids (Any)

  • token_type_ids (Any)

  • attention_mask (Any)

  • pixel_values (Any)

  • pixel_mask (Any)




TransformersEmbeddingInput

Subclass of IOModel.


__init__

Arguments:

  • encodings (Any)




TransformersEmbeddingOutput

Subclass of IOModel.


__init__

Arguments:

  • last_hidden_state (Any)
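
A common way to turn last_hidden_state into sentence embeddings is mean pooling over the token axis. The schema itself does not do this; the sketch below is generic and assumes last_hidden_state is a torch tensor of shape (batch, seq_len, hidden) and that the attention mask from the original encodings is available.

```python
import torch

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts
```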




TransformersEntityLinkingInput

Subclass of IOModel.


__init__

Arguments:

  • encodings (Any)

  • num_beams (int)

  • num_return_sequences (int)

  • prefix_allowed_tokens_fn (Callable[[torch.Tensor, int], List[int]])
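
prefix_allowed_tokens_fn constrains generation to a whitelist of token ids at each decoding step, the same hook Hugging Face generate() exposes for constrained decoding. Below is a placeholder callable matching the type shown above; the allowed-token logic (here a fixed list) would normally come from a trie over entity names, and the argument order follows the documented signature.

```python
from typing import List
import torch

# Placeholder whitelist; a real entity-linking setup derives this per step.
ALLOWED_TOKEN_IDS = [0, 1, 2]

def allow_only_known_entities(generated_ids: torch.Tensor, batch_id: int) -> List[int]:
    # Called at every decoding step; only the returned token ids may be emitted.
    return ALLOWED_TOKEN_IDS
```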




TransformersEntityLinkingOutput

Subclass of IOModel.


__init__

Arguments:

  • sequences (Any)

  • sequences_scores (Optional[Any], optional): Defaults to None.




TransformersTextualQandAInput

Subclass of IOModel.


__init__

Arguments:

  • question (str)

  • context (str)




TransformersTextualQandAOutput

Subclass of IOModel.


__init__

Arguments:

  • answer (Optional[str], optional): Defaults to None.

  • score (float): Defaults to 0.
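
A hedged sketch of building the question/context input and handling an empty result. It assumes the import path from the module name above and that the output can be constructed with its documented defaults (answer=None, score=0).

```python
# Import path mirrors the documented module name; adjust it to your install.
from implementation.predictors import (
    TransformersTextualQandAInput,
    TransformersTextualQandAOutput,
)

qa_input = TransformersTextualQandAInput(
    question="Who maintains UTCA?",
    context="UTCA is an open-source framework developed by Knowledgator.",
)

empty = TransformersTextualQandAOutput()  # answer=None, score=0 by default
if empty.answer is None:
    print("no answer found")
```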




TransformersDETROutput

Subclass of IOModel.


__init__

Arguments:

  • pred_boxes (Any)

  • logits (Any)
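
DETR-style outputs are usually post-processed by converting logits to class probabilities and keeping boxes above a confidence threshold. The sketch below is generic: it assumes logits has shape (batch, num_queries, num_classes + 1) with the last class reserved for "no object", and pred_boxes has shape (batch, num_queries, 4).

```python
import torch

def filter_detections(pred_boxes: torch.Tensor, logits: torch.Tensor, threshold: float = 0.7):
    """Keep queries whose best non-background class score exceeds the threshold."""
    probs = logits.softmax(-1)[..., :-1]   # drop the "no object" class
    scores, labels = probs.max(-1)         # best class per query
    keep = scores > threshold
    return pred_boxes[keep], labels[keep], scores[keep]
```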


