State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:

- Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
- Computer Vision: image classification, object detection, and segmentation.
- Audio: automatic speech recognition and audio classification.
- Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.

Transformers supports framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model's life: train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX or TorchScript for deployment in production environments.
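As a minimal sketch of the TorchScript export path mentioned above: the example below traces a tiny stand-in PyTorch module rather than an actual pretrained Transformers checkpoint (the class name and file name here are hypothetical), but the trace-save-reload cycle is the same shape a production export would follow.

```python
import torch
import torch.nn as nn

# Toy stand-in model (hypothetical); a real workflow would export a
# pretrained Transformers model instead.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 2)

    def forward(self, x):
        return self.linear(x)

model = TinyClassifier().eval()
example = torch.randn(1, 8)

# Trace the model into TorchScript; the saved artifact can be loaded
# without Python, e.g. from C++ via libtorch, for production serving.
scripted = torch.jit.trace(model, example)
scripted.save("tiny_classifier.pt")

# Reload and check the traced model matches the original's outputs.
reloaded = torch.jit.load("tiny_classifier.pt")
assert torch.allclose(model(example), reloaded(example))
```

The same idea applies to ONNX export (`torch.onnx.export` takes a model plus an example input in much the same way), which is what enables serving the model from non-PyTorch runtimes.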