  • 1
    whisper-large-v3-turbo

    Whisper-large-v3-turbo delivers fast, multilingual speech recognition

    Whisper-large-v3-turbo is a high-performance automatic speech recognition (ASR) and translation model developed by OpenAI, based on a pruned version of Whisper large-v3. It reduces decoding layers from 32 to 4, offering significantly faster inference with only minor degradation in accuracy. Trained on over 5 million hours of multilingual data, it handles speech transcription, translation, and language identification across 99 languages. It supports advanced decoding strategies like beam search, temperature fallback, and timestamp prediction. Whisper-large-v3-turbo works with long-form and real-time audio, with chunked or sequential inference options. Optimizations such as torch.compile, Flash Attention 2, and SDPA are available to enhance performance. Despite being smaller than v3, it retains strong robustness to accents, noise, and zero-shot tasks, making it ideal for scalable, multilingual ASR use cases.
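The chunked long-form strategy mentioned above can be illustrated in plain Python. This is a conceptual sketch, not Whisper's actual implementation; the chunk length and overlap values are illustrative:

```python
def chunk_audio(samples, sr=16000, chunk_s=30, stride_s=5):
    """Split a long waveform into overlapping chunks for chunked ASR inference.

    Each chunk is chunk_s seconds long; consecutive chunks overlap by
    stride_s seconds so transcripts can be stitched at chunk boundaries.
    """
    chunk_len = chunk_s * sr
    step = (chunk_s - stride_s) * sr
    chunks = []
    for start in range(0, len(samples), step):
        chunks.append(samples[start:start + chunk_len])
        if start + chunk_len >= len(samples):
            break
    return chunks

# 90 seconds of dummy audio at 16 kHz
audio = [0.0] * (90 * 16000)
chunks = chunk_audio(audio)
print(len(chunks), [len(c) // 16000 for c in chunks])  # 4 chunks: 30s, 30s, 30s, 15s
```

In practice the model's own chunked inference handles this internally; the sketch only shows why overlapping windows let a 30-second-context model cover arbitrarily long audio.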
  • 2
    xlm-roberta-base

    Multilingual RoBERTa trained on 100 languages for NLP tasks

    xlm-roberta-base is a multilingual transformer model trained by Facebook AI on 2.5TB of filtered CommonCrawl data spanning 100 languages. It is based on the RoBERTa architecture and pre-trained using a masked language modeling (MLM) objective. Unlike models like GPT, which predict the next word, this model learns bidirectional context by predicting masked tokens, enabling robust sentence-level representations. xlm-roberta-base is particularly suited for cross-lingual understanding and classification tasks, offering strong performance on benchmarks across languages. It supports use in PyTorch, TensorFlow, JAX, and ONNX, and is best utilized when fine-tuned for downstream applications such as sentiment analysis, named entity recognition, or question answering.
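The masked language modeling objective described above can be sketched with a toy example. This is illustrative only: the real model masks subword tokens rather than whole words, and applies further corruption rules on top of the basic 15% masking rate:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="<mask>", seed=1):
    """Randomly replace a fraction of tokens with a mask token.

    Returns the corrupted sequence and the positions/original tokens the
    model would be trained to recover (the MLM objective).
    """
    rng = random.Random(seed)
    corrupted, labels = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            corrupted.append(mask_token)
            labels[i] = tok  # the model must predict this token
        else:
            corrupted.append(tok)
    return corrupted, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
corrupted, labels = mask_tokens(tokens)
# With seed=1, positions 0 ("the") and 8 ("dog") are masked.
print(corrupted)
print(labels)
```

Because masked positions see context on both sides, the model learns bidirectional representations, unlike next-word predictors such as GPT.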
  • 3
    xlm-roberta-large

    Large multilingual RoBERTa model trained on 100 languages

xlm-roberta-large is a multilingual transformer model pre-trained by Facebook AI on 2.5TB of filtered CommonCrawl data covering 100 languages. It is the large-sized version of XLM-RoBERTa, built on the RoBERTa architecture with enhanced multilingual capabilities. The model was trained using the masked language modeling (MLM) objective, where 15% of tokens are masked and predicted, enabling bidirectional context understanding. Unlike autoregressive models, it processes input holistically, capturing rich cross-lingual semantics. It is intended primarily for fine-tuning on downstream tasks such as classification, NER, or question answering across diverse languages. While powerful for multilingual NLP, it is not designed for text generation tasks. With about 561M parameters, the model offers robust performance on tasks spanning numerous linguistic contexts and scripts.
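The contrast with autoregressive models comes down to the attention mask: a bidirectional encoder like XLM-RoBERTa lets every position attend to every other position, while a causal decoder sees only the past. A minimal sketch of the two mask shapes:

```python
def causal_mask(n):
    """Autoregressive (GPT-style): position i attends only to j <= i."""
    return [[j <= i for j in range(n)] for i in range(n)]

def bidirectional_mask(n):
    """Encoder (XLM-R-style): every position attends to every position."""
    return [[True] * n for _ in range(n)]

print(causal_mask(3))         # lower-triangular: past-only attention
print(bidirectional_mask(3))  # all True: full-context attention
```

Full-context attention is why encoder models excel at classification and tagging but are not suited to left-to-right text generation.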
  • 4
    yolo-world-mirror

    Mirror of Ultralytics YOLO-World model weights for object detection

    yolo-world-mirror is a hosted mirror of the model weights for YOLO-World, a variation of the YOLO (You Only Look Once) object detection architecture, designed and maintained by Ultralytics. This Hugging Face repository by Bingsu provides easy access to the pre-trained weights used in YOLO-World, supporting a range of visual tasks. YOLO-World expands the object detection framework to handle open-vocabulary detection, where the model can detect novel object classes based on textual input descriptions. These weights are compatible with Ultralytics’ tooling and documentation, making it easier for developers to deploy or fine-tune the model. The mirror allows users to work with YOLO-World models through a centralized platform without downloading from alternate sources. It enables flexible integration with Ultralytics’ Python API or CLI tools for real-time and high-performance object detection tasks.
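Open-vocabulary detection works by scoring each detected region's embedding against embeddings of the user-supplied class prompts and assigning the best match. A toy sketch with made-up 3-dimensional embeddings (the real model uses CLIP-style text and image encoders with much higher dimensionality):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_region(region_emb, prompt_embs):
    """Assign the region to the text prompt with the highest cosine similarity."""
    scores = {name: cosine(region_emb, emb) for name, emb in prompt_embs.items()}
    return max(scores, key=scores.get), scores

# Hypothetical embeddings, for illustration only.
prompts = {"red backpack": [1.0, 0.1, 0.0], "traffic cone": [0.0, 1.0, 0.2]}
region = [0.9, 0.2, 0.1]  # embedding of one detected box
label, scores = classify_region(region, prompts)
print(label)  # "red backpack"
```

With the Ultralytics tooling, the equivalent step is supplying custom class prompts to a loaded YOLO-World model (e.g. via its `set_classes` method) before running prediction.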
  • 5
    ⓍTTS-v2

    Multilingual voice cloning TTS model with 6-second sample support

    ⓍTTS-v2 (XTTS-v2) by Coqui is a powerful multilingual text-to-speech model capable of cloning voices from a short 6-second audio sample. It supports 17 languages and enables high-quality voice generation with emotion, style transfer, and cross-language synthesis. The model introduces major improvements over ⓍTTS-v1, including better prosody, stability, and support for Hungarian and Korean. ⓍTTS-v2 allows interpolation between multiple voice references and generates speech at a 24kHz sampling rate. It's ideal for both inference and fine-tuning, with APIs and command-line tools available. The model powers Coqui Studio and the Coqui API, and can be run locally using Python or through Hugging Face Spaces. Licensed under the Coqui Public Model License, it balances open access with responsible use of generative voice technology.
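Interpolating between multiple voice references amounts to blending their speaker conditioning vectors before synthesis. A toy sketch with hypothetical low-dimensional embeddings (ⓍTTS-v2's real conditioning latents are far higher-dimensional and computed from audio):

```python
def interpolate_speakers(embeddings, weights):
    """Blend speaker embeddings by a weighted average.

    embeddings: list of equal-length vectors, one per reference voice.
    weights: blend weights, assumed to sum to 1.
    """
    dim = len(embeddings[0])
    return [
        sum(w * emb[i] for w, emb in zip(weights, embeddings))
        for i in range(dim)
    ]

voice_a = [0.2, 0.8, -0.1]  # hypothetical speaker embedding
voice_b = [0.6, 0.0, 0.3]
blended = interpolate_speakers([voice_a, voice_b], [0.5, 0.5])
print(blended)  # roughly [0.4, 0.4, 0.1]
```

Feeding the blended vector to the decoder in place of a single reference's embedding yields a voice partway between the two references, which is the interpolation behavior the model card describes.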