peleke-1

Learning the Language of Antibodies

We’re proud to introduce peleke-1, a series of fine-tuned protein language models that generate targeted antibody sequences: simply give a model an antigen sequence.

(Oh, and it’s fully open-source and GPL-3 licensed.)
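To make the workflow concrete, here is a minimal sketch of what generation could look like with the Hugging Face transformers library. The checkpoint name, prompt format, and sampling settings are illustrative assumptions, not the published peleke-1 interface.

```python
# Minimal sketch: antigen in, candidate antibody sequence out.
# The model ID and prompt convention below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/peleke-1-14b"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative antigen fragment (first residues of a viral surface protein).
antigen = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHS"

prompt = f"Antigen: {antigen}\nAntibody:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Strip the prompt tokens and print only the generated antibody sequence.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```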

Our Latest Models.

  • Based on Microsoft’s 14B-parameter model, this model handles large antigen inputs despite its relatively small size.

  • Llama 3.3 from Meta is a multilingual, instruction-tuned generative model with 70B parameters. This model generates antibody sequences from instruction-style prompts (see the sketch after this list).

    Coming Soon…

  • Qwen3 offers a comprehensive suite of dense and mixture-of-experts (MoE) models, which work well for antibody generation.

    Coming Soon…
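For the instruction-tuned variants, requests would presumably be phrased as chat messages rather than raw completions. A sketch of that modality, where the checkpoint name and message wording are assumptions:

```python
# Sketch: instruction-style request to a chat-tuned antibody generator.
# The checkpoint name and the exact message phrasing are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/peleke-1-llama-70b")  # hypothetical

antigen = "MFVFLVLLPLVSSQCVNLTTRT"  # illustrative antigen fragment

messages = [
    {"role": "system", "content": "You design antibody sequences for a given antigen."},
    {"role": "user", "content": f"Generate a heavy-chain antibody sequence that targets this antigen: {antigen}"},
]

# Render the chat into the model's expected prompt string before generation.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```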

How We Built peleke-1.

Data Curation

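As one illustration of what curation for this task can involve, the sketch below turns a table of paired antigen and antibody sequences into prompt/completion records for supervised fine-tuning. The file name, column names, filters, and prompt format are assumptions, not the actual peleke-1 pipeline.

```python
# Sketch: convert paired antigen/antibody sequences into prompt/completion
# records for supervised fine-tuning. File name, column names, and filters
# are illustrative assumptions, not the actual peleke-1 curation pipeline.
import csv
import json

VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")

def is_clean(seq: str, max_len: int = 2048) -> bool:
    """Keep sequences built from the 20 standard amino acids that fit the context window."""
    return 0 < len(seq) <= max_len and set(seq) <= VALID_AA

with open("antigen_antibody_pairs.csv") as src, open("train.jsonl", "w") as out:
    for row in csv.DictReader(src):
        antigen = row["antigen_seq"].strip().upper()
        antibody = row["antibody_seq"].strip().upper()
        if not (is_clean(antigen) and is_clean(antibody)):
            continue  # drop pairs with non-standard residues or extreme lengths
        record = {"prompt": f"Antigen: {antigen}\nAntibody:", "completion": f" {antibody}"}
        out.write(json.dumps(record) + "\n")
```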

Model Tuning

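A sketch of how tuning could be set up, using LoRA adapters via peft and TRL's SFTTrainer; the base checkpoint, hyperparameters, and file names are assumptions rather than peleke-1's actual recipe.

```python
# Sketch: LoRA fine-tuning of a causal LM on the curated pairs using TRL.
# The base checkpoint, hyperparameters, and file names are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")
# Collapse each prompt/completion record into a single training string.
dataset = dataset.map(lambda ex: {"text": ex["prompt"] + ex["completion"]})

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="peleke-1-lora",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model="microsoft/phi-4",  # stand-in base model; the post does not name the exact checkpoint
    train_dataset=dataset,
    args=training_args,
    peft_config=peft_config,
)
trainer.train()
```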

Output Evaluation

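As one example of the kind of sanity checks generated sequences can be run through, the sketch below screens for a valid amino-acid alphabet, a plausible length, conserved cysteines, and novelty relative to the training data. The thresholds and checks are assumptions, not the evaluation protocol used for peleke-1.

```python
# Sketch: lightweight sanity checks for generated antibody sequences.
# Thresholds and checks are illustrative assumptions, not peleke-1's
# actual evaluation protocol.
VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")

def evaluate_sequence(seq: str, training_set: set) -> dict:
    """Return simple pass/fail diagnostics for one generated sequence."""
    seq = seq.strip().upper()
    return {
        "valid_alphabet": bool(seq) and set(seq) <= VALID_AA,
        "plausible_length": 90 <= len(seq) <= 140,  # rough heavy-chain variable-domain range
        "has_cysteine_pair": seq.count("C") >= 2,   # conserved intradomain disulfide bond
        "novel": seq not in training_set,           # not copied verbatim from training data
    }

# Illustrative fragments only; real outputs would be full variable domains.
for seq in ["EVQLVESGGGLVQPGGSLRLSCAASGFTFS", "QVQLQX!"]:
    print(seq[:20], evaluate_sequence(seq, training_set=set()))
```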