Prompt_Eval_LLM_Judge
This repository focuses on prompt design and LLM-as-judge evaluation, providing tools and resources for a range of prompting techniques and evaluation methods.
- contrastive-cot-prompting
- cot-prompting
- few-shot-prompting
- llm-judge
- llms
- one-shot-prompting
- prompt-engineering
- role-playing-prompting
- self-consistency-prompting
- trec-rag-2024
- zero-shot-prompting
Use contrastive CoT prompting to improve model reasoning by pairing a correct chain of thought with an incorrect one in the prompt.
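As an illustration of the technique itself (not this package's API), the sketch below builds a contrastive CoT prompt by hand: one worked example with correct reasoning, the same example with flawed reasoning, then the target question. All questions and rationales are made-up placeholders.

```python
# Contrastive CoT sketch: show the model one valid and one flawed
# chain of thought before asking the target question.
# All strings below are illustrative placeholders.

positive_demo = (
    "Q: A shop sells pens at 3 for $2. How much do 9 pens cost?\n"
    "Correct reasoning: 9 pens is 3 groups of 3 pens, and each group "
    "costs $2, so the total is 3 * $2 = $6.\n"
    "A: $6"
)

negative_demo = (
    "Q: A shop sells pens at 3 for $2. How much do 9 pens cost?\n"
    "Incorrect reasoning: 9 pens times $2 per pen is $18.\n"
    "A: $18  (wrong: $2 is the price of 3 pens, not 1)"
)

target_question = "Q: Apples cost 4 for $3. How much do 12 apples cost?"

contrastive_cot_prompt = (
    "Here is an example solved correctly and the same example solved "
    "incorrectly. Follow the correct style of reasoning.\n\n"
    f"{positive_demo}\n\n{negative_demo}\n\n"
    f"{target_question}\nLet's think step by step."
)

print(contrastive_cot_prompt)
```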
Generate role-playing prompts that assign the model a character and scenario, to better understand and evaluate its outputs.
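A minimal hand-rolled sketch of role-playing prompting, independent of the repository's `RolePlayingPrompt` class; the persona, scenario, and task strings are illustrative assumptions.

```python
# Role-playing sketch: assign the model a persona and a scenario before
# the task. The persona, scenario, and task strings are placeholders.

def build_role_playing_prompt(character: str, scenario: str, task: str) -> str:
    """Compose a simple role-playing prompt (not the package's RolePlayingPrompt API)."""
    return (
        f"You are {character}. {scenario}\n"
        "Stay in character while you answer.\n\n"
        f"Task: {task}"
    )

prompt = build_role_playing_prompt(
    character="a meticulous senior code reviewer",
    scenario="You are reviewing a pull request that changes error handling.",
    task="List the three most important issues to check before approving.",
)
print(prompt)
```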
Apply self-consistency prompting, sampling several reasoning paths for the same question, to evaluate how consistent and reliable a model's responses are.
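One way to realize self-consistency, sketched without any particular model client: sample the same prompt several times and keep the majority answer. The `generate` and `extract_answer` callables are hypothetical stand-ins that you would supply; the canned completions in the usage example replace a real model call.

```python
# Self-consistency sketch: sample several reasoning paths for the same
# prompt and keep the majority final answer.
from collections import Counter
from typing import Callable, List

def self_consistent_answer(prompt: str,
                           generate: Callable[[str], str],
                           extract_answer: Callable[[str], str],
                           n_samples: int = 5) -> str:
    """Return the most common final answer across n_samples generations."""
    answers: List[str] = []
    for _ in range(n_samples):
        completion = generate(prompt)          # one sampled reasoning path
        answers.append(extract_answer(completion))
    most_common, _count = Counter(answers).most_common(1)[0]
    return most_common

# Toy usage with canned completions instead of a real model call.
canned = iter(["... so the answer is 6", "... the answer is 6",
               "... the answer is 7", "... the answer is 6",
               "... the answer is 6"])
answer = self_consistent_answer(
    prompt="Q: 2 + 2 * 2 = ?  Think step by step.",
    generate=lambda p: next(canned),
    extract_answer=lambda text: text.rsplit(" ", 1)[-1],
    n_samples=5,
)
print(answer)  # "6"
```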
Explore few-shot prompting methods to help language models generalize from a handful of in-prompt examples.
Enhance zero-shot performance with carefully worded instructions so language models can perform tasks without examples or task-specific training.
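To make the few-shot versus zero-shot contrast concrete, here is a small sketch that poses the same sentiment task both ways. The reviews, labels, and wording are invented placeholders and do not depend on this package.

```python
# Few-shot vs. zero-shot sketch: the same sentiment task posed with a
# handful of labelled examples and with none. Examples are illustrative.

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It stopped working after a week and support never replied.", "negative"),
]
query = "Setup took five minutes and everything just worked."

few_shot_prompt = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in examples:
    few_shot_prompt += f"Review: {text}\nSentiment: {label}\n\n"
few_shot_prompt += f"Review: {query}\nSentiment:"

zero_shot_prompt = (
    "Classify the sentiment of the following review as positive or negative.\n\n"
    f"Review: {query}\nSentiment:"
)

print(few_shot_prompt)
print(zero_shot_prompt)
```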
- Python 3.6+
- PyTorch
- Transformers
```bash
pip install prompt-eval-llm-judge
```
- Import the necessary modules.
```python
from prompt_eval_llm_judge import CoTPrompt, RolePlayingPrompt
```
- Create prompts using different techniques.
```python
cot_prompt = CoTPrompt("positive", "negative")
role_playing_prompt = RolePlayingPrompt("character name", "scenario")
```
- Evaluate Language Model outputs using the generated prompts.
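The repository's evaluation call is not shown above, so the following is a generic LLM-as-judge sketch rather than the package's API: a judge prompt asks a model to score a candidate answer, and `call_model` is a hypothetical stand-in for whatever model client you use.

```python
# LLM-as-judge sketch (not the package's evaluation API).
# `call_model` is a hypothetical stand-in for your model client;
# replace it with a real call to your provider of choice.

def build_judge_prompt(question: str, answer: str) -> str:
    return (
        "You are an impartial judge. Rate the answer to the question below "
        "on a scale of 1 (poor) to 5 (excellent) and reply with the number only.\n\n"
        f"Question: {question}\nAnswer: {answer}\nScore:"
    )

def judge(question: str, answer: str, call_model) -> int:
    """Ask the judge model for a 1-5 score and parse it."""
    reply = call_model(build_judge_prompt(question, answer))
    return int(reply.strip().split()[0])

# Toy usage with a canned judge reply instead of a real model call.
score = judge(
    question="What is the capital of France?",
    answer="Paris.",
    call_model=lambda prompt: "5",
)
print(score)  # 5
```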
Join our community on Discord to discuss prompt engineering, evaluation techniques, and more!
- Fork the repository
- Create a new branch (`git checkout -b feature`)
- Make your changes
- Commit your changes (`git commit -am 'Add new feature'`)
- Push to the branch (`git push origin feature`)
- Create a new Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.