
πŸš€ Welcome to Prompt_Eval_LLM_Judge

Prompt Evaluation

Repository Name:

Prompt_Eval_LLM_Judge

Description:

This repository focuses on prompt design and LLM-as-judge evaluation, providing tools and resources for a range of prompting techniques and evaluation methods.

Topics:

  • contrastive-cot-prompting
  • cot-prompting
  • few-shot-prompting
  • llm-judge
  • llms
  • one-shot-prompting
  • prompt-engineering
  • role-playing-prompting
  • self-consistency-prompting
  • trec-rag-2024
  • zero-shot-prompting

πŸ“ Download Release v1.0.0

Download Release v1.0.0

(File needs to be launched after download)


🌟 Features

1. Contrastive CoT Prompting

Use contrastive chain-of-thought prompts, which pair correct and flawed reasoning examples, to improve language model performance.
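
The idea can be sketched as plain prompt construction. This is an illustrative example only, not the package's API; the function name and wording are assumptions:

```python
# Illustrative contrastive CoT prompt builder (hypothetical helper, not
# part of prompt-eval-llm-judge): the prompt shows the model one correct
# and one flawed chain of thought so it can reason by contrast.
def build_contrastive_cot_prompt(question, good_example, bad_example):
    """Assemble a contrastive chain-of-thought prompt as plain text."""
    return (
        "Here is a correct reasoning example:\n"
        f"{good_example}\n\n"
        "Here is an incorrect reasoning example:\n"
        f"{bad_example}\n\n"
        "Now answer the following question step by step, "
        "avoiding the mistakes shown above.\n"
        f"Question: {question}"
    )

prompt = build_contrastive_cot_prompt(
    question="If a train travels 60 km in 1.5 hours, what is its average speed?",
    good_example="Q: 10 km in 0.5 h? A: speed = 10 / 0.5 = 20 km/h.",
    bad_example="Q: 10 km in 0.5 h? A: speed = 10 * 0.5 = 5 km/h (wrong: divide, don't multiply).",
)
print(prompt)
```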

2. Role-Playing Prompting

Generate role-playing prompts that assign the model a persona and a scenario, aiding the understanding and evaluation of language model outputs.
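
A minimal sketch of the pattern, with hypothetical names (this is not the package's API):

```python
# Illustrative role-playing prompt builder: assign the model a persona
# and a scenario, then ask it to respond in character.
def build_role_playing_prompt(character, scenario, task):
    return (
        f"You are {character}. {scenario}\n"
        f"Staying in character, {task}"
    )

prompt = build_role_playing_prompt(
    character="a meticulous code reviewer",
    scenario="You are reviewing a pull request that adds a caching layer.",
    task="list the three most important things to check before approving.",
)
print(prompt)
```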

3. Self-Consistency Prompting

Sample multiple reasoning paths and aggregate their answers with self-consistency prompting to evaluate the consistency and reliability of language model responses.
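
The aggregation step of self-consistency is a simple majority vote over sampled answers. A minimal sketch (the function name is illustrative, not the package's API):

```python
from collections import Counter

def self_consistency_vote(answers):
    """Return the most common final answer across sampled reasoning paths."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Five sampled reasoning paths produced these final answers:
samples = ["42", "42", "41", "42", "40"]
print(self_consistency_vote(samples))  # -> 42
```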

4. Few-Shot Prompting

Explore few-shot prompting methods to improve the ability of Language Models to generalize with limited examples.
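
Few-shot prompting simply prepends a handful of labeled examples before the new input so the model can generalize from the pattern. A hypothetical sketch (not the package's API):

```python
# Illustrative few-shot prompt builder: each (input, output) pair becomes
# one "shot"; the query is appended with an empty Output slot to fill.
def build_few_shot_prompt(examples, query):
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

prompt = build_few_shot_prompt(
    examples=[("great movie!", "positive"), ("waste of time", "negative")],
    query="I loved every minute",
)
print(prompt)
```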

5. Zero-Shot Prompting

Use specialized prompting approaches to enhance zero-shot capabilities, enabling language models to perform tasks without task-specific examples or training.
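
In the zero-shot setting no examples are given: a bare instruction, optionally with the well-known "Let's think step by step" trigger, elicits the behavior directly. An illustrative sketch (not the package's API):

```python
# Illustrative zero-shot prompt builder; the cot flag appends the
# zero-shot chain-of-thought trigger phrase.
def build_zero_shot_prompt(task, cot=False):
    prompt = f"Task: {task}\nAnswer:"
    if cot:
        prompt += " Let's think step by step."
    return prompt

print(build_zero_shot_prompt("Classify the sentiment of: 'the plot dragged'"))
print(build_zero_shot_prompt("What is 17 * 24?", cot=True))
```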


πŸš€ Get Started

Prerequisites

  • Python 3.6+
  • PyTorch
  • Transformers

Installation

pip install prompt-eval-llm-judge

Usage

  1. Import the necessary modules:

     from prompt_eval_llm_judge import CoTPrompt, RolePlayingPrompt

  2. Create prompts using different techniques:

     cot_prompt = CoTPrompt("positive", "negative")
     role_playing_prompt = RolePlayingPrompt("character name", "scenario")

  3. Evaluate Language Model outputs using the generated prompts.
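
The evaluation step typically hands the task, the candidate output, and a rubric to a judge model. The judging prompt and scoring scale below are illustrative assumptions, not defined by the package:

```python
# Hedged sketch of an LLM-as-judge prompt: the judge model is asked to
# score a candidate output against a rubric on a numeric scale.
def build_judge_prompt(task, candidate, rubric, scale=(1, 5)):
    lo, hi = scale
    return (
        "You are an impartial judge.\n"
        f"Task given to the model: {task}\n"
        f"Model output: {candidate}\n"
        f"Rubric: {rubric}\n"
        f"Rate the output on a scale of {lo} to {hi} and explain briefly."
    )

judge_prompt = build_judge_prompt(
    task="Summarize the article in one sentence.",
    candidate="The article argues remote work boosts productivity.",
    rubric="Faithfulness to the source and conciseness.",
)
print(judge_prompt)
```

The resulting string would then be sent to the judge model of your choice.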

πŸ“š Resources

Additional Reading

Community

Join our community on Discord to discuss prompt engineering, evaluation techniques, and more!


🀝 Contribution

  1. Fork the repository
  2. Create a new branch (git checkout -b feature)
  3. Make your changes
  4. Commit your changes (git commit -am 'Add new feature')
  5. Push to the branch (git push origin feature)
  6. Create a new Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.


Thank you for visiting Prompt_Eval_LLM_Judge! 🌟
